Create LVM
1. pvcreate /dev/sdb
2. vgcreate test_vg /dev/sdb
3. lvcreate -n test_dev -l $(vgdisplay test_vg | awk '{if($1=="Free" && $2=="PE" && $4=="Size") print $5}' ) test_vg
4. mkfs.xfs -f /dev/test_vg/test_dev
5. mkdir /test_dir
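A note on step 3: the awk one-liner pulls the free physical-extent count out of the "Free  PE / Size" line of vgdisplay, so `-l` sizes the LV to consume the entire volume group. A minimal sketch of the extraction, run against a sample line (values taken from this setup) so it is safe to try anywhere:

```shell
# Sample "Free PE / Size" line as printed by vgdisplay (from this article).
sample='  Free  PE / Size       75816 / 296.16 GiB'

# Same awk filter as step 3: match the line by its first, second and
# fourth fields, then print the fifth (the free-extent count).
free_pe=$(echo "$sample" | awk '{if($1=="Free" && $2=="PE" && $4=="Size") print $5}')
echo "$free_pe"   # prints 75816
```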
Configure HA
cat /etc/ha.d/ha.cf
logfacility daemon # Log to syslog as facility daemon
node test1 # List our cluster members
node test2
keepalive 1 # Send one heartbeat each second
warntime 5 # Warn after 5 seconds
deadtime 15 # Declare nodes dead after 15 seconds
initdead 120 # Deadtime used while the cluster first boots
udpport 6794 # UDP port (the default 694 didn't work)
bcast eth0 # Broadcast heartbeats on the eth0 interface
ping 172.16.1.1 # Ping our switches to monitor ethernet connectivity
auto_failback off # Don't fail back to primary mgmt server automatically
respawn hacluster /usr/lib64/heartbeat/ipfail # Failover on network failures
use_logd yes
cat /etc/ha.d/haresources
test1 LVM::test_vg \
Filesystem::/dev/test_vg/test_dev::/test_dir::xfs::ikeep \
172.16.1.3
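One point worth spelling out about the haresources line: everything on a single (backslash-continued) line forms one resource group owned by the node named first, started left-to-right and stopped in reverse order, and a bare IP address is handled by Heartbeat's IPaddr resource agent. Annotated for reading only (the inline notes are not valid haresources syntax):

```
test1                                                        preferred node
  LVM::test_vg                                               1. activate the VG
  Filesystem::/dev/test_vg/test_dev::/test_dir::xfs::ikeep   2. mount it on /test_dir
  172.16.1.3                                                 3. bring up the cluster IP
```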
Start Heartbeat
# /etc/init.d/heartbeat start
Starting High-Availability services: LVM[5825]: INFO: Running OK
ResourceManager[5801]: CRITICAL: Resource LVM::test_vg is active, and should not be!
ResourceManager[5801]: CRITICAL: Non-idle resources can affect data integrity!
ResourceManager[5801]: info: If you don't know what this means, then get help!
ResourceManager[5801]: info: Read the docs and/or source to /usr/share/heartbeat/ResourceManager for more details.
CRITICAL: Resource LVM::test_vg is active, and should not be!
CRITICAL: Non-idle resources can affect data integrity!
info: If you don't know what this means, then get help!
info: Read the docs and/or the source to /usr/share/heartbeat/ResourceManager for more details.
ResourceManager[5801]: CRITICAL: Non-idle resources will affect resource takeback!
ResourceManager[5801]: CRITICAL: Non-idle resources may affect data integrity!
Done.
The cause of this error is that the LV status is still "available" (active) when Heartbeat starts; the LVM resource agent expects to activate the volume group itself. The fix is to deactivate the logical volume first.
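If you script this fix, you can first check whether the LV is actually active by parsing the lv_attr field, whose fifth character is `a` for an active LV. A minimal sketch; the attribute string here is sample text, not live output:

```shell
# Sample attribute string as printed by:
#   lvs --noheadings -o lv_attr test_vg/test_dev
attr="-wi-a-----"

# Fifth character of lv_attr is 'a' when the LV is active.
if [ "$(echo "$attr" | cut -c5)" = "a" ]; then
    echo "LV active - run: lvchange -an test_vg/test_dev"
else
    echo "LV already inactive"
fi
```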
# lvdisplay
--- Logical volume ---
LV Path /dev/test_vg/test_dev
LV Name test_dev
VG Name test_vg
LV UUID Bw36wU-jgZy-JMRp-g3vg-xgTZ-r5kW-QPmq0f
LV Write Access read/write
LV Creation host, time hatest, 2014-05-07 18:12:16 +0000
LV Status available
# open 1
LV Size 296.16 GiB
Current LE 75816
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
# lvchange -an test_vg/test_dev
# lvdisplay
--- Logical volume ---
LV Path /dev/test_vg/test_dev
LV Name test_dev
VG Name test_vg
LV UUID Bw36wU-jgZy-JMRp-g3vg-xgTZ-r5kW-QPmq0f
LV Write Access read/write
LV Creation host, time hatest, 2014-05-07 18:12:16 +0000
LV Status NOT available
# open 1
LV Size 296.16 GiB
Current LE 75816
Segments 1
Allocation inherit
Read ahead sectors auto
- currently set to 256
Block device 253:0
Restart HA
# /etc/init.d/heartbeat start
Starting High-Availability services: LVM[16033]: INFO: LVM Volume test_vg is offline
LVM[16025]: INFO: Resource is stopped
Done.
# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda3 122G 9.2G 107G 8% /
tmpfs 36G 0 36G 0% /dev/shm
/dev/sda1 485M 34M 426M 8% /boot
/dev/mapper/test_vg-test_dev
292G 191M 277G 1% /test_dir
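As a final sanity check, the mount can be confirmed programmatically instead of eyeballing df. A small sketch against the sample line above (a live check would pipe real `df -h` output through the same grep):

```shell
# Sample df line for the HA filesystem (from the output above).
df_line='/dev/mapper/test_vg-test_dev  292G  191M  277G  1% /test_dir'

# Succeeds only if the line ends with the expected mountpoint.
if echo "$df_line" | grep -q ' /test_dir$'; then
    echo "/test_dir is mounted"
fi
```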