The Dell™ EqualLogic Host Integration Tools for Linux (HIT/Linux) includes Auto-Snapshot Manager/Linux Edition (ASM/LE) functionality, which enables the creation of online, file-system-consistent copies of data stored on one or more PS Series groups. The resulting collection of copied data is called a Smart Copy. ASM/LE creates consistent Smart Copies using the built-in snapshot, clone, and replication facilities of PS Series arrays.
In an active/passive cluster configuration, only one node has the filesystem mounted at any given time, so a Smart Copy must be scheduled to run on the node that currently has the filesystem mounted. Note: When ASM/LE is used with a GFS2 filesystem, the Smart Copy can be scheduled on either node and will run, because filesystem ownership is shared.
To deploy ASM/LE in a two-node active/passive RHEL cluster using the EXT4 filesystem, the following criteria must be met:
- HIT/Linux installed on both nodes of the cluster.
- ASM/LE configured on both nodes of the cluster.
- When creating volumes on the EqualLogic arrays, make sure they are configured for multiple iSCSI initiators.
- As a best practice, create LVM volumes on the active node (optional; see the setup sketch after this list).
- Create mount points on both nodes.
- Run iSCSI discovery (iscsiadm -m discoverydb) on both nodes.
- The iscsid service must be running on both nodes.
- A cluster resource entry in cluster.conf for each ext4 filesystem used with ASM/LE (see Example 1.0).
- Optionally, wrap the asmcli create smart-copy commands in a script. The script (runasmcli.sh) is an example of one way to take Smart Copies on cluster resources; it can be run manually or from a cron job (see Example 3.0). To run runasmcli.sh manually: runasmcli.sh serv:/mnt1 serv2:/mnt2, where serv and serv2 are user-defined service names in cluster.conf tied to the cluster resources, and /mnt1 and /mnt2 are the ext4 filesystems associated with serv and serv2.
- Optionally, a script can be created to check whether a Smart Copy (asmcli) is in the process of being created when a cluster node stops or during a cluster failover. This script ensures that the filesystems are in a clean state (not frozen) when they are unmounted. In Example 2.0, the script (checkasm.sh) checks whether asmcli is running on a node; if the active cluster node is stopping, it waits 15 seconds, long enough for the filesystem freeze/thaw to complete. This script should be configured as a cluster resource in cluster.conf (see Example 1.0).
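The following is a minimal sketch of the volume preparation steps listed above, assuming placeholder names (a group portal of 10.10.5.10, a device /dev/sdb, and the vgcluster/ap1 volume used in Example 1.0); adjust these for your environment. Example 1.0 then shows the corresponding cluster.conf resource and service entries.

# Discover and log in to the PS Series group volumes (run on both nodes).
iscsiadm -m discoverydb -t sendtargets -p 10.10.5.10:3260 --discover
iscsiadm -m node -p 10.10.5.10:3260 --login

# Make sure iscsid starts at boot and is running (run on both nodes).
chkconfig iscsid on
service iscsid start

# Optionally create the LVM volume and ext4 filesystem on the active node only.
pvcreate /dev/sdb
vgcreate vgcluster /dev/sdb
lvcreate -n ap1 -l 100%FREE vgcluster
mkfs.ext4 /dev/vgcluster/ap1

# Create the mount point on both nodes; the cluster mounts it on the active node.
mkdir -p /mnt/ap1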
<resources>
    <lvm name="lvm" vg_name="vgcluster" lv_name="ap1" />
    <lvm name="lvm2" vg_name="vgcluster2" lv_name="ap2" />
    <fs name="FS" device="/dev/vgcluster/ap1" force_fsck="0" force_unmount="1" fsid="64050" fstype="ext4" mountpoint="/mnt/ap1" options="" self_fence="0" />
    <fs name="FS2" device="/dev/vgcluster2/ap2" force_fsck="0" force_unmount="1" fsid="64051" fstype="ext4" mountpoint="/mnt/ap2" options="" self_fence="0" />
    <script file="/root/checkasm.sh" name="asmcli_script" />
</resources>
<service autostart="1" domain="FD" name="serv" recovery="relocate">
    <script ref="asmcli_script" />
    <lvm ref="lvm" />
    <fs ref="FS" />
</service>
<service autostart="1" domain="FD" name="serv2" recovery="relocate">
    <script ref="asmcli_script" />
    <lvm ref="lvm2" />
    <fs ref="FS2" />
</service>
</rm>
</cluster>
Example 1.0
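A minimal sketch of the checkasm.sh script described above: the 15-second wait and its purpose (letting an in-progress asmcli freeze/thaw finish before the filesystem resource is unmounted) come from the description, while the pgrep process check and the LSB-style start/stop/status handling expected of a cluster script resource are assumptions.

#!/bin/bash
# checkasm.sh - delay a cluster service stop while asmcli is still
# creating a Smart Copy, so the filesystem is thawed before unmount.

case "$1" in
    stop)
        # If asmcli is running, wait 15 seconds; this is long enough
        # for the filesystem freeze/thaw to complete.
        if pgrep -x asmcli > /dev/null; then
            sleep 15
        fi
        ;;
    start|status)
        # Nothing to do when the service starts or is health-checked.
        ;;
esac
exit 0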
Example 2.0
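A minimal sketch of runasmcli.sh and a crontab entry that invokes it, assuming the service names and mount points from Example 1.0. The check against /proc/mounts (so a copy is taken only on the node that currently has the filesystem mounted) and the asmcli --source option are assumptions; substitute the asmcli syntax documented for your HIT/Linux release.

#!/bin/bash
# runasmcli.sh - take a Smart Copy for each service:mountpoint argument,
# but only on the node that currently has the filesystem mounted.
# Usage: runasmcli.sh serv:/mnt/ap1 serv2:/mnt/ap2

for arg in "$@"; do
    svc=${arg%%:*}      # cluster service name from cluster.conf
    mnt=${arg#*:}       # ext4 mount point tied to that service
    if grep -q " $mnt " /proc/mounts; then
        # Hypothetical asmcli invocation; adjust to your release's syntax.
        asmcli create smart-copy --source "$mnt"
    else
        echo "$svc ($mnt) is not active on this node; skipping."
    fi
done

# Crontab entry on both nodes: take Smart Copies nightly at 23:30.
# Only the node that owns each filesystem actually creates a copy.
30 23 * * * /root/runasmcli.sh serv:/mnt/ap1 serv2:/mnt/ap2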
Example 3.0