Create a shared disk for OCFS2:
Installing ocfs2 using yum:
# yum install ocfs*
Add the nodes to /etc/hosts if they are not already there:
12.34.56.100 culbor1
12.34.56.101 culbor2
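These entries can be added without duplicating existing ones with a small sketch (the add_host helper and the hosts.new working copy are this guide's own, not a standard tool; the IPs and names match this example):

```shell
# Sketch: build the new hosts file as a copy first, then install it
# once it looks right. add_host only appends a node that is not
# already listed.
HOSTS_FILE="hosts.new"
cp /etc/hosts "$HOSTS_FILE"

add_host() {
    ip="$1"; name="$2"
    # -w matches the hostname as a whole word, not as a substring
    grep -qw "$name" "$HOSTS_FILE" || printf '%s %s\n' "$ip" "$name" >> "$HOSTS_FILE"
}

add_host 12.34.56.100 culbor1
add_host 12.34.56.101 culbor2
# After reviewing hosts.new: cp hosts.new /etc/hosts
```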
Use the following command to see your IP
# ifconfig
Configure /etc/ocfs2/cluster.conf using o2cb
# o2cb add-cluster cbcluster
# o2cb add-node cbcluster culbor1
# o2cb add-node cbcluster culbor2
Example of cluster.conf afterwards:
node:
	name = culbor1
	cluster = cbcluster
	number = 0
	ip_address = 12.34.56.100
	ip_port = 7777

node:
	name = culbor2
	cluster = cbcluster
	number = 1
	ip_address = 12.34.56.101
	ip_port = 7777

cluster:
	name = cbcluster
	heartbeat_mode = local
	node_count = 2
Note: the node name must be identical to the hostname, and each parameter line must be indented with a single tab.
Copy the cluster.conf to all nodes
# scp -pr /etc/ocfs2/ culbor2:/etc/
Configure the timeout settings with
# service o2cb configure
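The configure step is interactive; on a typical ocfs2-tools install it walks through prompts like the following (exact wording and defaults vary by version; the cluster name here matches this example):

```
Load O2CB driver on boot (y/n) [n]: y
Cluster stack backing O2CB [o2cb]:
Cluster to start on boot (Enter "none" to clear) [ocfs2]: cbcluster
Specify heartbeat dead threshold (>=7) [31]:
Specify network idle timeout in ms (>=5000) [30000]:
Specify network keepalive delay in ms (>=1000) [2000]:
Specify network reconnect delay in ms (>=2000) [2000]:
```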
Check whether it all worked with
# service o2cb status
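Healthy status output looks roughly like this (the exact driver and mount lines differ between versions):

```
Driver for "configfs": Loaded
Filesystem "configfs": Mounted
Stack glue driver: Loaded
Stack plugin "o2cb": Loaded
Driver for "ocfs2_dlmfs": Loaded
Filesystem "ocfs2_dlmfs": Mounted
Checking O2CB cluster "cbcluster": Online
Checking O2CB heartbeat: Active
```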
Try to force-reload if it didn't
# service o2cb force-reload
Set o2cb to autostart on all nodes
# chkconfig o2cb on
# chkconfig ocfs2 on
Set the following kernel parameters in /etc/sysctl.conf on all nodes (a reboot is required for these settings to take effect)
# vi /etc/sysctl.conf
kernel.panic_on_oops = 1
kernel.panic = 30
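A quick sanity check that both parameters actually made it into the file before rebooting (a sketch; the check_sysctl helper is this guide's own and takes the file as an argument so it can be tried on a test copy):

```shell
# Report any of the required panic parameters missing from a
# sysctl.conf-style file.
check_sysctl() {
    conf="$1"
    for key in kernel.panic_on_oops kernel.panic; do
        # match the key at the start of a line, followed by space or '='
        grep -q "^$key[ =]" "$conf" || echo "missing: $key"
    done
}

check_sysctl /etc/sysctl.conf
```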
Disable SELinux and iptables (warning: not for production environments!)
# chkconfig iptables off
# vi /etc/selinux/config
SELINUX=disabled
Use fdisk to create the partition (only needed on one node)
# fdisk /dev/sdb
Navigate through the menu:
n (new partition)
p (primary)
1 (first partition)
Enter (accept the default first sector)
Enter (accept the default last sector)
Close fdisk with "w" to write the partition table.
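The same keystrokes can be scripted instead of typed (a sketch, shown as a dry run because piping this into fdisk rewrites the partition table on /dev/sdb):

```shell
# One answer per fdisk prompt: new, primary, partition 1, then two
# empty lines to accept the default first/last sectors, then write.
FDISK_KEYS='n
p
1


w
'
# Dry run: show the keystrokes that would be sent.
printf '%s' "$FDISK_KEYS"
# To run for real (destructive!): printf '%s' "$FDISK_KEYS" | fdisk /dev/sdb
```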
Use fdisk on all other nodes to confirm the partition table
# fdisk /dev/sdb
Issue "w" to write the (unchanged) partition table; this makes the kernel on that node re-read it.
Create the folder where you want to mount the new disk (on all nodes)
# mkdir /u02
Format the disk with mkfs (only needed on one node). You can pass additional mkfs.ocfs2 options if you like, such as a volume label with -L.
# mkfs.ocfs2 /dev/sdb1
Edit /etc/fstab and add the following line (on all nodes). The _netdev option delays the mount until networking (and thus the cluster stack) is up, and the fsck pass is set to 0 because a clustered filesystem should not be checked at boot.
/dev/sdb1 /u02 ocfs2 _netdev,defaults 0 0
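A quick check that every ocfs2 entry in fstab has the expected six fields (a sketch; the check_fstab helper is this guide's own and takes the file as an argument so it can be tried on a copy):

```shell
# Print any ocfs2 fstab entries that do not have exactly six fields.
check_fstab() {
    awk '$3 == "ocfs2" && NF != 6 { print "bad entry: " $0 }' "$1"
}

check_fstab /etc/fstab
```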
Mount the new disk (on all nodes)
# mount /u02
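A small check, run on each node, that the volume really is mounted (a sketch; the check_mount helper and its /proc/mounts parsing are this guide's own, not part of ocfs2-tools):

```shell
# Report whether a given mount point appears in /proc/mounts.
check_mount() {
    mp="$1"
    if grep -q " $mp " /proc/mounts; then
        echo "$mp is mounted"
    else
        echo "$mp is NOT mounted"
    fi
}

check_mount /u02
```

ocfs2-tools also ships mounted.ocfs2, which can list which cluster nodes have an OCFS2 volume mounted.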