I’ve found a lot of articles online about how to set up simple two-node ‘example’ clusters using Corosync and Pacemaker, but very little documentation about how to go beyond that, so when I finally got it working, I thought I’d share. This article assumes all of the nodes are on the same subnet; going beyond that would require routing rules and would introduce single points of failure, an issue I will address in a future article. It also assumes the virtual IP you are using is on that same subnet. This setup is active/passive, meaning that only one of the nodes claims the virtual IP at a time. When that node goes down, the others decide which of them should claim it next. There is a very brief outage between the time a node goes down and the time the virtual IP is reassigned to another node; this can be reduced by tweaking timeout settings, but that’s an article for another day. Three of the four nodes can go down and the virtual IP will still be accessible on the fourth.
To differentiate between commands that should be run on all four nodes and commands that should only be run on one of them, I will precede the commands with ‘%%%%’ and ‘%’ respectively.
For more information about Pacemaker and Corosync, I recommend the free Clusters from Scratch book, available here: http://clusterlabs.org/doc/en-US/Pacemaker/1.1/html/Clusters_from_Scratch/
%%%% yum install pacemaker pcs
%%%% systemctl start pcsd
%%%% systemctl enable pcsd
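If you want to confirm pcsd came up properly on each node before moving on, a quick sanity check (pcsd listens on TCP port 2224 by default) looks like this:
%%%% systemctl status pcsd # should report active (running)
%%%% ss -tlnp | grep 2224 # the pcsd daemon should be listening on this port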
This will create a hacluster user, which we need to set a password for, as it will be used to authenticate the nodes in the cluster.
%%%% passwd hacluster
Replace the placeholders with the actual IPs of the nodes, and enter the credentials for the hacluster user when prompted.
% pcs cluster auth NODE1_IP_ADDRESS NODE2_IP_ADDRESS NODE3_IP_ADDRESS NODE4_IP_ADDRESS
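If authentication succeeds, pcs should report each node as authorized, with output along these lines:
NODE1_IP_ADDRESS: Authorized
NODE2_IP_ADDRESS: Authorized
NODE3_IP_ADDRESS: Authorized
NODE4_IP_ADDRESS: Authorized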
On one of the nodes, run a setup command that will create a basic config file (/etc/corosync/corosync.conf) on all four nodes.
Replace CLUSTER_NAME with any value.
% pcs cluster setup --name CLUSTER_NAME NODE1_IP_ADDRESS NODE2_IP_ADDRESS NODE3_IP_ADDRESS NODE4_IP_ADDRESS
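You can confirm the setup command did its job by checking that the generated file now exists on every node:
%%%% cat /etc/corosync/corosync.conf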
On each of the nodes, you will need to rewrite the /etc/corosync/corosync.conf file and add the needed configuration, following the example file below.
totem {
    version: 2
    secauth: off
    cluster_name: CLUSTER_NAME # Use the same value you gave to 'pcs cluster setup'
    transport: udpu
    interface {
        ringnumber: 0
        bindnetaddr: MY_IP_ADDRESS # Replace this with the IP address of the local machine
        broadcast: yes
        mcastport: 5405
    }
}

nodelist {
    # Replace all four IP addresses here
    node {
        ring0_addr: NODE1_IP_ADDRESS
        nodeid: 1
    }
    node {
        ring0_addr: NODE2_IP_ADDRESS
        nodeid: 2
    }
    node {
        ring0_addr: NODE3_IP_ADDRESS
        nodeid: 3
    }
    node {
        ring0_addr: NODE4_IP_ADDRESS
        nodeid: 4
    }
}

quorum {
    provider: corosync_votequorum
    wait_for_all: 1
}

logging {
    to_logfile: yes
    logfile: /var/log/cluster/corosync.log
    to_syslog: yes
}
% pcs cluster start --all
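Once the cluster is started, it’s worth confirming that all four nodes have joined and that quorum has been reached. corosync-quorumtool ships with corosync and summarizes both:
% corosync-quorumtool -s # shows total votes, the quorum threshold, and the member list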
This assumes that VIRTUAL_IP is on the same subnet as the other four IP addresses. CIDR_SUBNET_MASK should be the CIDR notation form of the same subnet mask used by the other nodes (so if your subnet mask is 255.255.255.0, CIDR_SUBNET_MASK is 24).
% pcs resource create Cluster_VIP ocf:heartbeat:IPaddr2 ip=VIRTUAL_IP cidr_netmask=CIDR_SUBNET_MASK op monitor interval=20s
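One caveat about the ‘three of the four nodes can go down’ behavior promised in the intro: with corosync_votequorum, a four-node cluster needs three votes for quorum, and Pacemaker’s default no-quorum-policy stops all resources when quorum is lost, so the virtual IP would normally disappear once two nodes are down. One way to get the behavior described above (a sketch; it trades safety for availability, since this setup has no fencing configured) is to tell Pacemaker to keep running resources even without quorum:
% pcs property set no-quorum-policy=ignore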
Now we can check the status of the cluster using:
% pcs status
You should now be able to ping your virtual IP from any of the nodes.
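To see which node currently holds the virtual IP, and to watch a failover happen, something like the following works (the interface that appears will be whichever one matched your subnet):
% ip addr show # on each node; the active node lists VIRTUAL_IP as an extra address
% pcs cluster stop # on the active node, to simulate a failure
% pcs status # on a surviving node; Cluster_VIP should be reassigned within seconds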