Example configurations

This page was created in the hope that it will guide you through your first configurations with Pacemaker.

Basic information

On this page we describe how to configure Pacemaker with the crm shell. Below every configuration you will find a link to the actual XML (cib.xml) generated by those commands.

Enter configuration mode

Start the crm shell

[root@web1 ~]# crm

Create a new shadow copy of the current configuration named test-conf

crm(live)# cib new test-conf

Enter the configuration menu

crm(live)# cib use test-conf
crm(test-conf)# configure
crm(test-conf)configure#

See current configuration

See the crm configuration

This will show you the base CRM directives configured in your currently running Heartbeat cluster

crm(test-conf)configure# show
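
For example, on a freshly set-up two-node cluster the output might look something like this (purely illustrative; node names and cluster properties will differ on your cluster):

node web1
node web2
property $id="cib-bootstrap-options" \
    stonith-enabled="false"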

See the cib.xml

This will show you the cib.xml contents currently in use on your running Heartbeat cluster

crm(test-conf)configure# show xml
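
For the same illustrative two-node cluster, the XML might look roughly like this (a sketch only; ids and attributes are generated and vary between Pacemaker versions):

<!-- sketch: ids and attributes vary by Pacemaker version -->
<cib admin_epoch="0" epoch="4" num_updates="0">
  <configuration>
    <crm_config>
      <cluster_property_set id="cib-bootstrap-options">
        <nvpair id="cib-bootstrap-options-stonith-enabled" name="stonith-enabled" value="false"/>
      </cluster_property_set>
    </crm_config>
    <nodes>
      <node id="web1" uname="web1" type="normal"/>
      <node id="web2" uname="web2" type="normal"/>
    </nodes>
    <resources/>
    <constraints/>
  </configuration>
</cib>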

Commit changes

Verify that your changes will not break the current setup; if verify prints nothing, the configuration is valid, otherwise it lists the problems it found

crm(test-conf)configure# verify

Exit configuration mode

crm(test-conf)configure# end

Commit the changes you have made in test-conf

crm(live)# cib commit test-conf
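
Alternatively, if verification failed or you simply changed your mind, you can throw the shadow copy away instead of committing it:

crm(live)# cib delete test-conf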

Exit from the CRM CLI interface

crm(live)# quit

In all examples

  1. you should first create a shadow cib
  2. you will have to commit changes made in the shadow cib
  3. our shared IP is: 85.9.12.3
  4. our default gateway IP is: 85.9.12.100
  5. machine 1 has the hostname jaba.failover.net
  6. machine 2 has the hostname joda.failover.net
  7. stonith is disabled
  8. you have configured both nodes jaba and joda in the cib.xml (if not, please see the XML examples)

Failover IP

Additional assumptions for this example:

  • we monitor the IP every 10 seconds

Here we create a resource that uses IPaddr (an OCF script provided by Heartbeat). We give this resource one parameter (ip) with the value 85.9.12.3, and one operation (monitor) with one parameter (interval) set to 10s

crm(test-conf)configure# primitive failover-ip ocf:heartbeat:IPaddr params ip=85.9.12.3 op monitor interval=10s
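
For reference, the XML this generates in the cib looks roughly like this (the ids are generated by the shell and differ between versions, so treat this as a sketch):

<primitive id="failover-ip" class="ocf" provider="heartbeat" type="IPaddr">
  <!-- the ip parameter -->
  <instance_attributes id="failover-ip-instance_attributes">
    <nvpair id="failover-ip-instance_attributes-ip" name="ip" value="85.9.12.3"/>
  </instance_attributes>
  <!-- the monitor operation -->
  <operations>
    <op id="failover-ip-monitor-10s" name="monitor" interval="10s"/>
  </operations>
</primitive>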

Failover IP + One service

Here we assume that:

  • the service we migrate is Apache
  • we monitor the IP every 10 seconds
  • we monitor the service (Apache) every 15 seconds

First we create the failover-ip resource exactly as in the previous example: IPaddr (an OCF script provided by Heartbeat), with the ip parameter set to 85.9.12.3 and a monitor operation every 10s

crm(test-conf)configure# primitive failover-ip ocf:heartbeat:IPaddr params ip=85.9.12.3 op monitor interval=10s

Then we create another resource that uses apache (an LSB init script, default location /etc/init.d/apache). We give this resource one operation (monitor) with one parameter (interval) set to 15s

crm(test-conf)configure# primitive failover-apache lsb::apache op monitor interval=15s
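
Note that nothing so far ties these two resources to the same node or starts them in a particular order. The next example solves this with a group; if you prefer to keep them as separate primitives, a sketch with explicit colocation and order constraints (the constraint names here are our own) would be:

crm(test-conf)configure# colocation apache-with-ip inf: failover-apache failover-ip
crm(test-conf)configure# order ip-before-apache inf: failover-ip failover-apache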


Failover IP + One service in a Group

Here we assume that:

  • the service we migrate is Apache
  • we monitor the IP every 10 seconds
  • we monitor the service (Apache) every 15 seconds
  • we have both the IP and the service in a group called my_web_cluster

crm(test-conf)configure# primitive failover-ip ocf:heartbeat:IPaddr params ip=85.9.12.3 op monitor interval=10s
crm(test-conf)configure# primitive failover-apache lsb::apache op monitor interval=15s
crm(test-conf)configure# group my_web_cluster failover-ip failover-apache
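
In the XML, the group simply wraps the two primitives (a sketch; instance attributes and operations omitted):

<group id="my_web_cluster">
  <!-- both primitives exactly as defined above -->
  <primitive id="failover-ip" class="ocf" provider="heartbeat" type="IPaddr"/>
  <primitive id="failover-apache" class="lsb" type="apache"/>
</group>

A group implies ordering and colocation: the members run on the same node, start in the listed order, and stop in reverse order.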

Failover IP + One service in a Group running on a connected node

We build on the previous example. On top of that, we want the group to run on a node that has a working network connection to our default gateway. Therefore, we configure pingd and create a location constraint that looks at the pingd attribute representing that network connectivity.

Set up pingd

You do not have to make any changes to /etc/ha.d/ha.cf for this to work:

crm(test-conf)configure# primitive pingd ocf:pacemaker:pingd \
                      params host_list=85.9.12.100 multiplier=100 \
                      op monitor interval=15s timeout=5s
crm(test-conf)configure# clone pingdclone pingd meta globally-unique=false

pingd location constraint

The pingd resource sets the node attribute pingd to the number of reachable hosts from host_list multiplied by multiplier; with our single gateway and multiplier=100, a connected node gets pingd=100 and a disconnected node gets 0 (or no pingd attribute at all). This constraint therefore tells the cluster to only run the group on a node with a working network connection to the default gateway.

crm(test-conf)configure# location my_web_cluster_on_connected_node my_web_cluster \
 rule -inf: not_defined pingd or pingd lte 0
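
If you have several ping targets and would rather prefer the best-connected node instead of outright banning disconnected ones, the rule can use the pingd attribute value itself as the score (again a sketch; the constraint name is our own):

crm(test-conf)configure# location my_web_cluster_prefer_connected my_web_cluster \
 rule pingd: defined pingd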