If you have suggestions for possible FAQ entries, please send them to: andrew@beekhof.net


Why Can't I Create a Wiki Account?

Automatic account creation has been disabled for security and anti-spam reasons. If you'd like an account, please email pacemaker@clusterlabs.org, and we can create one manually for you.

Why was the Project Started?

Pacemaker grew out of the Heartbeat project.

See the project history for more details.

Why is the Project Called Pacemaker?

First of all, the reason it's not called the CRM (for Cluster Resource Manager) is the abundance of terms that are commonly abbreviated to those three letters.

The Pacemaker name came from Kham, a good friend of Pacemaker author Andrew Beekhof's, and was originally used by a Java GUI Beekhof was prototyping in early 2007. The GUI was abandoned, and when it came time to choose a name for this project, Lars suggested it was an even better fit for an independent CRM.

The idea stems from the analogy between the role of this software and that of the little device that keeps the human heart pumping.

Pacemaker monitors the cluster and intervenes when necessary to ensure the smooth operation of the services it provides.

There were a number of other names (and acronyms) tossed around, but suffice it to say Pacemaker was the best of the lot :-)

What is the Project's Relationship with Corosync?

Pacemaker keeps your applications running when they, or the machines they're running on, fail. However, it can't do this without connectivity to the other machines in the cluster - a significant problem in its own right.

Corosync provides Pacemaker with:

  • a mechanism to reliably send messages between nodes
  • notifications when machines appear and disappear
  • a list of machines that are up, consistent throughout the cluster

What is the Project's Relationship with OpenAIS?

Originally Corosync and OpenAIS were the same thing. They have since split into two parts: the core messaging and membership capabilities are now called Corosync, and OpenAIS retained the layer implementing the AIS standard. Pacemaker only needs the Corosync piece in order to function.

Is there any documentation?

Yes, see the Pacemaker documentation set.

Where should I ask questions?

Basic questions can often be answered on the ClusterLabs IRC channel, but sending them to the relevant mailing list is always a good idea so that everyone can benefit from the answer.

Do I need shared storage?

No. We can help manage it if you have some, but Pacemaker itself has no need for shared storage.

Which cluster filesystems does Pacemaker support?

Pacemaker supports the popular OCFS2 and GFS2 filesystems. As you'd expect, you can use them on top of real disks or network block devices like DRBD.
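As an illustration, a cluster filesystem such as GFS2 is typically managed by pairing a cloned DLM resource with a cloned Filesystem resource. The crmsh sketch below is hypothetical - the device path, mount point, and resource names are placeholders:

```
 primitive dlm ocf:pacemaker:controld \
     op monitor interval=60s
 clone dlm-clone dlm meta interleave=true
 primitive shared-fs ocf:heartbeat:Filesystem \
     params device=/dev/vg0/shared directory=/mnt/shared fstype=gfs2
 clone fs-clone shared-fs meta interleave=true
 order fs-after-dlm inf: dlm-clone fs-clone
 colocation fs-with-dlm inf: fs-clone dlm-clone
```

The order and colocation constraints ensure the filesystem is only mounted where (and after) the lock manager is running.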

What kind of applications can I manage with Pacemaker?

Pacemaker is application agnostic, meaning anything that can be scripted can be made highly available - provided the script conforms to one of the supported standards: LSB, OCF, Systemd, or Upstart.
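To give a feel for what "conforms to one of the supported standards" means, the sketch below is a minimal, hypothetical OCF-style agent that tracks its state with a flag file. A real agent lives under /usr/lib/ocf/resource.d/<provider>/, sources the ocf-shellfuncs helpers, and prints XML metadata; the OCF exit codes are hard-coded here to keep the sketch self-contained:

```shell
#!/bin/sh
# Minimal sketch of a hypothetical OCF-style resource agent
# ("dummyish") that tracks its state with a flag file.

OCF_SUCCESS=0
OCF_ERR_UNIMPLEMENTED=3
OCF_NOT_RUNNING=7

STATE_FILE="${HA_RSCTMP:-/tmp}/dummyish.state"

dummyish() {
    case "$1" in
        start)    touch "$STATE_FILE"; return $OCF_SUCCESS ;;
        stop)     rm -f "$STATE_FILE"; return $OCF_SUCCESS ;;
        monitor)
            # Probes use this action, so it must distinguish "running"
            # from "cleanly stopped" even before the first start.
            [ -e "$STATE_FILE" ] && return $OCF_SUCCESS
            return $OCF_NOT_RUNNING ;;
        meta-data) return $OCF_SUCCESS ;;  # a real agent prints XML here
        *)        return $OCF_ERR_UNIMPLEMENTED ;;
    esac
}

# When installed as an agent, the script would end with:
#   dummyish "$1"; exit $?
```

Anything that can implement start, stop, and a truthful monitor in this style can be made highly available.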

Do I need a fencing device?

Yes. Fencing is the only 100% reliable way to ensure the integrity of your data and that applications are only active on one host. Although Pacemaker is technically able to function without fencing, there are good reasons SUSE and Red Hat will not support such a configuration.
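A fencing device is configured like any other resource. A rough crmsh example using an IPMI-based STONITH agent - the hostname, address, and credentials are placeholders, and the right agent and parameters depend on your hardware:

```
 primitive fence-node1 stonith:external/ipmi \
     params hostname=node1 ipaddr=192.168.1.101 userid=admin passwd=secret \
     op monitor interval=60s
```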

Do I need to know XML to configure Pacemaker?

No. Although Pacemaker uses XML as its native configuration format, there exist at least two CLIs and four GUIs that present the configuration in a human-friendly format.
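For instance, configuring a floating IP address never requires touching XML. The equivalent commands in the two common CLIs look roughly like this (names and addresses are placeholders):

```
 # crmsh
 crm configure primitive vip ocf:heartbeat:IPaddr2 \
     params ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s

 # pcs
 pcs resource create vip ocf:heartbeat:IPaddr2 \
     ip=192.168.1.100 cidr_netmask=24 op monitor interval=30s
```

Both tools translate these commands into the underlying XML for you.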

How do I synchronize the cluster configuration?

Any changes to Pacemaker's configuration are automatically replicated to other machines. The configuration is also versioned, so any offline machines will be updated when they return.
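The version counters live on the cib element itself and can be inspected with cibadmin; the output below is illustrative and trimmed:

```
 # cibadmin --query | head -n 1
 <cib admin_epoch="0" epoch="42" num_updates="7" ...>
```

Roughly speaking, epoch is bumped on every configuration change and num_updates on status updates, which is how a returning node knows its copy is stale.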

Should I choose pcs or crmsh?

Arguably the best advice is to use whichever one comes with your distro. That is the one that will be tailored to that environment, receive regular bugfixes, and feature in its documentation.

Of course, for years people have been side-loading all of Pacemaker onto enterprise distros that didn't ship it, so doing the same for just a configuration tool should be easy if your favorite distro does not ship your favorite tool.

What if my question isn't here?

See our help page and let us know!

What Versions of Pacemaker Are Supported?

When seeking assistance, please try to ensure you have one of the versions supported directly by the project. Please refer to the Releases page for further details including the schedule of planned releases.

Supported Branches

Series First Released Latest Version Release Date Next Release Planned
2.1 8 Jun 2021 2.1.1 9 Sep 2021 early 2022

Deprecated Branches

Series Last Release First Released Last Released
2.0 2.0.5 6 Jul 2018 02 Dec 2020
1.1 1.1.23 15 Jan 2010 22 June 2020
1.0 1.0.13 9 Oct 2008 13 Feb 2013
0.7 0.7.3 25 Jun 2008 22 Sep 2008
0.6 0.6.7 16 Jan 2008 15 Dec 2008


How Do I Install Pacemaker?

Installation from source and from pre-built packages is described on the Install page.

Can I use Pacemaker with Corosync 2.x and later?

Yes. This is the only option supported in Pacemaker 2.0.0 and later. See the documentation for details.

Can I use Pacemaker with Heartbeat?

Only with Pacemaker versions less than 2.0.0. See Linux-HA documentation for details.

Can I use Pacemaker with CMAN?

Only with Pacemaker versions greater than or equal to 1.1.5 and less than 2.0.0. See the documentation for details.

Can I use Pacemaker with Corosync 1.x?

Only with Pacemaker versions less than 2.0.0. You will need to configure Corosync to load Pacemaker's custom plugin. See the documentation for details.

Can I Have a Mixed Heartbeat/OpenAIS/Corosync/CMAN Cluster?

No. All nodes in a cluster must run the same membership and messaging stack; mixing stacks within one cluster is not supported.


Where Can I Get the Source Code?

 git clone git://github.com/ClusterLabs/pacemaker.git
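If you are building from source, the usual autotools sequence applies; check the INSTALL file in the tree for the current list of prerequisites (the --prefix and other configure options are up to you):

```
 cd pacemaker
 ./autogen.sh
 ./configure
 make
 sudo make install
```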

Where Can I Get Pre-built Packages?

Most users should be able to install Pacemaker directly from their distribution.

Pacemaker currently ships with Fedora (since 12), Red Hat Enterprise Linux (since 6.0 beta1), openSUSE (since 11.0), Debian (since "Squeeze"), Ubuntu LTS (since 10.04 "Lucid Lynx") and as a key component of the High Availability Extension for SUSE Linux Enterprise Server 11 (available free of charge to existing SLES10 customers).

Users of other distributions should refer to our Install page.

What Do the Prefixes in Changelog Mean?

  • High, Med, Low: These all indicate how much the end-user/admin should care about the change.
  • Dev: These are changes that fix bugs that don't exist in any released version of the project.

The first three break down as follows:

  • High - Preventing a segfault, implementing an important new feature, or major changes to the behavior of a feature
  • Med - Hard-to-trigger bugs, bugs with workarounds, minor functional changes
  • Low - Non-functional changes, formatting or logging changes, changes to test code

How Do I Test My Cluster?

Pacemaker comes with a Cluster Test Suite (CTS for short), which is an integral part of our release testing. Traditionally this has been hard to set up and use; however, a new tool has been written to simplify the process.

It can be found at: http://github.com/ClusterLabs/pacemaker/tree/master/cts/cluster_test

Please give it a try and send feedback via the mailing list.

Resource is Too Active

Pacemaker will try to determine what resources are active on a machine when it starts. To do this, it sends what we call a probe, which uses the monitor operation of your resource agent.

There are two common reasons for seeing this message:

  • Your resource really is active on more than one node
    • Check you are _not_ starting it on boot
    • Did Pacemaker suffer an internal failure? If so, please check the Help:Contents page and report it
  • Your resource doesn't implement the monitor operation correctly
    • Make sure your Resource Agent conforms to the OCF-spec by using the ocf-tester script

You may also want to read the documentation for the multiple-active option which controls what Pacemaker does when it encounters this condition.
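For reference, ocf-tester takes a resource name, any required parameters, and the path to the agent. Something like the following - the agent, parameter, and path here are illustrative:

```
 ocf-tester -n test-ip -o ip=192.168.1.100 \
     /usr/lib/ocf/resource.d/heartbeat/IPaddr2
```

It exercises each mandatory action and reports any deviations from the OCF spec.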

I Killed a Node but the Cluster Didn't Recover

One of the most common reasons for this is the way quorum is calculated for a 2-node cluster. Unlike Heartbeat, OpenAIS doesn't pretend 2-node clusters always have quorum.

In order to have quorum, more than half of the total number of cluster nodes need to be online. Clearly this is not the case when a node failure occurs in a 2-node cluster.

If you want to allow the remaining node to provide all the cluster services, you need to set the no-quorum-policy to ignore.

 crm configure property no-quorum-policy=ignore

This provides the same behavior as Heartbeat; just be sure to set up STONITH to ensure data integrity.
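If you use pcs rather than crmsh, the equivalent command is:

```
 pcs property set no-quorum-policy=ignore
```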

How Do I Upgrade from Older Versions of Heartbeat?

If you plan to continue using the Heartbeat stack (as opposed to Corosync), check out the step-by-step guide on their website.


How Do I Enable the GUI? (Corosync 1)

First you need to install the pacemaker-pygui package. Then you need to find the following lines in corosync.conf

service {
	# Load the Pacemaker Cluster Resource Manager
	name: pacemaker
	ver:  0
}

and add

	use_mgmtd: 1

before the closing brace.

How Do I Enable the GUI? (Heartbeat)

First you need to install the pacemaker-pygui package. Then you need to add the following lines to ha.cf

 apiauth	mgmtd	uid=root
 respawn	root	/usr/lib/heartbeat/mgmtd -v

These used to be implied when crm yes was present, but only when Heartbeat was built with the built-in mgmtd (which it no longer is).

NOTE: People on 64-bit platforms will probably need to replace lib with lib64

Colocation Sets

The sequential option does not refer to ordering. Instead it tells Pacemaker to create a colocation chain between the members of the set.


 colocation myset inf: app1 app2 app3 app4

is the equivalent of

 colocation myset-1 inf: app2 app1
 colocation myset-2 inf: app3 app2
 colocation myset-3 inf: app4 app3

(i.e. app4 -> app3 -> app2 -> app1)

Putting them in brackets sets sequential=false and removes the internal constraints. So

 colocation myset inf: app1 ( app2 app3 app4 )

is actually the equivalent of

 colocation myset-1 inf: app2 app1
 colocation myset-2 inf: app3 app1
 colocation myset-3 inf: app4 app1

(i.e. app2 -> app1, app3 -> app1, app4 -> app1)

The difference has implications when there is a failure. With sequential turned on, a failure in app2 results in app3 and app4 also being restarted. However with sequential turned off, a failure in app2 does not affect app3 or app4.

In both cases, a failure in app1 results in all resources being restarted.
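Under the hood, both forms are stored as a single rsc_colocation constraint containing resource_set elements, with the bracketed form setting sequential="false". A rough, illustrative sketch - the ids are placeholders and the exact set layout crmsh generates may differ:

```
 <rsc_colocation id="myset" score="INFINITY">
   <resource_set id="myset-0" sequential="false">
     <resource_ref id="app2"/>
     <resource_ref id="app3"/>
     <resource_ref id="app4"/>
   </resource_set>
   <resource_set id="myset-1">
     <resource_ref id="app1"/>
   </resource_set>
 </rsc_colocation>
```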