DRBD PgSQL HowTo

This page describes how to set up a PostgreSQL active/passive cluster backed by DRBD.

Set up two Linux servers; any distribution will do. The commands below assume a Red Hat-like distro.

Install PostgreSQL and make sure that automatic startup is disabled for it:

chkconfig postgresql off

Initialize the PostgreSQL database if necessary:

service postgresql initdb

Follow the DRBD HowTo 1.0 instructions to set up a DRBD active/passive cluster with Pacemaker. After that, your CIB should look something like this:

node node1
node node2
primitive drbd0 ocf:linbit:drbd \
        params drbd_resource="drbd0" \
        op monitor interval="60s" role="Master" timeout="10s" \
        op monitor interval="60s" role="Slave" timeout="10s"
ms ms-drbd0 drbd0 \
        meta clone-max="2" clone-node-max="1" master-max="1" master-node-max="1" notify="yes" globally-unique="false" target-role="Started" 
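Before continuing, it is worth checking that DRBD is healthy and that Pacemaker has promoted one node to master. Assuming DRBD 8.x, which exposes its state via /proc/drbd, a quick check looks like this:

cat /proc/drbd    # both nodes should be Connected, with one Primary/Secondary
crm_mon -1        # ms-drbd0 should show one master and one slave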

Next, let's add a shared IP. I use 192.168.1.130 as the shared IP; you should choose your own:

crm configure primitive dbIP ocf:heartbeat:IPaddr params ip="192.168.1.130"
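The IPaddr RA can also take the netmask and interface explicitly if the address cannot be matched to an interface automatically. A sketch with illustrative values (a /24 network on eth0, adjust to your environment):

crm configure primitive dbIP ocf:heartbeat:IPaddr \
    params ip="192.168.1.130" cidr_netmask="24" nic="eth0"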

The next step is adding a file system resource. Here I assume that you have already created an ext3 file system on your shared DRBD device. You should also copy all files from the /var/lib/pgsql/data directory to that new file system, which will be used as shared storage for the database files:

crm configure primitive dbFS ocf:heartbeat:Filesystem \
    params fstype="ext3" directory="/var/lib/pgsql/data" device="/dev/drbd0"
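If you have not prepared the file system yet, a minimal sketch follows; it assumes drbd0 is currently primary on this node and uses /mnt/tmp purely as a temporary mount point:

mkfs.ext3 /dev/drbd0                    # create the file system on the DRBD device
mount /dev/drbd0 /mnt/tmp               # temporary mount point
cp -a /var/lib/pgsql/data/. /mnt/tmp/   # copy database files, preserving ownership
umount /mnt/tmp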

Note on ownership and permissions: to work properly, the /var/lib/pgsql/data directory has to be owned by the PostgreSQL database owner, usually the postgres user, and its permission mask has to be 700. A freshly created file system is owned by root:root, so you will have to mount the drbd0 device on /var/lib/pgsql/data and then change the ownership and permissions. Changing the ownership of the mount point while the file system is mounted makes the change persistent across mounts.
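With the file system mounted on /var/lib/pgsql/data, the fix looks like this (assuming the database owner is the postgres user):

chown postgres:postgres /var/lib/pgsql/data
chmod 700 /var/lib/pgsql/data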

The next step is adding the PostgreSQL resource:

crm configure primitive pgsqlDB ocf:heartbeat:pgsql

Here I used all the default options. To make the setup more secure and reliable you might want to use additional options of the pgsql OCF RA, especially pghost, monitor_user and monitor_password, to configure more sophisticated monitoring of the PostgreSQL database.
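As a sketch, such a configuration might look like the following; pgmonitor and its password are placeholders, and the monitoring user must actually exist in the database for the monitor operation to succeed:

crm configure primitive pgsqlDB ocf:heartbeat:pgsql \
    params monitor_user="pgmonitor" monitor_password="secret" \
    op monitor interval="30s" timeout="30s"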

Now let's combine all our resources into a group so that we can manage them as one:

crm configure group dbGroup dbIP dbFS pgsqlDB

Let's assume that we want to use node1 as the primary node for our cluster:

crm configure location primNode dbGroup 1000: node1

Our group has to be colocated with the DRBD master resource and must start only after DRBD has been promoted:

crm configure colocation dbGroup_on_master inf: dbGroup ms-drbd0:Master
crm configure order drbd0_before_dbGroup inf: ms-drbd0:promote dbGroup:start

Now we can start our group:

crm resource start dbGroup

If you run crm_mon now, its output will look like this:

Online: [ node1 node2 ]

Full list of resources:

 Master/Slave Set: ms-drbd0
     Masters: [ node1 ]
     Slaves: [ node2 ]
 Resource Group: dbGroup
     dbIP       (ocf::heartbeat:IPaddr):        Started node1
     dbFS       (ocf::heartbeat:Filesystem):    Started node1
     pgsqlDB    (ocf::heartbeat:pgsql):         Started node1

Please note that these instructions are very basic. To make your cluster truly reliable you must use some kind of STONITH device to keep your data safe. You will probably also want to use the pingd mechanism to monitor network connectivity on your cluster nodes.
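A minimal pingd sketch, assuming the gateway 192.168.1.1 is a reasonable connectivity reference (the host list and scores are placeholders to adapt):

crm configure primitive pingd ocf:pacemaker:ping \
    params host_list="192.168.1.1" multiplier="100" \
    op monitor interval="15s" timeout="20s"
crm configure clone pingdClone pingd
crm configure location dbGroup_on_connected dbGroup \
    rule -inf: not_defined pingd or pingd lte 0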
