Load Balanced MySQL Replicated Cluster
This page details how to install a 2-node cluster to load-balance and make highly available:
- Multiple Instances of Apache Tomcat
- MySQL with Master-Master circular replication
It also shows how to set up STONITH using HP iLO remote supervisor access.
This is based upon SLES 11 64-bit with the high-availability extension http://www.novell.com/products/highavailability/ .
Background
MySQL can scale out very well for reads using replication, but scaling out writes is more problematic. Generally only one node can accept writes for a given database unless MySQL Cluster is used. The reason is that when data is inserted on separate nodes, auto_increment columns can generate the same value on two different nodes for different rows, causing conflicts when the data is merged.
MySQL circular replication can be used to scale out write nodes, but there are certain considerations to be taken into account. The data on each node will only be as complete as replication is fast: if data is inserted faster than the MySQL slave thread can apply it, each node can be missing data from the other node. Whether this is acceptable depends on your application and data requirements. For example, if you use foreign keys in your database, inserts will fail if the data the foreign key references has not yet been replicated. These issues need to be considered before you decide to employ Master-Master circular replication.
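The auto_increment_increment and auto_increment_offset settings configured later in this howto are what prevent the collision problem just described. A minimal sketch, using the same two-node values as the rest of this howto, of how each node draws from a disjoint ID sequence:
<source lang="bash"># NODE1 /etc/my.cnf                  # NODE2 /etc/my.cnf
#   auto_increment_increment = 2     #   auto_increment_increment = 2
#   auto_increment_offset    = 1     #   auto_increment_offset    = 2
#
# With these settings NODE1 generates IDs 1, 3, 5, ... and NODE2 generates
# 2, 4, 6, ... so concurrent inserts on the two nodes never produce the same key.
mysql -uroot -ppassword -e "SHOW VARIABLES LIKE 'auto_increment%'"</source>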
This howto shows how five Tomcat instances running on five separate IP addresses can be made highly available and load-balanced, demonstrating how additional IP addresses can also be load-balanced using virtual IPs.
IP Addressing Schema
The services will be laid out like so:
NODE1
- 192.168.1.10 NODE1-TOMCAT1 MYSQL1 NODE1
- 192.168.1.11 NODE1-ILO
- 192.168.1.12 NODE1-TOMCAT2
- 192.168.1.13 NODE1-TOMCAT3
- 192.168.1.14 NODE1-TOMCAT4
- 192.168.1.15 NODE1-TOMCAT5
NODE2
- 192.168.1.20 NODE2-TOMCAT1 MYSQL2 NODE2
- 192.168.1.21 NODE2-ILO
- 192.168.1.22 NODE2-TOMCAT2
- 192.168.1.23 NODE2-TOMCAT3
- 192.168.1.24 NODE2-TOMCAT4
- 192.168.1.25 NODE2-TOMCAT5
Load-balanced IPs
- 192.168.1.100 VIP-TOMCAT1 VIP-MYSQL
- 192.168.1.102 VIP-TOMCAT2
- 192.168.1.103 VIP-TOMCAT3
- 192.168.1.104 VIP-TOMCAT4
- 192.168.1.105 VIP-TOMCAT5
Operating System Install
Install SLES 11 64-bit using an ISO image on a virtual machine or physical media on a physical box. Accept all defaults but don't bother adding the add-on product yet. Turn off the firewall so you can SSH to it after the install.
Networking
Set up the network config files like so:
/etc/sysconfig/network/ifcfg-eth0
The main node IP address is configured either during the server install or afterwards using YAST. The extra IP addresses that each Tomcat service runs on are also configured here.
<source lang="bash">BOOTPROTO='static' BROADCAST= ETHTOOL_OPTIONS= IPADDR='192.168.1.10/24' MTU= NAME='NetXtreme II BCM5708 Gigabit Ethernet' # Will be different depending on your hardware. NETWORK= REMOTE_IPADDR= STARTMODE='auto' USERCONTROL='no' IPADDR_VIP2=192.168.1.12 NETMASK_VIP2=255.255.255.255 NETWORK_VIP2=192.168.1.0 BROADCAST_VIP2=2.255.255.255 IPADDR_VIP3=192.168.1.13 NETMASK_VIP3=255.255.255.255 NETWORK_VIP3=192.168.1.0 BROADCAST_VIP3=192.168.1.255 IPADDR_VIP4=192.168.1.14 NETMASK_VIP4=255.255.255.255 NETWORK_VIP4=192.168.1.0 BROADCAST_VIP4=2.255.255.255 IPADDR_VIP5=192.168.1.15 NETMASK_VIP5=255.255.255.255 NETWORK_VIP5=192.168.1.0 BROADCAST_VIP5=2.255.255.255</source>
/etc/sysconfig/network/ifcfg-lo
Virtual IPs have to be configured in the loopback config file. Placing them on lo ensures the interface doesn't respond to ARP requests for them, while still allowing Linux to route the load-balanced traffic correctly locally.
<source lang="bash">IPADDR=127.0.0.1 NETMASK=255.0.0.0 NETWORK=127.0.0.0 BROADCAST=127.255.255.255 IPADDR_2=127.0.0.2/8 STARTMODE=onboot USERCONTROL=no FIREWALL=no IPADDR_VIP1=192.168.1.100 NETMASK_VIP1=255.255.255.255 NETWORK_VIP1=192.168.1.0 BROADCAST_VIP1=2.255.255.255 IPADDR_VIP2=192.168.1.102 NETMASK_VIP2=255.255.255.255 NETWORK_VIP2=192.168.1.0 BROADCAST_VIP2=2.255.255.255 IPADDR_VIP3=192.168.1.103 NETMASK_VIP3=255.255.255.255 NETWORK_VIP3=192.168.1.0 BROADCAST_VIP3=192.168.1.255 IPADDR_VIP4=192.168.1.104 NETMASK_VIP4=255.255.255.255 NETWORK_VIP4=192.168.1.0 BROADCAST_VIP4=2.255.255.255 IPADDR_VIP5=192.168.1.105 NETMASK_VIP5=255.255.255.255 NETWORK_VIP5=192.168.1.0 BROADCAST_VIP5=2.255.255.255</source>
/etc/hosts
It's a good idea to have a record of all hosts and IPs in the local /etc/hosts of each node, so the cluster isn't relying on DNS for name resolution. Add the following entries to /etc/hosts on each node.
<source lang="bash">127.0.0.1 localhost 192.168.1.10 NODE1-TOMCAT1 MYSQL1 NODE1 192.168.1.11 NODE1-ILO 192.168.1.12 NODE1-TOMCAT2 192.168.1.13 NODE1-TOMCAT3 192.168.1.14 NODE1-TOMCAT4 192.168.1.15 NODE1-TOMCAT5 192.168.1.20 NODE2-TOMCAT1 MYSQL2 NODE2 192.168.1.21 NODE2-ILO 192.168.1.22 NODE2-TOMCAT2 192.168.1.23 NODE2-TOMCAT3 192.168.1.24 NODE2-TOMCAT4 192.168.1.25 NODE2-TOMCAT5 192.168.1.100 VIP-TOMCAT1 VIP-MYSQL 192.168.1.102 VIP-TOMCAT2 192.168.1.103 VIP-TOMCAT3 192.168.1.104 VIP-TOMCAT4 192.168.1.105 VIP-TOMCAT5</source>
Cluster Setup
ARP Parameters
To load-balance using ldirectord and LVS we need to restrict ARP using /etc/sysctl.conf. These settings are taken from http://kb.linuxvirtualserver.org/wiki/Using_arp_announce/arp_ignore_to_disable_ARP which describes the effect they have.
<source lang="bash">net.ipv4.conf.all.arp_ignore = 1 net.ipv4.conf.eth0.arp_ignore = 1 net.ipv4.conf.all.arp_announce = 2 net.ipv4.conf.eth0.arp_announce = 2</source>
Add this info to /etc/sysctl.conf and re-read the config:
sysctl -p
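To confirm the new values are active, query them back (expected output shown as comments):
<source lang="bash">sysctl net.ipv4.conf.all.arp_ignore     # net.ipv4.conf.all.arp_ignore = 1
sysctl net.ipv4.conf.eth0.arp_ignore    # net.ipv4.conf.eth0.arp_ignore = 1
sysctl net.ipv4.conf.all.arp_announce   # net.ipv4.conf.all.arp_announce = 2
sysctl net.ipv4.conf.eth0.arp_announce  # net.ipv4.conf.eth0.arp_announce = 2</source>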
Install Software
- Copy the SLES-11 DVD ISO and the SLE-HAE DVD ISO to a folder called /ISO on the server
- Run YAST
- Go to Software Repositories
- Delete any repositories that are set up in there
- Add the SLES-11 ISO and accept the agreement
- Add the SLE-HAE ISO and accept the agreement
- Go to Software management
- Change the filter to Pattern and select High-Availability.
- Accept everything and let it install all dependencies.
After the package install has finished exit out of YAST and run:
zypper install gcc perl-mailtools perl-dbi heartbeat-ldirectord
Accept all dependencies again.
MySQL Install
Install the MySQL server, client and shared packages from your chosen source, e.g. from the MySQL Enterprise RPMs:
rpm -Uvh MySQL-server-advanced-gpl-5.1.38-0.sles11.x86_64.rpm \
         MySQL-client-advanced-gpl-5.1.38-0.sles11.x86_64.rpm \
         MySQL-shared-advanced-gpl-5.1.38-0.sles11.x86_64.rpm
Run mysql_secure_installation and set up appropriately.
Ensure MySQL doesn't start at boot time as the cluster will control MySQL using an OCF resource agent:
chkconfig mysql off
Ldirectord Setup
Missing Perl Socket6
Ldirectord in SLES 11 won't work due to a missing Perl Socket6 module. You can get the source and compile it from http://search.cpan.org/~umemoto/Socket6/ or find an RPM; installing from source is probably the easiest. Download the latest version (currently 0.23), extract it, cd into the extracted directory and run:
./configure
perl Makefile.PL
make
make install
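To confirm the module is now available to the Perl that ldirectord runs under (the version printed will depend on what you downloaded):
<source lang="bash">perl -MSocket6 -e 'print "Socket6 $Socket6::VERSION installed\n"'</source>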
ldirectord.cf
Create /etc/ha.d/ldirectord.cf with the following contents:
<source lang="bash">checktimeout=5 checkinterval=7 autoreload=yes logfile="/var/log/ldirectord.log" quiescent=yes emailalert=your.email@address.com
- A server with a page at the main root of the site that displays "Tomcat1"
virtual=192.168.1.100:80
real=192.168.1.10:80 gate real=192.168.1.20:80 gate service=http request="/" receive="Tomcat1" scheduler=wlc protocol=tcp checktype=negotiate
- Tomcat2 - This web site cannot be checked correctly using negotiate so a TCP connect is used to check it is available.
- + Also, we use the SH scheduler to source hash and 'jail' connections to certain servers depending on their source IP.
- + Useful for SSL connections or when you can't use clustered session management.
virtual=192.168.1.102:80
real=192.168.1.12:80 gate real=192.168.1.22:80 gate service=https scheduler=sh protocol=tcp checktype=connect
- Tomcat3
virtual=192.168.1.103:80
real=192.168.1.13:80 gate real=192.168.1.23:80 gate service=http request="/" receive="Tomcat3" scheduler=wlc protocol=tcp checktype=negotiate
- Tomcat4 HTTP - Running on a non-standard port.
virtual=192.168.1.104:1500
real=192.168.1.14:1500 gate real=192.168.1.24:1500 gate service=http request="/" receive="Tomcat4" scheduler=wlc protocol=tcp checktype=negotiate
- Tomcat5 - Load-balancing the port 80 connector.
virtual=192.168.1.105:80
real=192.168.1.15:80 gate real=192.168.1.25:80 gate service=http request="/" receive="Tomcat5" scheduler=sh protocol=tcp checktype=negotiate
- Tomcat5 - Load-balancing the port 443 connector.
virtual=192.168.1.105:443
real=192.168.1.15:443 gate real=192.168.1.25:443 gate service=https request="/" receive="Active Quote Admin" scheduler=sh protocol=tcp checktype=negotiate
- MySQL - Using the negotiate checktype and the mysql service to ensure the DB service is acccessible.
virtual=192.168.1.100:3306
real=192.168.1.10:3306 gate real=192.168.1.20:3306 gate service=mysql login="check_user" passwd="check_password" database="ldirectord" request="SELECT * from connectioncheck;" scheduler=sh protocol=tcp checktype=negotiate</source>
The comments above explain what each section is doing. For more information, check the ldirectord.cf man page, and see the ipvsadm man page for details of the different schedulers available.
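Once ldirectord is running (it is started by the cluster later in this howto), each virtual= block above should appear in the kernel's LVS table as a virtual service with its two real servers. Because quiescent=yes is set, real servers that fail their check are kept with weight 0 rather than being removed. A quick way to inspect this:
<source lang="bash"># List the current LVS virtual services, real servers and weights.
ipvsadm -L -n
# Follow ldirectord's own view of the service checks.
tail -f /var/log/ldirectord.log</source>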
Resources
Tomcat
In this howto each Tomcat instance uses a CATALINA_BASE of /opt/tomcat1, /opt/tomcat2, etc. and a shared CATALINA_HOME of /opt/tomcat.
Edit the server.xml config file of each Tomcat instance to have a connector for each port for the local IP and a connector for each port for the load-balanced virtual IP. For example tomcat1 listens on port 80 and 443 so will need a connector for both ports, on both the local and virtual IP address:
<source lang="xml"><Connector port="80" maxHttpHeaderSize="8192"
maxThreads="600" minSpareThreads="25" maxSpareThreads="75" address="192.168.1.10" enableLookups="false" redirectPort="443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" /> <Connector port="80" maxHttpHeaderSize="8192" maxThreads="600" minSpareThreads="25" maxSpareThreads="75" address="192.168.1.100" enableLookups="false" redirectPort="443" acceptCount="100" connectionTimeout="20000" disableUploadTimeout="true" /> <Connector port="443" maxHttpHeaderSize="8192" maxThreads="600" minSpareThreads="25" maxSpareThreads="75" address="192.168.1.10" enableLookups="false" disableUploadTimeout="true" acceptCount="200" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="conf/keystore" keystorePass="changeit" keystoreType="jks" /> <Connector port="443" maxHttpHeaderSize="8192" maxThreads="600" minSpareThreads="25" maxSpareThreads="75" address="192.168.1.100" enableLookups="false" disableUploadTimeout="true" acceptCount="200" scheme="https" secure="true" clientAuth="false" sslProtocol="TLS" keystoreFile="conf/keystore" keystorePass="changeit" keystoreType="jks" /></source>
The local IP will always be present on eth0 on each node; the virtual IP will be present either as an additional IP on eth0 or as an additional IP on lo, depending on which node the ldirectord resource is running on.
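A quick way to check that a connector answers on the right addresses and returns the string the ldirectord negotiate check expects (assuming wget is installed and tomcat1 serves a page containing "Tomcat1" at its root):
<source lang="bash"># The instance should answer on its local IP and, on the node currently
# running ldirectord, on the virtual IP as well.
wget -qO- http://192.168.1.10/  | grep Tomcat1
wget -qO- http://192.168.1.100/ | grep Tomcat1</source>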
LSB Init Scripts
OCF resource agents are the best choice when one is available, or if you write your own. However, an LSB init script will often suffice, as long as it responds correctly to the start, stop and status arguments and returns the correct exit codes for the state of the resource. Check whether your init script is LSB-compatible by following the appendix at the end of http://www.clusterlabs.org/wiki/Media:Configuration_Explained.pdf .
A Tomcat OCF resource agent is already provided with Pacemaker. However, if the OCF resource agent cannot be used for any reason, the following template for a Tomcat LSB init script should work fine. Just replace @service@ with the name of the Tomcat instance.
<source lang="bash">#!/bin/sh
- description: Start or stop the Tomcat server
-
- BEGIN INIT INFO
- Provides: @service@
- Required-Start: $network $syslog
- Required-Stop: $network
- Default-Start: 3
- Default-Stop: 0
- Description: Start or stop the Tomcat server
- END INIT INFO
NAME=@service@ export JRE_HOME=/opt/java export CATALINA_HOME=/opt/tomcat export CATALINA_BASE=/opt/$NAME export JAVA_HOME=/opt/java export JAVA_OPTS="-Dname=$NAME -XX:MaxPermSize=128m -Xms1024m -Xmx1536m"
check_running() {
NAME=$1 PID=`pgrep -f ".*\-Dname=$NAME " | wc -l ` [ $PID -gt 0 ] && echo "yes"
}
case "$1" in 'start')
sleep 1 RUNNING=`check_running $NAME` [ "$RUNNING" ] && echo "tomcat is already running" && exit 0 if [ -f $CATALINA_HOME/bin/startup.sh ]; then echo $"Starting Tomcat" $CATALINA_HOME/bin/startup.sh fi ;;
'stop')
sleep 1 RUNNING=`check_running $NAME` [ ! "$RUNNING" ] && echo "tomcat is already stopped" && exit 0 if [ -f $CATALINA_HOME/bin/shutdown.sh ]; then echo $"Stopping Tomcat" $CATALINA_HOME/bin/shutdown.sh fi ;;
'restart')
$0 stop $0 start ;;
'status')
RUNNING=`check_running $NAME` [ "$RUNNING" ] && echo "$NAME is running" && exit 0 || echo "$NAME is stopped" && exit 3;;
- )
echo echo $"Usage: $0 {start|stop}" echo exit 1;;
esac</source>
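Before handing the script to the cluster it is worth confirming it really behaves as an LSB script, as described above: status must return 0 while the instance is running and 3 while it is stopped. A minimal manual check, assuming the template has been installed as /etc/init.d/tomcat1 (the name the lsb:tomcat1 cluster resource below expects):
<source lang="bash">chmod 755 /etc/init.d/tomcat1
/etc/init.d/tomcat1 start  ; echo "start returned $?"    # expect 0
/etc/init.d/tomcat1 status ; echo "status returned $?"   # expect 0 while running
/etc/init.d/tomcat1 stop   ; echo "stop returned $?"     # expect 0
/etc/init.d/tomcat1 status ; echo "status returned $?"   # expect 3 once stopped</source>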
MySQL Configuration
MySQL Permissions
The permissions shown below are for a test setup. For production use, a more fine-grained level of privilege control should be used (a sketch of tighter grants follows the NODE2 commands below).
NODE1
mysql -uroot -ppassword -e"grant all privileges on *.* to 'slave_user'@'localhost' identified by 'password'"; mysql -uroot -ppassword -e"grant all privileges on *.* to 'slave_user'@'%' identified by 'password'"; mysql -uroot -ppassword -e"flush privileges; reset master;"
NODE2
mysql -uroot -ppassword -e"grant all privileges on *.* to 'slave_user'@'localhost' identified by 'password'"; mysql -uroot -ppassword -e"grant all privileges on *.* to 'slave_user'@'%' identified by 'password'"; mysql -uroot -ppassword -e"flush privileges; reset master;"
Circular Replication Setup
Before performing this step ensure you can start MySQL manually using the init script and you have configured the /etc/my.cnf file correctly on each node using the following parameters in particular:
<source lang="bash">server-id = 1 # Increment per node auto_increment_increment = 2 # Set to the number of nodes you have (or are likely to have) auto_increment_offset = 1 # Set to the same as the server-id replicate-same-server-id = 0 # To ensure the slave thread doesn't try to write updates that this node has produced. log-bin # Turn on binary logging (neccessary for replication) log-slave-updates # Neccessary for chain or circular replication relay-log # As above relay-log-index # As above</source>
Once MySQL is happily running using the standard init script, run a reset master and show master status on each node to ensure that the master log file and position are the standard mysql-bin.000001 and 106. Also ensure that both MySQL installs are exactly the same with regard to privileges, databases and tables. We are going to be replicating changes to all databases, so nothing can be different before replication starts.
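The exact commands to reset and check the coordinates on each node (the values shown are what a freshly reset 5.1 server reports, and are what the CHANGE MASTER statements below assume):
<source lang="bash">mysql -uroot -ppassword -e "RESET MASTER; SHOW MASTER STATUS\G"
# Expect:  File: mysql-bin.000001
#          Position: 106</source>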
NODE1
mysql -uroot -ppassword
<source lang="sql">stop slave; change master to master_host="NODE2", master_user="slave_user", master_password="password", master_log_file="mysql-bin.000001", master_log_pos=106; start slave;</source>
NODE2
mysql -uroot -ppassword
<source lang="sql">stop slave; change master to master_host="NODE1", master_user="slave_user", master_password="password", master_log_file="mysql-bin.000001", master_log_pos=106; start slave;</source>
Check that the slave is running correctly on each node by running
show slave status\G
and confirming that Slave_IO_State shows 'Waiting for master to send event' and that both the Slave_IO and Slave_SQL threads are running.
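A quick end-to-end smoke test of the circular replication (the repltest database name is arbitrary): a change made on either node should appear on the other within a second or so.
<source lang="bash"># On NODE1:
mysql -uroot -ppassword -e "CREATE DATABASE repltest"
# On NODE2 - the database should already have replicated across, and dropping
# it here should replicate the drop back to NODE1:
mysql -uroot -ppassword -e "SHOW DATABASES LIKE 'repltest'"
mysql -uroot -ppassword -e "DROP DATABASE repltest"</source>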
MySQL Test Table
Add a database called ldirectord. This will be used by both ldirectord and the MySQL OCF resource agent to check the status of MySQL.
mysqladmin -uroot -ppassword create ldirectord
Then create a table called connectioncheck:
mysql -uroot -ppassword
<source lang="sql">CREATE TABLE `connectioncheck` (
`i` int(1) NOT NULL, PRIMARY KEY (`i`)
) ENGINE=MyISAM DEFAULT CHARSET=latin1</source>
Finally, insert a single row containing an integer value into the table, e.g.
<source lang="sql">INSERT INTO `connectioncheck` ( `i` ) VALUES ( 1);</source>
CRM Config
OpenAIS
Edit /etc/ais/openais.conf to match your setup. The main lines you will need to change are:
<source lang="bash"> bindnetaddr: 192.168.1.0
mcastaddr: 226.94.1.1 mcastport: 5405</source>
Create a key for the AIS communication:
ais-keygen
Then scp the config to the other nodes in the cluster:
scp -r /etc/ais/ NODE2:/etc/
Then restart OpenAIS on both nodes:
rcopenais restart
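Before loading any resource configuration, both nodes should show up as cluster members:
<source lang="bash">crm_mon -1
# Expect both nodes listed, e.g.:
#   Online: [ NODE1 NODE2 ]</source>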
Live CIB
Here we provide the whole configuration for the cluster and explain only the aspects that are especially applicable to this setup. Pacemaker and the crm shell are already well documented on this site, so the basics are not covered again.
Save the following into a file such as crm_config.txt
<source lang="bash">node NODE1 node NODE2 primitive STONITH-1 stonith:external/riloe \
params hostlist="NODE1 NODE2" ilo_hostname="192.168.1.11" ilo_user="Administrator" ilo_password="password" ilo_can_reset="true" \ op monitor interval="1h" timeout="1m" \ meta target-role="Started"
primitive STONITH-2 stonith:external/riloe \
params hostlist="NODE1 NODE2" ilo_hostname="192.168.1.21" ilo_user="Administrator" ilo_password="password" ilo_can_reset="true" \ op monitor interval="1h" timeout="1m" \ meta target-role="Started"
primitive Virtual-IP-Tomcat1 ocf:heartbeat:IPaddr2 \
params lvs_support="true" ip="192.168.1.100" cidr_netmask="24" broadcast="192.168.1.255" \ op monitor interval="1m" timeout="10s" \ meta migration-threshold="10"
primitive Virtual-IP-Tomcat2 ocf:heartbeat:IPaddr2 \
params lvs_support="true" ip="192.168.1.102" cidr_netmask="24" broadcast="192.168.1.255" \ op monitor interval="1m" timeout="10s" \ meta migration-threshold="10"
primitive Virtual-IP-Tomcat3 ocf:heartbeat:IPaddr2 \
params lvs_support="true" ip="192.168.1.103" cidr_netmask="24" broadcast="192.168.1.255" \ op monitor interval="1m" timeout="10s" \ meta migration-threshold="10"
primitive Virtual-IP-Tomcat4 ocf:heartbeat:IPaddr2 \
params lvs_support="true" ip="192.168.1.104" cidr_netmask="24" broadcast="192.168.1.255" \ op monitor interval="1m" timeout="10s" \ meta migration-threshold="10"
primitive Virtual-IP-Tomcat5 ocf:heartbeat:IPaddr2 \
params lvs_support="true" ip="192.168.1.105" cidr_netmask="24" broadcast="192.168.1.255" \ op monitor interval="1m" timeout="10s" \ meta migration-threshold="10"
primitive ldirectord ocf:heartbeat:ldirectord \
params configfile="/etc/ha.d/ldirectord.cf" \ op monitor interval="2m" timeout="20s" \ meta migration-threshold="10" target-role="Started"
primitive tomcat1 lsb:tomcat1 \
op monitor interval="30s" timeout="10s" \ meta migration-threshold="10" target-role="Started"
primitive tomcat2 lsb:tomcat2 \
op monitor interval="30s" timeout="10s" \ meta migration-threshold="10" target-role="Started"
primitive tomcat3 lsb:tomcat3 \
op monitor interval="30s" timeout="10s" \ meta migration-threshold="10" target-role="Started"
primitive tomcat4 lsb:tomcat4 \
op monitor interval="30s" timeout="10s" \ meta migration-threshold="10" target-role="Started"
primitive tomcat5 lsb:tomcat5 \
op monitor interval="30s" timeout="10s" \ meta migration-threshold="10" target-role="Started"
primitive mysql ocf:heartbeat:mysql \
params binary="/usr/bin/mysqld_safe" config="/etc/my.cnf" datadir="/var/lib/mysql" user="mysql" pid="/var/lib/mysql/mysql.pid" socket="/var/lib/mysql/mysql.sock" test_passwd="password" test_table="ldirectord.connectioncheck" test_user="slave_user" \ op monitor interval="20s" timeout="10s" \ meta migration-threshold="10" target-role="Started"
group Load-Balancing Virtual-IP-Tomcat1 Virtual-IP-Tomcat2 Virtual-IP-Tomcat3 Virtual-IP-Tomcat4 Virtual-IP-Tomcat5 ldirectord clone cl-tomcat1 tomcat1 clone cl-tomcat2 tomcat2 clone cl-tomcat3 tomcat3 clone cl-tomcat4 tomcat4 clone cl-tomcat5 tomcat5 clone cl-mysql mysql location l-st-1 STONITH-1 -inf: NODE1 location l-st-2 STONITH-2 -inf: NODE2 location Prefer-Node1 ldirectord \
rule $id="prefer-node1-rule" 100: #uname eq NODE1
property no-quorum-policy="ignore" \
start-failure-is-fatal="false" \ stonith-action="reboot"</source>
Some notes about the above configuration:
- We put lvs_support="true" into each Virtual-IP entry so the IPaddr2 resource agent can remove the IP from the loopback device before adding it to eth0. The node that isn't running ldirectord will still have the IP on the loopback device until it takes the resource over, at which point the IP is removed from lo and added to eth0 (a quick way to verify this is shown at the end of Check Cluster below).
- The two STONITH devices are named after the node they are responsible for fencing. Further down they are given -infinity location scores so they can never run on that node.
- We set no-quorum-policy to ignore to ensure the cluster will continue to operate if 1 node is down. This is essential for 2-node clusters.
- start-failure-is-fatal is set to false to allow migration-threshold to work on each resource.
Import the crm config into the cluster on any active node:
crm configure < crm_config.txt
Check Cluster
Finally check the cluster to ensure it works:
crm_mon -1
<source lang="bash">============ Last updated: Mon Sep 28 11:01:25 2009 Current DC: NODE2 - partition with quorum Version: 1.0.3-0080ec086ae9c20ad5c4c3562000c0ad68374f0a 2 Nodes configured, 2 expected votes 9 Resources configured.
==
Online: [ NODE1 NODE2 ]
STONITH-1 (stonith:external/riloe): Started NODE2 STONITH-2 (stonith:external/riloe): Started NODE1 Resource Group: Load-Balancing
Virtual-IP-Tomcat1 (ocf::heartbeat:IPaddr2): Started NODE1 Virtual-IP-Tomcat2 (ocf::heartbeat:IPaddr2): Started NODE1 Virtual-IP-Tomcat3 (ocf::heartbeat:IPaddr2): Started NODE1 Virtual-IP-Tomcat4 (ocf::heartbeat:IPaddr2): Started NODE1 Virtual-IP-Tomcat5 (ocf::heartbeat:IPaddr2): Started NODE1 ldirectord (ocf::heartbeat:ldirectord): Started NODE1
Clone Set: cl-tomcat1
Started: [ NODE1 NODE2 ]
Clone Set: cl-tomcat2
Started: [ NODE2 NODE1 ]
Clone Set: cl-tomcat3
Started: [ NODE2 NODE1 ]
Clone Set: cl-tomcat4
Started: [ NODE2 NODE1 ]
Clone Set: cl-tomcat5
Started: [ NODE2 NODE1 ]
Clone Set: cl-mysql
Started: [ NODE2 NODE1 ]</source>
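To confirm the lvs_support/IPaddr2 behaviour described in the notes above, compare where the virtual IPs live on each node: on the node running the Load-Balancing group they are secondary addresses on eth0, while on the other node they remain on lo so it can accept LVS direct-routing ("gate") traffic without answering ARP for them.
<source lang="bash"># On the active node (NODE1 in the output above) the VIPs appear on eth0:
ip addr show eth0
# On the passive node they are still present on the loopback device:
ip addr show lo</source>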