MySQL Master to Master Replication on Rackspace Cloud

Richard Benson · 09 August 2010 · Web, IT Pros
To go with our Load Balanced Web Cluster, which provides high availability for your web services, you will probably also want high availability for your database. In most modern web apps there's not much use in having your web servers constantly available if your database is down.

MySQL offers a number of solutions to this, and every situation calls for a different approach; there are plenty of good articles out there to help you decide which is best for you. In this article we will cover setting up MySQL master-master replication on Debian 5.0 (Lenny) using Rackspace Cloud Servers.

The standard model of MySQL replication is a single master with multiple slaves, which gives you very good read availability, but writes can only be made to the master node.  This means that if the master fails, you can't just switch to another node and carry on as before, because your slaves will drift out of sync.  You also can't load balance reads and writes across your nodes.  With a master-master configuration, you can lose a node at any point and either switch your connection strings to the remaining master, or use load balancing and failover with HAProxy.

Create and update servers

Start by provisioning two 256MB Debian 5.0 (Lenny) servers in your Rackspace Cloud account. We will call them node-1 and node-2 from here on and assume the internal IP addresses below; replace them wherever they appear in the examples with the internal IPs assigned by Rackspace:
10.0.0.1 - node-1
10.0.0.2 - node-2
Once they have been provisioned and booted up, run the standard procedure of updates and install MySQL.

Run the following commands on both servers:
apt-get update
apt-get upgrade -y
apt-get install mysql-server -y
<enter root password>
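
Once the install finishes, you can optionally confirm MySQL came up cleanly on each node before going any further, for example with the Debian init script's status check:
/etc/init.d/mysql status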

Set up Replication users

On both servers you need to add a replication slave user with a strong password; we'll use the same username and password on both.  Log into your MySQL service with the following command:
mysql -u root -p

Then, on both servers, enter the following commands at the mysql prompt. The ";" at the end of each line is important: it tells the MySQL client that you have finished the statement and it can execute it.
GRANT REPLICATION SLAVE ON *.* TO 'your-slave-username'@'%' IDENTIFIED BY 'yourslavepassword';
FLUSH PRIVILEGES;
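If you want to confirm the user was created, you can optionally run this check before leaving the shell (the username is the placeholder from the GRANT above):
SELECT User, Host FROM mysql.user WHERE User = 'your-slave-username';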
Exit the mysql command shell with:
quit
Now we can stop the MySQL service on both servers to allow us to edit the config file with the following command:
/etc/init.d/mysql stop

Edit the MySQL config file

The config file is similar on both servers but not identical; I will highlight where the settings differ between them.

Open the MySQL configuration file:
nano /etc/mysql/my.cnf
First, you need to amend the "bind-address" setting. By default Debian binds MySQL to 127.0.0.1, which would prevent the other node from connecting, so set it to the server's own internal IP address (or comment the line out to listen on all interfaces).
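For example, on node-1 you would bind to its internal address (use node-2's IP in the same place on the other server):
bind-address            = 10.0.0.1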

For the sake of tidiness, look for the comment section that contains the following:
# The following can be used as easy to replay backup logs or for replication.

After the commented-out part, add (or amend) the following parameters. You can do this identically on both servers for now; you will change the appropriate parts on the second server shortly.
server-id               = 1
replicate-same-server-id=0
auto-increment-increment=2
auto-increment-offset=1

master-host = 10.0.0.2
master-user = your-slave-username
master-password = yourslavepassword
master-connect-retry = 60

log_bin                 = /var/log/mysql/mysql-bin.log
expire_logs_days        = 10
max_binlog_size         = 100M
relay-log = /var/log/mysql/slave-relay.log
relay-log-index = /var/log/mysql/slave-relay.index
report-host=node-1
On node-2, set "server-id" to 2; this can be any number you like, as long as it's unique across your whole replication structure.  The "auto-increment-increment" and "auto-increment-offset" parameters help prevent collisions in auto-number fields: each server generates auto-numbers starting at "auto-increment-offset" and spaced apart by "auto-increment-increment".  So in your two-server structure, set "auto-increment-increment" to the same value on both servers, equal to the number of servers in your replication structure, and set "auto-increment-offset" to 2 on your second server.  This means that node-1 will generate auto-numbers 1, 3, 5, 7, 9, etc. and node-2 will generate 2, 4, 6, 8, etc.

The last parameters to change are "report-host", which should be the name of the server you are editing, and "master-host", which should be the internal IP address of the other server.
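For reference, after those changes the same block on node-2 should look like this (same placeholder username and password as above):
server-id               = 2
replicate-same-server-id=0
auto-increment-increment=2
auto-increment-offset=2

master-host = 10.0.0.1
master-user = your-slave-username
master-password = yourslavepassword
master-connect-retry = 60

log_bin                 = /var/log/mysql/mysql-bin.log
expire_logs_days        = 10
max_binlog_size         = 100M
relay-log = /var/log/mysql/slave-relay.log
relay-log-index = /var/log/mysql/slave-relay.index
report-host=node-2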

Start the services and start the slave

Your configuration is complete, so you can now start the MySQL service on both servers and then start the slave thread within MySQL:
/etc/init.d/mysql start
This starts the MySQL service and begins logging all changes to the binary log, but neither server is reading from the other's logs yet, so no replication takes place. On both servers, log into the mysql console:
mysql -u root -p
Now in the console you can start the slave thread:
START SLAVE;
And then check its status:
SHOW SLAVE STATUS\G
Check in here that "Slave_IO_Running" and "Slave_SQL_Running" both say "Yes".  Finally exit the MySQL shell with "quit" and your replication structure is complete.
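
If you want to prove the setup works end to end, a quick test is to create a throwaway database on one node and check that it appears on the other (repltest is just a hypothetical name):
On node-1: mysql -u root -p -e "CREATE DATABASE repltest;"
On node-2: mysql -u root -p -e "SHOW DATABASES;"
On node-1, to clean up: mysql -u root -p -e "DROP DATABASE repltest;"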

Conclusion

Now that you have a Master to Master replication structure running, you can read or write from either database and they will both be constantly up to date with each other.  You can use this for manual load balancing and failover by alternating connection strings in each web application, and amending all connection strings in the event of a failure.

A handy trick to save you changing all the connection strings in the event of a failure is to add an entry to the hosts file on your application servers that points to one of your active MySQL servers. Then, if that server fails, you only have to edit the hosts file and all your applications will pick up the change immediately.
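For example, an /etc/hosts entry on each application server might look like this, where db-master is a hypothetical hostname used in your connection strings; to fail over, just change the IP to 10.0.0.2:
10.0.0.1    db-master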

In the next article, we will show you how to add an HAProxy load balancer to this configuration and how to configure it with heartbeat for automatic failover.