The day of tutorials started out with All Bases Covered: A Hands-on Introduction to High-availability MySQL and DRBD by Florian Haas and Philipp Reisner.
After a brief introduction to DRBD, they started discussing the configuration file. There were a couple settings that I had set incorrectly on my servers.
Since my two servers are connected via a gigabit crossover cable, I had my synchronization rate set to 125MB/s. They recommended setting it to roughly one third of your available network and disk I/O bandwidth so that your applications don't stall during synchronization. Their test system used 30MB/s, so I'll give that a try too.
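In the configuration file, that rate goes in the syncer section. A minimal sketch using DRBD 8.x syntax (the resource name r0 is just an example):

```
# /etc/drbd.conf -- "r0" is a placeholder resource name
resource r0 {
  syncer {
    rate 30M;  # ~1/3 of a gigabit link's ~125MB/s ceiling
  }
}
```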
Another setting they had different was the activity log extents. All of the references I had looked at said to set al-extents to 257. There's actually an equation for this value: E = (R × t) / 4, where E is the number of al-extents, R is the synchronization rate (in MB/s), t is the target synchronization time (in seconds), and the 4 comes from the 4MB covered by each extent. With a sync rate of 30MB/s and a target sync time of 240 seconds, E works out to 1800, which rounded up to the nearest prime is 1801.
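As a quick sanity check, here's a small Python sketch of that calculation (the round-to-prime step searches upward, which matches the 1800 → 1801 example):

```python
def is_prime(n):
    """Trial-division primality check; fine for numbers this small."""
    if n < 2:
        return False
    return all(n % p for p in range(2, int(n ** 0.5) + 1))

def al_extents(rate_mb_s, target_sync_s, extent_mb=4):
    """E = (R * t) / 4, then bump up to the next prime."""
    e = (rate_mb_s * target_sync_s) // extent_mb
    while not is_prime(e):
        e += 1
    return e

print(al_extents(30, 240))  # 1800 rounds up to the prime 1801
```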
Heartbeat is the cluster manager that detects when a node is unavailable. You should have at least two heartbeat connections between the nodes. If eth0 is your public network and eth1 is your private network, you will want to configure Heartbeat to send heartbeats across the public network using multicast and across the private network using broadcast:
```
# /etc/ha.d/ha.cf
bcast eth1
mcast eth0 188.8.131.52 694 1 0
```
The version of Heartbeat that they demonstrated was Heartbeat v2. I use the older v1, which isn't as powerful but is much simpler to configure. It was also the first time I had seen the Heartbeat GUI. The GUI makes it easy to manage Heartbeat resources and offers a level of monitoring, though you can tell it was written by a developer: the usability could be improved greatly.
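For comparison, a v1 setup is just a one-line entry in haresources. A sketch, assuming a DRBD resource named r0 holding the MySQL datadir (the node name, device, mount point, and service script name are all placeholders):

```
# /etc/ha.d/haresources
# <preferred node> <resources, started left to right>
node1 drbddisk::r0 Filesystem::/dev/drbd0::/var/lib/mysql::ext3 mysql
```

On failover, the surviving node promotes the DRBD device, mounts the filesystem, and starts MySQL, in that order.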
I specifically asked whether DRBD has any issues with partitions larger than 2TB, and Florian basically said that if you can create the partition (meaning the driver supports it), then DRBD supports it. He mentioned that SCSI devices traditionally use 32-bit sector addressing, which with 512-byte sectors limits you to 2TB. This was news to me; my SATA RAID card is technically seen by Linux as a SCSI device. I'm not sure this is 100% accurate, but either way there is an easy workaround. If you have 4TB of space, you can split it into two 2TB partitions, then join them with either software RAID 0 (striping) or LVM (linear or striped mapping).
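The LVM route might look something like this (the device names are hypothetical; a linear map is the default, and striping needs -i):

```
# Two hypothetical 2TB partitions joined into one 4TB logical volume
pvcreate /dev/sdb1 /dev/sdc1
vgcreate vg_data /dev/sdb1 /dev/sdc1
lvcreate -l 100%FREE -n big vg_data               # linear (default)
# lvcreate -i 2 -I 64 -l 100%FREE -n big vg_data  # or striped across both PVs
```

The resulting /dev/vg_data/big can then be used as the backing device for DRBD.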
I can’t wait to build my next HA cluster, but this time using Heartbeat v2.