My brain is adjusting to being back at work after a computer-less vacation, so I will take this time to do a blog post on the database setup on UBC Blogs.
Until a few weeks ago we had one server and things were ok’ish, but we planned on adding BuddyPress for September along with a shiny CWL login button, which meant a lot more users might jump on board… so we added a couple more servers in a database Master/Slave setup, with a few more to come (another web server and a cache).
But… before the Master/Slave we did database “partitioning”. What is this, besides something that made me more than a little nervous for a fair amount of time? Well, because WPMU creates around 8 database tables per blog, over time this grows to a very large number of tables and things start to slow down. Backups in particular got very slow (10-30 minutes each, because the dump had to lock a huge number of tables, 21k+), and performance also reportedly begins to lag once you move past the 1000+ blog range. So we needed to move to a “Multi-database” setup to be exact, which basically involves blowing apart your database and enabling a Multi-DB plugin.

There are a few Multi-DB plugins: HyperDB, used by WordPress.com, which is quite complicated and provides little to no help documentation or support; SharDB, developed by Ron Rennick, which is based on HyperDB; and Multi-DB, which is developed by WPMU Premium Dev and used by Edublogs. I decided to go with Multi-DB *for now*. My first choice would have been SharDB, both because it is based on HyperDB and because it is a community plugin vs the “premium” one, but there were issues once the database moved to its own server (connection/timeout errors). It worked beautifully on the same box, but on its own server it was a no go. Ron mentioned this will be fixed in the future, at which time I think we would switch.
It should be noted that this step, although nerve-racking, is relatively straightforward thanks to the great setup scripts that accompany both SharDB and Multi-DB; with those and some great how-to posts by Jim Groom I was able to figure this out and test it during a Sakai presentation… implementation on the live site took a fair amount of time, approx 30 minutes (breaking 1 db into 256 dbs is slow). Once this was completed I have to say it felt faster (no empirical evidence), but things that were not so snappy before, like listing all users as an admin, returned results much faster. I implemented the Master/Slave setup the following AM, which is probably the coolest part because we now get zero downtime for backups and essentially a spare server… (if the master died we could flip to the slave quite easily). Here is how that was done. *I am not a DBA, like I said in the past, so there may be better ways, but this has worked well so far.*
On the Master:
the my.cnf contains these settings related to master/slave (the server-id and log name below are typical values, adjust them to your setup):
# for master/slave
server-id = 1
# Enable binary logging
log-bin = mysql-bin
# Expire the logs every three days
expire_logs_days = 3
Create a slave user on the master:
mysql> GRANT REPLICATION SLAVE ON *.*
    -> TO slave_user@'SLAVE_IP_ADDRESS'
    -> IDENTIFIED BY 'slave_pw';
Dump the master db and import it into the slave; a sketch of the commands follows.
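Something like this works (credentials and file names are placeholders; the read lock keeps the binary log position stable while you dump, and the File and Position from SHOW MASTER STATUS feed the CHANGE MASTER TO step further down):

mysql> FLUSH TABLES WITH READ LOCK;
mysql> SHOW MASTER STATUS; # note File and Position for later
$ mysqldump -u root -p --all-databases > master_dump.sql
mysql> UNLOCK TABLES;
$ mysql -u root -p < master_dump.sql # run this one on the slave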
On the Slave include this in your my.cnf (the server-id just needs to differ from the master’s):
# for master/slave
# changes made to do the slave
server-id = 2
Then point the slave at the master. The sketch below assumes the slave user created earlier; MASTER_LOG_FILE and MASTER_LOG_POS are placeholder values, use the File and Position you noted from SHOW MASTER STATUS:
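mysql> CHANGE MASTER TO
    -> MASTER_HOST='MASTER_IP_ADDRESS',
    -> MASTER_USER='slave_user',
    -> MASTER_PASSWORD='slave_pw',
    -> MASTER_LOG_FILE='mysql-bin.000001',
    -> MASTER_LOG_POS=98;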
Start the slave:
mysql> start slave;
Check the status of the slave:
mysql> show slave status\G
If things worked out you should see both replication threads running; abridged example output below (your values will differ):
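*************************** 1. row ***************************
          Slave_IO_State: Waiting for master to send event
             Master_Host: MASTER_IP_ADDRESS
        Slave_IO_Running: Yes
       Slave_SQL_Running: Yes
   Seconds_Behind_Master: 0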
Try running SHOW MASTER STATUS on the master and SHOW SLAVE STATUS on the slave (quickly, master first) and you will see the positions are pretty much in sync. Example master output (the file name and position here are examples; yours will differ):
mysql> show master status\G
*************************** 1. row ***************************
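            File: mysql-bin.000001
        Position: 98
    Binlog_Do_DB:
Binlog_Ignore_DB: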
1 row in set (0.00 sec)
The Exec_Master_Log_Pos in the slave status should be close to the Position on the master.
Now for backups: just have your backup script run STOP SLAVE, do the dump, then START SLAVE again, which gives zero downtime for backups on production! Bloggers can now post at 3:00 AM and not make passive-aggressive tweets about blog downtime.
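The backup script boils down to something like this (a minimal sketch; the user, password, and backup path are placeholders):

mysql -u root -p'PASSWORD' -e "STOP SLAVE;"
mysqldump -u root -p'PASSWORD' --all-databases > /backups/blogs-$(date +%F).sql
mysql -u root -p'PASSWORD' -e "START SLAVE;"

Since the dump runs against the slave, the master keeps serving the blogs the whole time.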