
RDS Migration from 5.5 to 5.6 with mysqldump

Amazon recently announced support for MySQL 5.6 on RDS; unfortunately, a direct upgrade from lower versions is not yet supported. On a recent migration, running mysqldump flat out would have meant 6+ hours of downtime. How did we cut it down to 1h45m? Simple: run the dump per table and pipe it directly to the new 5.6 instance in parallel, using Percona Server's mysqldump utility to take advantage of --innodb-optimize-keys.
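
To illustrate the pattern before the full scripts, here is what it boils down to for a single table (the table name below is just a placeholder). With --innodb-optimize-keys, Percona's mysqldump creates secondary and foreign keys after the data is loaded, so InnoDB can use fast index creation instead of maintaining the indexes row by row:

./ps/bin/mysqldump --order-by-primary --innodb-optimize-keys \
  -hsource-55.us-east-1.rds.amazonaws.com db1 some_table \
  | mysql -hdest-56.us-east-1.rds.amazonaws.com db1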

Here’s the base script we used – of course, YMMV, and make sure to optimize the destination instance as well (there’s a quick sketch of what we mean after the scripts)!

#!/bin/bash
# export-run.sh
# This is the wrapper script which builds up the list of tables to split into $parallel parts and calls export-tables.sh

parallel=6
dblist="db1 db2 db3"
smysql="mysql -hsource-55.us-east-1.rds.amazonaws.com"
dmysql="mysql -hdest-56.us-east-1.rds.amazonaws.com"

dbin=$(echo $dblist|sed "s/ /','/g")

_echo() {
   echo "$(date +%Y-%m-%d_%H_%M_%S) INFO $1"
}

# List tables, split into $parallel parts that we will dump and import later
rm -rf table*
$smysql information_schema -BNe "SELECT CONCAT(TABLE_SCHEMA,'.',TABLE_NAME) FROM TABLES WHERE TABLE_SCHEMA IN ('${dbin}')" > tables.out
tcount=$(cat tables.out|wc -l)
maxlne=$((($tcount/$parallel)+1))
split -l$maxlne -a1 -d tables.out tablen
_echo "Will be dumping $tcount tables, $maxlne per thread"

_echo "Cleaning up databases on destination"
# This only prints the DROP/CREATE statements – remove the outer echo if you actually want to drop and recreate the databases on the destination first
for d in $dblist; do echo "$dmysql -e 'DROP DATABASE IF EXISTS $d; CREATE DATABASE $d'"; done

_echo "Starting parallel dump and imports"
> export.log
for t in tablen*; do ( bash export-tables.sh $t | tee -a export.log ) & done
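
For reference, each tablen* chunk that split produces is just a plain list of schema.table names, one per line – something like this (table names are only illustrative):

$ cat tablen0
db1.customers
db1.orders
db2.sessions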

After the table list is split into $parallel chunk files, the wrapper forks one process per chunk, each running the script below to do the actual work.

#!/bin/bash
# export-tables.sh

# Host option only – passed straight to mysqldump below
smysql="-hsource-55.us-east-1.rds.amazonaws.com"
dmysql="mysql -hdest-56.us-east-1.rds.amazonaws.com"
tables=$1

_echo() {
   echo "$(date +%Y-%m-%d_%H_%M_%S) INFO $1"
}

for l in $(cat $tables); do
  d=$(echo $l|cut -d'.' -f1)
  t=$(echo $l|cut -d'.' -f2)
  _echo "Processing $d.$t"
  # Percona Server's mysqldump, needed for --innodb-optimize-keys
  cmd="./ps/bin/mysqldump --order-by-primary --innodb-optimize-keys $smysql $d $t"
  $cmd | $dmysql $d
  _echo "... completed $d.$t"
done
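
About that note on optimizing the destination instance: these are generic MySQL bulk-load knobs rather than anything RDS-specific, and the exact values depend on your instance class and workload, so treat the list as a starting point and change them through the 5.6 parameter group (SET GLOBAL is generally not an option on RDS). A quick way to see what the destination is currently running with:

# Variables worth reviewing before the import (illustrative list):
#   innodb_flush_log_at_trx_commit - relaxing this to 2 speeds up the load
#   max_allowed_packet             - must fit mysqldump's extended INSERTs
#   innodb_buffer_pool_size        - as large as the instance class allows
mysql -hdest-56.us-east-1.rds.amazonaws.com -BNe \
  "SHOW GLOBAL VARIABLES WHERE Variable_name IN
   ('innodb_flush_log_at_trx_commit','max_allowed_packet','innodb_buffer_pool_size')"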

And as a bonus, here is how you can monitor progress:

while true; do echo "$(date +%Y-%m-%d_%H_%M_%S) $(cat export.log |grep completed|wc -l)/$(cat tables.out|wc -l)"; sleep 5; done

This is exactly how we want mysqldump to be, Morgan! 🙂