Zero-Downtime MySQL Migration to Aurora Using AWS DMS and Binlog Replication

Migrating MySQL to Amazon Aurora RDS: Backup, Restore, and Version Compatibility Guide

Amazon Aurora offers high availability, scalability, and managed performance enhancements over traditional MySQL. Migrating your self-managed MySQL databases to Aurora can drastically simplify operations—but how you migrate depends on your MySQL version, data size, and constraints like foreign keys.

In this guide, I’ll walk you through two proven backup-and-restore approaches:

  1. Percona XtraBackup for physical backup and streaming to Aurora
  2. mysqldump to Amazon S3 and restore to Aurora RDS

We’ll also clarify when to use a native restore and when to use AWS Database Migration Service (DMS).

🛠️ Method 1: Physical Backup using Percona XtraBackup (for large, same-version migrations)

Step 1: Take a Physical Backup

From the EC2 instance or the on-prem MySQL server, run:

time xtrabackup --backup --slave-info --safe-slave-backup --stream=xbstream \
  --parallel=6 --target-dir=/home/rack/240311-03603 2> /home/rack/240311-03603/xtrabackup_log \
  | pv -s $(du -sb /san/mysql-fs/mysql/ | cut -f1) \
  | zstd -3 -T8 | ssh user@destination_ip "zstd -d - | xbstream --parallel=3 -x -C /mysql_incoming"
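A few notes on this pipeline: pv sizes its progress bar against the datadir size reported by du, zstd -3 -T8 compresses the stream across eight threads to cut transfer time, and the remote side decompresses and unpacks with xbstream in a single pass, so no intermediate file is written on either host.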

Step 2: Prepare the Backup

xtrabackup --prepare --apply-log-only --target-dir=/mysql_incoming

Step 3: Upload to Amazon S3

aws s3 cp /mysql_incoming s3://<bucket-name>/mysql_prepared/ --recursive

Step 4: Restore into Aurora

Use the Aurora feature to restore from S3, following:
📖 AWS Docs – Restore from S3 to Aurora MySQL
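If you prefer the CLI over the console, the same restore can be driven with aws rds restore-db-cluster-from-s3. A minimal sketch; the cluster identifier, credentials, IAM role name, and the 5.7 source version are placeholder assumptions:

aws rds restore-db-cluster-from-s3 \
  --db-cluster-identifier aurora-migration-target \
  --engine aurora-mysql \
  --master-username admin --master-user-password '<password>' \
  --source-engine mysql --source-engine-version 5.7.35 \
  --s3-bucket-name <bucket-name> --s3-prefix mysql_prepared \
  --s3-ingestion-role-arn arn:aws:iam::<account-id>:role/<aurora-s3-restore-role>

Note that this creates an empty cluster shell; add a DB instance to the cluster before trying to connect.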

💡 When to Use Percona XtraBackup?

  • You are migrating from a MySQL version that’s identical or Aurora-compatible (e.g., MySQL 5.7.35 → Aurora MySQL 5.7)
  • You want a faster and consistent snapshot with binary log position (great for replication setup)
  • You want to replicate to Aurora after restoration for a phased cutover

🛠️ Method 2: mysqldump and S3 Upload (for lighter or version-sensitive migrations)

Step 1: Export Schema

mysqldump -h <source-endpoint> -u root -p --no-data --set-gtid-purged=OFF mydb > schema.sql

Step 2: Upload to S3

aws s3 cp schema.sql s3://<bucket-name>/schema/

Step 3: Restore into Aurora

From an EC2 instance (or any host that can reach the Aurora endpoint), load the schema:

mysql -h <aurora-endpoint> -u root -p mydb < schema.sql
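Note that the dump above is schema-only (--no-data). The table data can be dumped and loaded the same way; a minimal sketch, with the gzip compression step as my own addition:

mysqldump -h <source-endpoint> -u root -p --no-create-info --set-gtid-purged=OFF mydb | gzip > data.sql.gz
aws s3 cp data.sql.gz s3://<bucket-name>/data/
gunzip -c data.sql.gz | mysql -h <aurora-endpoint> -u root -p mydb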

⚠️ If MySQL Versions Don’t Match: Use Staged Migration

Let’s say your source is MySQL 5.7.36 and Aurora only supports up to 5.7.35:

  1. Restore the backup to an intermediate RDS MySQL instance of the same version (e.g., RDS MySQL 5.7.36)
  2. Use AWS DMS to migrate to Aurora (which supports MySQL 5.7.35 or compatible)

❗ DMS migrates data logically, row by row (creating basic table structures on the target as needed), which makes it ideal when a physical restore isn’t possible due to a version mismatch, or when foreign key constraints complicate a direct restore.
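As a preview of the DMS setup I’ll cover in the next post, the task creation step looks roughly like this; the endpoint and replication instance ARNs are placeholders, and full-load-and-cdc keeps the target in sync after the initial copy:

aws dms create-replication-task \
  --replication-task-identifier mysql-to-aurora \
  --source-endpoint-arn <source-endpoint-arn> \
  --target-endpoint-arn <target-endpoint-arn> \
  --replication-instance-arn <replication-instance-arn> \
  --migration-type full-load-and-cdc \
  --table-mappings '{"rules":[{"rule-type":"selection","rule-id":"1","rule-name":"1","object-locator":{"schema-name":"mydb","table-name":"%"},"rule-action":"include"}]}'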

🔐 Handling DEFINERS in Procedures, Triggers, Functions, and Views

When migrating MySQL databases to Amazon Aurora using tools like Percona XtraBackup or mysqldump, stored logic objects often retain a DEFINER clause that references the original MySQL user.

If the same user does not exist in Aurora or lacks necessary privileges, it can cause:

  • ⚠️ Import or replication failures
  • ❌ Application runtime errors
  • 🚩 Replication halts due to permission issues

✅ Step 1: Identify DEFINERs on the Source MySQL

SELECT ROUTINE_NAME, ROUTINE_TYPE, DEFINER 
FROM INFORMATION_SCHEMA.ROUTINES 
WHERE ROUTINE_SCHEMA = 'your_db';

SELECT TRIGGER_NAME, EVENT_OBJECT_TABLE, DEFINER 
FROM INFORMATION_SCHEMA.TRIGGERS 
WHERE TRIGGER_SCHEMA = 'your_db';

SELECT TABLE_NAME, DEFINER 
FROM INFORMATION_SCHEMA.VIEWS 
WHERE TABLE_SCHEMA = 'your_db';

👥 Step 2: Recreate DEFINER Users in Aurora with Required Privileges

CREATE USER 'admin'@'%' IDENTIFIED BY 'StrongPassword123!';
GRANT ALL PRIVILEGES ON your_db.* TO 'admin'@'%';
FLUSH PRIVILEGES;

If the user doesn’t require login capabilities:

ALTER USER 'admin'@'%' ACCOUNT LOCK;

✂️ Optional: Remove DEFINERs from Dumps

mysqldump itself has no option to strip DEFINER clauses (that flag belongs to mysqlpump, as --skip-definer), so the usual approach is to post-process the dump file:

mysqldump --routines --triggers --no-data -u root -p your_db > schema.sql

sed -i.bak 's/DEFINER=[^*]*\*/\*/g' schema.sql
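To confirm the clean-up worked, scanning the dump for leftover definers should return nothing; the pattern below assumes the usual backtick-quoted user@host form:

grep -o 'DEFINER=`[^`]*`@`[^`]*`' schema.sql | sort -u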

🔍 Step 3: Post-Restore Validation in Aurora

SHOW PROCEDURE STATUS WHERE Db = 'your_db';
SHOW TRIGGERS FROM your_db;
SELECT TABLE_NAME, DEFINER FROM INFORMATION_SCHEMA.VIEWS WHERE TABLE_SCHEMA='your_db';
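Beyond spot-checking, running the same aggregate on both source and Aurora is a quick completeness check; a minimal sketch:

SELECT ROUTINE_TYPE, COUNT(*) AS total
FROM INFORMATION_SCHEMA.ROUTINES
WHERE ROUTINE_SCHEMA = 'your_db'
GROUP BY ROUTINE_TYPE;

Run it on both sides; the per-type counts should match.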

🔁 Setting Up Replication from MySQL Source to Amazon Aurora

To maintain real-time sync between the source and the Aurora target until production cutover, follow these steps:

🛠️ Step 1: Enable Binary Logging on Source MySQL

In my.cnf on the source server:

[mysqld]
server-id = 1
log-bin = mysql-bin
binlog_format = ROW
binlog_row_image = FULL
expire_logs_days = 7
sync_binlog = 1

Restart MySQL:

sudo systemctl restart mysqld
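Before taking the backup, it’s worth confirming the new settings actually took effect, for example:

mysql -u root -p -e "SHOW VARIABLES WHERE Variable_name IN ('log_bin','server_id','binlog_format','binlog_row_image');"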

👤 Step 2: Create Replication User on Source

CREATE USER 'repluser'@'%' IDENTIFIED BY 'replpassword';
GRANT REPLICATION SLAVE ON *.* TO 'repluser'@'%';
FLUSH PRIVILEGES;

📌 Step 3: Lock Tables and Get Replication Position

FLUSH TABLES WITH READ LOCK;
SHOW MASTER STATUS;

Note the File and Position values (e.g., mysql-bin.000005 at position 1540) and keep this session open until the backup completes.

Then:

UNLOCK TABLES;
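If the backup came from XtraBackup as in Method 1, the binlog coordinates were also captured in the backup itself, which makes a handy cross-check (the path assumes the /mysql_incoming directory from earlier):

cat /mysql_incoming/xtrabackup_binlog_info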

☁️ Step 4: Restore to Aurora (Already Done)

If you restored using Method 1 or Method 2 above, this step is already complete; just confirm the restore corresponds to the binlog file and position recorded in Step 3.

🔗 Step 5: Configure External Replication in Aurora

CALL mysql.rds_set_external_master (
  'source-mysql.example.com',  -- host
  3306,                        -- port
  'repluser',                  -- replication user
  'replpassword',              -- replication password
  'mysql-bin.000005',          -- binlog file from SHOW MASTER STATUS
  1540,                        -- binlog position
  0                            -- ssl_encryption (0 = off)
);

▶️ Step 6: Start Replication

CALL mysql.rds_start_replication;

🔎 Step 7: Monitor Replication Status

SHOW SLAVE STATUS\G

Ensure:

  • Slave_IO_Running: Yes
  • Slave_SQL_Running: Yes
  • Seconds_Behind_Master: 0 (or minimal)

📊 Step 8: Monitor Replication Lag with CloudWatch

Enable and alert on:

  • AuroraBinlogReplicaLag (lag of the external binlog replication into Aurora)
  • AuroraReplicaLag (lag between the Aurora writer and its own replicas)

Use CloudWatch alarms if lag exceeds 30s.
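A minimal alarm sketch via the CLI; the alarm name, cluster identifier, and SNS topic are placeholders, and the 30-second threshold assumes AuroraBinlogReplicaLag is reported in seconds:

aws cloudwatch put-metric-alarm \
  --alarm-name aurora-binlog-replica-lag-high \
  --namespace AWS/RDS \
  --metric-name AuroraBinlogReplicaLag \
  --dimensions Name=DBClusterIdentifier,Value=your-cluster \
  --statistic Average --period 60 --evaluation-periods 3 \
  --threshold 30 --comparison-operator GreaterThanThreshold \
  --alarm-actions <sns-topic-arn>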

🚦 Step 9: Keep Replication Running Until Cutover

  • Allow app writes to source until ready
  • Stop writes during switchover
  • Wait for Seconds_Behind_Master = 0
  • Switch DNS (Route 53) or promote Aurora (see the cutover sketch below):
aws rds promote-read-replica-db-cluster --db-cluster-identifier your-cluster
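Before the promotion or DNS switch, replication should be stopped cleanly. A minimal cutover sketch using the Aurora-provided procedures, run against the Aurora endpoint once writes to the source have stopped and lag is zero:

CALL mysql.rds_stop_replication;
-- After cutover, optionally clear the external master configuration:
CALL mysql.rds_reset_external_master;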

🔐 Best Practices Before Replication or Cutover

  • Set binlog_format=ROW and binlog_row_image=FULL
  • Set read_only=ON on the Aurora replica
  • Adjust innodb_buffer_pool_size based on source sizing
  • Ensure all users and grants are recreated in Aurora
  • Pause replication before major changes; resume after sync
  • Use CloudWatch to monitor AuroraReplicaLag
  • Test workloads in a non-prod clone of Aurora before switching traffic

🛍️ Next Steps

In my next post, I’ll walk you through:

  • Creating DMS source and target endpoints
  • Setting up AWS DMS replication tasks
  • Handling views, user permissions, and Route 53 cutover

Stay tuned!
