# Migration Review
Review data and schema migrations for rollback safety, backward compatibility, and zero-downtime deployment.
## The Problem
Migrations are the highest-risk changes in any system. A schema migration that adds a NOT NULL column without a default locks the table and fails on existing rows. Renaming a column breaks every query that references the old name — including ones running on servers that have not yet been updated. Data migrations that transform millions of rows without batching exhaust memory and lock transactions for minutes. And if anything goes wrong, “just roll back” often means losing data that was written after the migration.
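The batching point deserves emphasis: the safe pattern is one short transaction per slice, driven in a loop from outside the database. A minimal PostgreSQL sketch (table and column names are illustrative):

```sql
-- Run repeatedly from a driver script or scheduler until it updates
-- 0 rows. Each execution is its own short transaction, so no lock is
-- held across the whole table and progress survives interruption.
WITH batch AS (
    SELECT id FROM events
    WHERE processed_at IS NULL
    ORDER BY id
    LIMIT 10000
)
UPDATE events e
SET processed_at = now()
FROM batch
WHERE e.id = batch.id;
```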
## The Prompt
Review the following migration for safety and zero-downtime deployment. Act as a database reliability engineer evaluating a migration before it runs on production.
DATABASE: [PostgreSQL / MySQL / MongoDB]
TABLE SIZE: [e.g., 50M rows in users, 200M rows in events]
DEPLOYMENT: [e.g., rolling deployment, blue/green, single server]
ORM: [e.g., Prisma, Knex, Django, Alembic, raw SQL]
MIGRATION CODE:
[paste migration file(s)]
APPLICATION CODE CHANGES:
[paste any code that changes alongside the migration]
Evaluate across these dimensions:
1. **Backward Compatibility**
- Can the old application code work with the new schema?
- Can the new application code work with the old schema?
- Is there a transition period where both old and new servers coexist?
- Are column/table renames done as add-new → migrate-data → drop-old?
2. **Lock Analysis**
- Will any operation acquire an exclusive table lock?
- On a table with 50M+ rows, how long will the migration take?
- Are ALTER TABLE operations online-safe for this database version?
- Are large data updates batched to avoid long-running transactions?
3. **Rollback Safety**
- Is there a down() / rollback migration?
- Is the rollback safe (does it lose data)?
- Can the rollback run without the application being stopped?
- Are destructive operations (DROP COLUMN, DROP TABLE) deferred to a cleanup migration?
4. **Data Integrity**
- Are default values provided for new NOT NULL columns?
- Are data transformations idempotent (safe to run twice)?
- Are foreign key constraints added after backfill completes?
- Are there CHECK constraints that existing data might violate?
5. **Performance Impact**
- Will the migration cause connection pool exhaustion?
- Are index creations using CONCURRENTLY (PostgreSQL)?
- Is the migration broken into phases that can be paused/resumed?
6. **Deployment Coordination**
- What order: migrate then deploy, or deploy then migrate?
- Are there feature flags controlling access to new schema features?
- Is monitoring in place for migration progress and errors?
For each issue, provide:
- **Step**: Which part of the migration
- **Risk**: What goes wrong in production
- **Severity**: downtime / data-loss / performance / safety
- **Fix**: Safe migration strategy with code
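As a reference point for what a "safe" answer to the constraint questions looks like in PostgreSQL, the add-then-validate pattern splits the long scan away from the lock (constraint, table, and column names are illustrative):

```sql
-- Step 1: Add the FK without validating existing rows; this takes
-- only a brief lock instead of scanning the table.
ALTER TABLE orders
    ADD CONSTRAINT orders_user_id_fkey
    FOREIGN KEY (user_id) REFERENCES users (id) NOT VALID;

-- Step 2: Validate separately. This scans the table but holds a
-- weaker lock that does not block reads or writes.
ALTER TABLE orders VALIDATE CONSTRAINT orders_user_id_fkey;
```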
## Example Output
## Migration Review: 3 critical issues
### Downtime Risk: ALTER TABLE with NOT NULL on 50M rows
Code: `ALTER TABLE users ADD COLUMN phone VARCHAR(20) NOT NULL`
Risk: With no DEFAULT, this fails outright on existing rows. Adding a DEFAULT makes it succeed, but on PostgreSQL versions before 11 it rewrites the entire table under an exclusive lock, blocking reads and writes for 5-15 minutes at 50M rows.
Fix (3-phase safe migration):
```sql
-- Phase 1: Add nullable column (metadata-only change, brief lock)
ALTER TABLE users ADD COLUMN phone VARCHAR(20);

-- Phase 2: Backfill in batches from application code, one short
-- transaction per id range
UPDATE users SET phone = 'unknown' WHERE phone IS NULL AND id BETWEEN $1 AND $2;

-- Phase 3: After backfill completes, enforce NOT NULL
ALTER TABLE users ALTER COLUMN phone SET NOT NULL;
```
### Data Loss: Column Rename Without Transition
Code: `ALTER TABLE orders RENAME COLUMN total TO total_cents`
Risk: Old servers still running reference `total`. All queries fail until restarted.
Fix: Use expand/contract pattern:
```sql
-- Migration 1: Add new column; application dual-writes both columns
ALTER TABLE orders ADD COLUMN total_cents INTEGER;

-- Migration 2: Backfill old data (batch this on large tables)
UPDATE orders SET total_cents = total * 100 WHERE total_cents IS NULL;

-- Migration 3 (after all servers are updated): Drop old column
ALTER TABLE orders DROP COLUMN total;
```
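The example output above promises three critical issues but shows two; a plausible third, in the same format, would cover index creation (the index name is illustrative):

### Write Blocking: CREATE INDEX without CONCURRENTLY
Code: `CREATE INDEX idx_users_phone ON users (phone)`
Risk: A plain CREATE INDEX blocks all writes to the table for the duration of the build, which can be minutes at 50M rows.
Fix:

```sql
-- Build the index without blocking writes. Note that CONCURRENTLY
-- cannot run inside a transaction block, so this needs its own
-- non-transactional migration step.
CREATE INDEX CONCURRENTLY idx_users_phone ON users (phone);

-- If the build fails, it leaves an INVALID index behind:
-- drop it and retry.
-- DROP INDEX idx_users_phone;
```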
## When to Use
Run this on every migration before it touches staging, especially for tables with more than 1M rows. Critical for teams with rolling deployments where old and new code must coexist during deployment. Use it as a migration review checklist that prevents the “we need 30 minutes of downtime” conversation.
## Pro Tips
- Always include table sizes — a migration safe on 1,000 rows can be catastrophic on 10,000,000 rows. Size determines strategy.
- Test on a production-size copy — ask “Generate a test plan that validates this migration on a database with realistic data volume.”
- Phase destructive changes — never rename, drop, and add in the same migration. Separate into add → migrate → drop across multiple deployments.
- Ask for a runbook — follow up with “Generate a step-by-step runbook for executing this migration in production, including rollback triggers.”
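For the runbook follow-up, a couple of monitoring queries are worth having on hand during execution. A PostgreSQL sketch (the backfill predicate is illustrative; `pg_stat_activity` is the standard system view):

```sql
-- Backfill progress: rows still to migrate.
SELECT count(*) AS remaining FROM users WHERE phone IS NULL;

-- Lock contention: sessions currently waiting on locks, with the
-- statement that is blocked.
SELECT pid, state, wait_event, query
FROM pg_stat_activity
WHERE wait_event_type = 'Lock';
```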