# Incremental SQL Backup System Using PostgreSQL Logical Replication

**PostgreSQL Version**: This design is based on PostgreSQL 18.0 documentation. While most features (logical replication, event triggers, `pg_recvlogical`) are available in earlier versions (PostgreSQL 10+), verify specific parameter availability (e.g., `max_slot_wal_keep_size` requires PostgreSQL 13+) for your target version.

## Executive Summary

This document details the design for a PostgreSQL backup system that produces human-readable, plain SQL incremental backups using logical replication. The system creates backups that remain readable and restorable for 10+ years while supporting online operation and crash safety.

**Design Decision**: Use `pg_recvlogical` with the `decoder_raw` plugin for DML capture, combined with event triggers using `pg_logical_emit_message()` for DDL tracking and periodic `pg_dumpall --globals-only` for shared objects.

**Why This Works**:

- **Built-in tooling handles complexity**: `pg_recvlogical` provides streaming infrastructure, crash recovery, and position tracking
- **No transformation layer needed**: `decoder_raw` produces production-ready SQL directly
- **Complete coverage**: Event triggers + `pg_logical_emit_message()` + `pg_dumpall --globals-only` capture all DDL at the correct chronological positions
- **Long-term readability**: Plain SQL format that can be executed years later
- **Correct DDL/DML ordering**: DDL messages appear in the replication stream at the exact time of execution

**Key Requirements**:

1. **DDL tracking** - Event triggers emit DDL via `pg_logical_emit_message()`; `pg_dumpall --globals-only` handles shared objects
2. **Replica identity configuration** - All tables need proper configuration for UPDATE/DELETE
3. **Aggressive monitoring** - Replication slots must be monitored to prevent operational issues
4. **decoder_raw extension** - Third-party plugin must be extended with TRUNCATE support and a `message_cb` callback

## Architecture Overview

### High-Level Design

```
┌─────────────────────────────────────────────────────────────┐
│                     PostgreSQL Database                     │
│                                                             │
│  ┌────────────────┐        ┌──────────────────┐             │
│  │ Regular Tables │───────▶│ WAL (Write-Ahead │             │
│  │ (DML Changes)  │        │ Log)             │             │
│  └────────────────┘        └──────────────────┘             │
│                                     │                       │
│                                     ▼                       │
│                        ┌─────────────────────────┐          │
│                        │ Logical Decoding Process│          │
│                        │ (decoder_raw plugin)    │          │
│                        └─────────────────────────┘          │
│                                     │                       │
└─────────────────────────────────────┼───────────────────────┘
                                      │
                                      ▼
                      ┌─────────────────────────────┐
                      │ Replication Slot            │
                      │ (Tracks position, durable)  │
                      └─────────────────────────────┘
                                      │
                                      ▼
                      ┌─────────────────────────────┐
                      │ pg_recvlogical Tool         │
                      │ (Built-in PostgreSQL util)  │
                      └─────────────────────────────┘
                                      │
                      ┌───────────────┴──────────────┐
                      ▼                              ▼
           ┌─────────────────────┐       ┌─────────────────────┐
           │ Incremental Files   │       │ Full pg_dump        │
           │ (SQL Changes)       │       │ (Periodic)          │
           │ - 2024-01-01.sql    │       │ - base-2024-01.sql  │
           │ - 2024-01-02.sql    │       │ - base-2024-02.sql  │
           │ - ...               │       │ - ...               │
           └─────────────────────┘       └─────────────────────┘
```

### Core Components

1. **Logical Replication Slot**: Durable position tracker in PostgreSQL
2. **decoder_raw Plugin**: Transforms binary WAL into executable SQL
3. **pg_recvlogical**: Built-in PostgreSQL tool that streams logical decoding output
4. **Base Backup System**: Regular full `pg_dump` backups with `--snapshot` for consistency
5. **Schema Tracking System**: Event triggers + `pg_dumpall --globals-only` for DDL changes

## How It Works

### DML Capture via Logical Replication

PostgreSQL's logical replication decodes the Write-Ahead Log (WAL) into logical changes.
The `decoder_raw` plugin outputs these directly as executable SQL:

```sql
BEGIN;
INSERT INTO public.users (id, name, email) VALUES (1, 'Alice', 'alice@example.com');
UPDATE public.users SET name = 'Alice Smith' WHERE id = 1;
DELETE FROM public.orders WHERE id = 42;
COMMIT;
```

**Key Properties**:

- **Crash-safe**: Replication slots persist position across crashes (positions are persisted at checkpoint intervals; after a crash, the slot may return to an earlier LSN, causing recent changes to be replayed)
- **Consistent**: Transaction boundaries are preserved
- **Online**: Runs without blocking database operations
- **Idempotent positioning**: Can restart from the last known position (clients are responsible for handling duplicate messages)

### DDL Capture via Event Triggers and Logical Messages

Logical replication does **not** capture DDL (CREATE TABLE, ALTER TABLE, etc.). We solve this by emitting DDL commands directly into the logical replication stream using PostgreSQL's `pg_logical_emit_message()` function:

```sql
-- Create event trigger function that emits DDL into the replication stream
CREATE OR REPLACE FUNCTION emit_ddl_to_stream() RETURNS event_trigger AS $$
BEGIN
    -- Emit the DDL command directly into the logical replication stream.
    -- The 'true' argument makes the message transactional (part of the
    -- current transaction); the 'ddl' prefix lets the output plugin
    -- identify these messages.
    PERFORM pg_logical_emit_message(
        true,                   -- transactional
        'ddl',                  -- prefix for identification
        current_query()::text   -- the actual DDL SQL command
    );
END;
$$ LANGUAGE plpgsql;

-- Register the event trigger
CREATE EVENT TRIGGER emit_ddl_to_stream ON ddl_command_end
    EXECUTE FUNCTION emit_ddl_to_stream();
```

**How it works**:

1. Event trigger fires on all DDL commands
2. `pg_logical_emit_message()` writes the DDL into the WAL as a logical decoding message
3. The message appears in the replication stream at the exact chronological position relative to DML
4. The output plugin's `message_cb` callback outputs the DDL as executable SQL
5. Restore is simple: just execute the incremental backup file sequentially

**Example incremental backup output**:

```sql
BEGIN;
INSERT INTO public.users (id, name, email) VALUES (1, 'Alice', 'alice@example.com');
COMMIT;

BEGIN;
-- DDL message appears at exact time of execution
ALTER TABLE public.users DROP COLUMN email;
COMMIT;

BEGIN;
-- Subsequent DML only references remaining columns
INSERT INTO public.users (id, name) VALUES (2, 'Bob');
COMMIT;
```

**Key Properties**:

- ✅ **Perfect chronological ordering**: DDL appears exactly when it was executed
- ✅ **Transactional integrity**: The DDL message commits with its transaction
- ✅ **Simple restore**: Execute the backup file sequentially with `psql -f`
- ✅ **PostgreSQL built-in**: `pg_logical_emit_message()` has been available since PostgreSQL 9.6

**Limitations**:

- Event triggers don't fire for shared objects: databases, roles, tablespaces, parameter privileges, and ALTER SYSTEM commands
- Solution: Use periodic `pg_dumpall --globals-only` to capture shared objects

## Implementation Components

### 1. Initial Setup Script

**Purpose**: Bootstrap the backup system

**Tasks**:

- Install the decoder_raw plugin (compile from source)
- Create a logical replication slot with snapshot export (see the detailed procedure below)
- Capture the exported snapshot identifier
- Take the initial base backup using the exported snapshot
- Set up event triggers for DDL capture
- Create the initial `pg_dumpall --globals-only` backup
- Configure `REPLICA IDENTITY` on tables without primary keys
- Document the PostgreSQL version and installed extensions

**Critical Detail - Initial Snapshot Consistency**: From PostgreSQL documentation (Section 47.2.5):

> When a new replication slot is created using the streaming replication interface, a snapshot is exported which will show exactly the state of the database after which all changes will be included in the change stream.

This ensures the base backup and incremental stream are perfectly aligned with no gaps or overlaps.
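A minimal shell sketch of this slot-plus-snapshot alignment, assuming a reachable cluster and an already-installed `decoder_raw` plugin; the slot and file names (`backup_slot`, `base.sql`, `globals.sql`) are illustrative placeholders, not part of the design:

```shell
# Parse the exported snapshot id from slot-creation output, which reports
# a line of the form "snapshot: 00000003-00000001-1"
extract_snapshot() {
  sed -n 's/.*snapshot: \([0-9A-Fa-f-]*\).*/\1/p'
}

aligned_base_backup() {
  local slot=backup_slot
  local out snap
  # Create the slot; keep stderr, where the snapshot id is reported
  out=$(pg_recvlogical --slot="$slot" --plugin=decoder_raw \
          --create-slot --if-not-exists 2>&1)
  snap=$(printf '%s\n' "$out" | extract_snapshot)
  if [ -z "$snap" ]; then
    echo "could not capture exported snapshot id" >&2
    return 1
  fi
  # The snapshot is only valid while the creating session remains open,
  # so the base backup must follow immediately
  pg_dump --snapshot="$snap" --file=base.sql "$PGDATABASE"
  pg_dumpall --globals-only --file=globals.sql
}
```

The parsing is factored into its own function so the fragile part (capturing the snapshot id) can be tested without a live cluster.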
**Step-by-Step Setup Procedure**:

1. **Create replication slot with snapshot export**:
   - Use `pg_recvlogical --create-slot` with the `--if-not-exists` flag for idempotency
   - The command outputs a snapshot identifier in the format `snapshot: 00000003-00000001-1`
   - Capture this identifier from the command output (it appears on stderr with a "snapshot:" prefix)
2. **Extract and validate the snapshot identifier**:
   - Parse the snapshot identifier from the slot-creation output
   - Verify the identifier was successfully captured before proceeding
   - The snapshot is only valid during the creating session/connection
3. **Take a synchronized base backup**:
   - Use `pg_dump --snapshot=<identifier> --file=backup.sql` with the captured snapshot
   - This guarantees perfect alignment between the base backup and the incremental stream
   - Use `--compress=zstd:9` for large databases to reduce storage requirements
4. **Capture globals and metadata**:
   - Run `pg_dumpall --globals-only` for roles and tablespaces
   - Document the PostgreSQL version, extensions, and encoding in a metadata file

**Critical Considerations**:

- **Snapshot validity**: The exported snapshot is only valid until the session that created it disconnects; use it immediately or keep the connection open
- **Idempotency**: The `--if-not-exists` flag allows safe re-execution of setup scripts
- **Timing**: The entire sequence (create slot → capture snapshot → run pg_dump) must complete while the snapshot remains valid

### 2. Incremental Backup Collection

**Tool**: Built-in `pg_recvlogical` utility with the `decoder_raw` plugin

**Key Configuration**: Run `pg_recvlogical --start` with the following important parameters:

- **Status interval** (`--status-interval`): Controls how often the client reports its position back to the server (recommended: 10 seconds)
  - Lower values (5-10s) allow the server to advance the slot position and free WAL more quickly
  - Too high a value risks slot lag and WAL accumulation
  - Balance slot health against network overhead
- **Fsync interval** (`--fsync-interval`): Controls disk write safety (recommended: 10 seconds)
  - Frequency of forced disk synchronization for crash safety on the backup client
  - 0 disables fsync (faster, but risks data loss if the client crashes)
  - Higher values improve performance but widen the window of potential loss
- **Plugin options** (`--option`): Pass `include_transaction=on` to decoder_raw
  - Includes BEGIN/COMMIT statements in the output
  - Essential for maintaining transaction boundaries during restore

**What pg_recvlogical provides**:

- Streams decoded changes continuously from the replication slot
- Handles connection failures and automatic reconnection
- Tracks LSN positions with status updates to the server
- Supports file rotation via the SIGHUP signal (for log rotation without stopping the stream)

**What decoder_raw provides**:

- ✅ Schema-qualified table names: `INSERT INTO public.users (...)`
- ✅ Proper column name quoting with `quote_identifier()`
- ✅ Transaction boundaries via the `include_transaction=on` option (BEGIN/COMMIT)
- ✅ Intelligent replica identity handling (DEFAULT, INDEX, FULL, NOTHING)
- ✅ Comprehensive data type support (booleans, NaN, Infinity, NULL, strings, bit strings)
- ✅ TOAST optimization (skips unchanged TOAST columns in UPDATEs)
- ✅ Production-quality memory management

**What decoder_raw needs (via extension)**:

- ⚠️ A **`message_cb` callback** to handle DDL messages from `pg_logical_emit_message()`
- ⚠️ A **`truncate_cb` callback** to handle TRUNCATE operations

**Custom wrapper script tasks**:

- File rotation and timestamping
- Coordination with the monitoring system
- Metadata file generation (PostgreSQL version, extensions, encoding, collation)

### 3. Periodic Full Backup Script

**Purpose**: Take regular full backups as restore points

**Tasks**:

- Execute `pg_dump --file=backup.sql` to create a full database backup in plain SQL format
- Execute `pg_dumpall --globals-only` to capture shared objects (databases, roles, tablespaces)
- Compress backups to save space
- Implement a retention policy (delete old backups)

**Backup Approach**:

- Use `pg_dump --file=backup.sql` for plain SQL output
- Maintains a human-readable format consistent with the design goals
- Single connection to the database
- Universal compatibility across PostgreSQL versions

**Compression**:

- Use `pg_dump --compress=` for built-in compression (e.g., `--compress=zstd:9`)
- Or compress after creation with external tools (gzip, zstd)
- Compression significantly reduces storage requirements while maintaining recoverability

### 4. Restore Script

**Purpose**: Restore the database to the latest captured state from base + incremental backups

**Restore Process**:

1. Locate the most recent full backup
2. Find all incremental backups since that full backup
3. Create a new target database
4. Restore the `pg_dumpall --globals-only` output (shared objects: roles, tablespaces)
5. Restore the full `pg_dump` backup using `psql -f backup.sql`
6. Apply all incremental SQL backups in chronological order using `psql -f incremental-*.sql`
   - DDL and DML are already interleaved in correct chronological order
   - No separate DDL extraction or ordering step is needed
7. Synchronize sequence values using `setval()` with max values from the tables
8. Verify data integrity (row counts, application smoke tests)

**Sequence Synchronization**:

- After applying all changes, sequences may be behind the actual max values
- Query `information_schema.sequences` to find all sequences
- For each sequence: `SELECT setval('sequence_name', COALESCE(MAX(id_column), 1)) FROM table`
- Can be automated by generating setval queries from schema metadata

**Handling Duplicate Transactions**: After a PostgreSQL crash, the replication slot may return to an earlier LSN, causing some transactions to be streamed again. The restore process handles this naturally through idempotent operations:

- Most SQL operations are idempotent or fail safely:
  - INSERT fails on a duplicate primary key (acceptable during restore)
  - UPDATE reapplies the same values (idempotent)
  - DELETE succeeds or reports that no row was found (acceptable)
- Transaction boundaries (BEGIN/COMMIT from `include_transaction=on`) preserve consistency
- Simply apply all incremental files in order; duplicates are handled correctly
- No additional LSN tracking infrastructure is required

### 5. Monitoring and Health Check Script

**Purpose**: Prevent operational issues from inactive replication slots

**Critical Metrics**:

```sql
-- Check replication slot health
SELECT
    slot_name,
    slot_type,
    database,
    active,
    pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn)) AS replication_lag,
    pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), confirmed_flush_lsn)) AS confirmed_lag
FROM pg_replication_slots
WHERE slot_type = 'logical';
```

**Alerting**:

- **Critical alert** when `restart_lsn` falls more than 1GB behind
- **Emergency alert** when slot lag exceeds 10GB or slot age exceeds 24 hours
- A documented **emergency procedure** to drop the slot if it threatens database availability

**Available Tools**:

- **Prometheus + postgres_exporter + Grafana**: Open-source monitoring stack
- **pgDash** (https://pgdash.io/): Commercial PostgreSQL monitoring
- **check_postgres**: Nagios/Icinga/Zabbix integration
- **Built-in views**: `pg_replication_slots`, `pg_stat_replication_slots`

## decoder_raw Plugin Details

**Source**: https://github.com/michaelpq/pg_plugins/tree/main/decoder_raw
**License**: PostgreSQL License (permissive, production-ready)
**Compatibility**: PostgreSQL 9.4+

**Installation**:

```bash
# Install PostgreSQL development headers
apt-get install postgresql-server-dev-XX   # Debian/Ubuntu
yum install postgresql-devel               # RHEL/CentOS

# Clone and compile
git clone https://github.com/michaelpq/pg_plugins.git
cd pg_plugins/decoder_raw
make
sudo make install

# Verify installation
ls $(pg_config --pkglibdir)/decoder_raw.so
```

**Why decoder_raw is essential**:

- Eliminates the entire SQL transformation layer
- Handles all data type escaping correctly (strings, NULL, NaN, Infinity, booleans)
- Produces production-ready SQL that can be executed with `psql -f changes.sql`
- Mature codebase with a comprehensive test suite
- Clean code structure makes it straightforward to extend with additional callbacks

**Required Extensions to decoder_raw**:

The stock `decoder_raw` plugin does not implement two optional callbacks required for this design:

**1. `message_cb` Callback for DDL Capture**:

- Required to handle messages from `pg_logical_emit_message()` (documented in Section 47.6.4.8)
- Filter messages by prefix: only output messages with the prefix `'ddl'`
- Output the message content as executable SQL (it is already valid DDL)
- Example output: `ALTER TABLE public.users DROP COLUMN email;`
- Straightforward implementation (~30-50 lines)

**2. `truncate_cb` Callback for TRUNCATE Operations**:

- The stock plugin silently ignores TRUNCATE operations during logical decoding
- Event triggers for `ddl_command_end` only fire for DDL commands (CREATE, ALTER, DROP, etc.)
- TRUNCATE is a DML operation, not DDL, and does NOT fire event triggers
- Must be captured via logical replication using `truncate_cb` (documented in Section 47.6.4.6)
- The callback receives an array of relations, since TRUNCATE can affect multiple tables via foreign keys
- Should output: `TRUNCATE TABLE schema.table1, schema.table2;`
- Straightforward implementation following existing patterns in decoder_raw.c (~50-100 lines)
- Alternative: Enforce an organizational policy against using TRUNCATE

## Key Challenges and Solutions

### 1. Replica Identity Required for UPDATE/DELETE

**Problem**: Tables need a replica identity for UPDATE/DELETE operations. From PostgreSQL documentation (Section 29.1.1):

> Tables with a replica identity defined as `NOTHING`, `DEFAULT` without a primary key, or `USING INDEX` with a dropped index, **cannot support UPDATE or DELETE operations**. **Attempting such operations will result in an error on the publisher.**

This means UPDATE/DELETE will **fail on the source database**, not just during restore!
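Affected tables can be identified up front with a catalog query. The helper below is a hypothetical pre-flight check, not part of the design; it emits the query text so it can be reviewed or piped into `psql`:

```shell
# Emit a query listing user tables that cannot support UPDATE/DELETE under
# logical replication: replica identity DEFAULT with no primary key, or NOTHING.
# (Tables with REPLICA IDENTITY FULL or a valid identity index are excluded.)
replident_check_sql() {
  cat <<'SQL'
SELECT c.oid::regclass AS "table"
FROM pg_class c
WHERE c.relkind = 'r'
  AND c.relnamespace::regnamespace::text NOT IN ('pg_catalog', 'information_schema')
  AND c.relreplident IN ('d', 'n')   -- 'd' = DEFAULT, 'n' = NOTHING
  AND NOT EXISTS (
    SELECT 1 FROM pg_index i
    WHERE i.indrelid = c.oid AND i.indisprimary
  );
SQL
}

# Usage against a live cluster:
#   replident_check_sql | psql -X -A -t "$PGDATABASE"
```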
**Solution**: Ensure every table has one of:

- A primary key (automatic replica identity)
- A unique index configured via `REPLICA IDENTITY USING INDEX index_name`
- An explicit `REPLICA IDENTITY FULL` setting (inefficient, last resort)

**Example**:

```sql
-- Table without a primary key will error on UPDATE/DELETE
CREATE TABLE logs (timestamp TIMESTAMPTZ, message TEXT);

-- Fix: Set replica identity to FULL
ALTER TABLE logs REPLICA IDENTITY FULL;
```

### 2. Replication Slots Prevent WAL Cleanup

**Problem**: Inactive replication slots prevent WAL cleanup, leading to:

1. Disk filling up (WAL files are not cleaned)
2. Table bloat (VACUUM cannot clean old row versions)
3. **Database shutdown** (transaction ID wraparound)

From PostgreSQL documentation (Section 47.2.2):

> In extreme cases this could cause the database to shut down to prevent transaction ID wraparound.

**Solution**:

- **Monitor slot lag aggressively** (see the monitoring section)
- Set the `max_slot_wal_keep_size` parameter (PostgreSQL 13+) to limit WAL retention
- Have a documented emergency procedure to drop the slot if needed
- Consider `pg_replication_slot_advance()` to skip ahead (loses backup coverage)

### 3. Sequences Are Not Replicated

**Problem**: Sequence values are not captured by logical replication.

**Solution**:

- Use `pg_dump --sequence-data` (enabled by default) in periodic full dumps
- After restore, synchronize sequences:

```sql
SELECT setval('users_id_seq', (SELECT MAX(id) FROM users));
```

### 4. Large Objects Are Not Replicated

**Problem**: PostgreSQL large objects are not captured by logical replication.

**Solution**:

- **Preferred**: Use `BYTEA` columns instead (these ARE replicated)
- **Alternative**: Use `pg_dump --large-objects` in periodic full backups
- Note: Incremental changes to large objects are NOT captured between full backups

### 5. Crash Recovery and Duplicate Handling

**Problem**: After a database crash, the slot position may roll back, causing duplicate changes. From PostgreSQL documentation (Section 47.2.2):

> The current position of each slot is persisted only at checkpoint, so in the case of a crash the slot might return to an earlier LSN, which will then cause recent changes to be sent again when the server restarts.

**Solution**: The restore process handles duplicates naturally through idempotent operations. Per PostgreSQL documentation (Section 47.2.2): "Logical decoding clients are responsible for avoiding ill effects from handling the same message more than once."

**Implementation**:

- Most SQL operations in the backup files are naturally idempotent:
  - INSERT will fail on a duplicate primary key (acceptable during restore)
  - UPDATE will reapply the same values (idempotent)
  - DELETE will succeed or report that no row was found (acceptable)
- Transaction boundaries (BEGIN/COMMIT from `include_transaction=on`) ensure consistency
- Simply apply all incremental files in chronological order
- No additional LSN tracking infrastructure is required
- See the Restore Script section (Section 4) for implementation details

**Testing**:

- Test crash scenarios with `pg_ctl stop -m immediate` to verify duplicate handling
- Monitor `confirmed_flush_lsn` lag during normal operations (see the Monitoring section)

### 6. Long-Term Readability

**Challenges**:

- PostgreSQL syntax may change between major versions (rare)
- Extension dependencies may not exist on future systems
- Encoding/collation definitions may change

**Solution**: Include a metadata file with each backup:

- PostgreSQL version (full version string)
- All installed extension names and versions
- Database encoding
- Locale and collation settings
- Custom data types and enums

Periodically test restoring old backups on current PostgreSQL versions.
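The sequence synchronization and duplicate handling described in this section can be sketched as a single restore driver. This is a sketch under stated assumptions, not part of the original design: backups sit in the current directory under the illustrative names `globals.sql`, `base.sql`, and `incremental-*.sql`, and `psql` can reach the target cluster:

```shell
# Emit one setval() statement per owned sequence, derived from pg_depend
# ownership links (deptype 'a' ties a SERIAL/IDENTITY sequence to its column)
gen_setval_sql() {
  cat <<'SQL'
SELECT format('SELECT setval(%L, COALESCE((SELECT MAX(%I) FROM %s), 1));',
              s.oid::regclass, a.attname, d.refobjid::regclass)
FROM pg_class s
JOIN pg_depend d ON d.classid = 'pg_class'::regclass
               AND d.objid = s.oid
               AND d.refclassid = 'pg_class'::regclass
               AND d.deptype = 'a'
JOIN pg_attribute a ON a.attrelid = d.refobjid AND a.attnum = d.refobjsubid
WHERE s.relkind = 'S';
SQL
}

restore_all() {
  local db=$1
  psql -X -d postgres -f globals.sql    # shared objects (roles, tablespaces)
  psql -X -d "$db" -f base.sql          # most recent full backup
  # Apply incrementals in chronological (here: lexicographic) order.
  # A duplicate transaction replayed after a slot rollback fails on its
  # first INSERT, aborts only its own BEGIN/COMMIT block, and psql
  # continues with the next transaction.
  for f in incremental-*.sql; do
    psql -X -d "$db" -f "$f"
  done
  # Generate the setval() statements, then execute them
  gen_setval_sql | psql -X -A -t -d "$db" | psql -X -d "$db"
}
```

Note that the driver deliberately does not set `ON_ERROR_STOP`, since expected duplicate-key failures must not abort the run.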
## Prerequisites and Configuration

### PostgreSQL Configuration

```ini
# postgresql.conf

# Required: Set WAL level to logical
wal_level = logical

# Required: Allow at least one replication slot
max_replication_slots = 10

# Recommended: Allow replication connections
max_wal_senders = 10

# Recommended: Keep more WAL for safety
wal_keep_size = 1GB

# Recommended: Limit WAL retention for safety (PostgreSQL 13+)
max_slot_wal_keep_size = 10GB

# Optional: Tune checkpoint frequency to persist slot positions more often
checkpoint_timeout = 5min
```

### Client Requirements

- PostgreSQL client utilities installed (`pg_recvlogical`, `pg_dump`, `pg_dumpall`)
- Superuser or a role with the `REPLICATION` privilege
- Permission to create replication slots
- decoder_raw plugin compiled and installed

## Operational Procedures

### Backup Schedule

**Recommended**:

- **Incremental**: Continuously streaming via `pg_recvlogical`
- **Full backup**: Daily at 2 AM
- **Globals backup**: Daily (`pg_dumpall --globals-only`)
- **Metadata export**: Daily (PostgreSQL version, extensions, encoding)

### Retention Policy

- **Incremental backups**: Keep 7 days
- **Full backups**: Keep 30 days, then one per month for 1 year
- **Monitor disk space**: Alert if the backup directory exceeds 80% capacity

### Disaster Recovery Runbook

1. **Stop the application** to prevent new writes during restore
2. **Create a new database** (don't overwrite production)
3. **Restore shared objects**: `psql -f globals-YYYY-MM-DD.sql`
4. **Restore the full backup**: `psql dbname < base-YYYY-MM-DD.sql`
5. **Apply all incremental backups**: `for f in incremental-*.sql; do psql dbname < "$f"; done`
   - DDL and DML are already interleaved in correct chronological order
6. **Sync sequences**: Run `setval()` for all sequences to match table max values
7. **Verify data integrity**: Check row counts, run application smoke tests
8. **Test the application** against the restored database
9. **Switch over** the application to the restored database

**See Section 4 (Restore Script)** for detailed procedures, including sequence synchronization and duplicate transaction handling.

## Testing Strategy

### 1. Basic Functionality Test

```sql
-- Create test database and connect
CREATE DATABASE backup_test;
\c backup_test

-- Create test table
CREATE TABLE test_users (
    id SERIAL PRIMARY KEY,
    name TEXT,
    created_at TIMESTAMP DEFAULT now()
);

-- Generate data and schema changes to test DDL/DML ordering
INSERT INTO test_users (name) VALUES ('Alice'), ('Bob'), ('Charlie');
UPDATE test_users SET name = 'Alice Smith' WHERE id = 1;
DELETE FROM test_users WHERE id = 3;

-- Add column - DDL message should appear in the stream here
ALTER TABLE test_users ADD COLUMN email TEXT;

-- Use the new column - should work because the DDL already executed
UPDATE test_users SET email = 'alice@example.com' WHERE id = 1;

-- Drop column - DDL message should appear in the stream here
ALTER TABLE test_users DROP COLUMN created_at;

-- Subsequent inserts should work without the dropped column
INSERT INTO test_users (name, email) VALUES ('David', 'david@example.com');

-- Restore and verify:
-- 1. All operations should replay successfully
-- 2. DML before the column add should not reference the email column
-- 3. DML after the column add should reference the email column
-- 4. DML after the column drop should not reference the created_at column
```

### 2. Crash Recovery Test

```bash
# Start collecting incrementals
# Generate load with pgbench
# Simulate crash: pg_ctl stop -m immediate
# Restart PostgreSQL
# Verify no data loss and that duplicates are handled correctly
# Restore and verify
```

### 3. Long-Term Storage Test

```bash
# Create backup
# Store backup files
# Wait (or simulate) years passing
# Restore on a modern PostgreSQL version
# Verify the SQL is still readable and executable
```

### 4. Replica Identity Test

```sql
-- Create table without a primary key
CREATE TABLE test_no_pk (col1 TEXT, col2 INT);

-- Attempt UPDATE (should fail with a replica identity error)
UPDATE test_no_pk SET col2 = 5 WHERE col1 = 'test';

-- Fix with REPLICA IDENTITY FULL
ALTER TABLE test_no_pk REPLICA IDENTITY FULL;

-- Retry UPDATE (should succeed)
```

### 5. TRUNCATE Handling Test

```sql
-- Create test table
CREATE TABLE test_truncate (id INT);
INSERT INTO test_truncate VALUES (1), (2), (3);

-- Perform TRUNCATE
TRUNCATE test_truncate;

-- Verify: Check whether the decoder_raw incremental backup captured the TRUNCATE
-- Expected: SHOULD be captured by the extended decoder_raw with truncate_cb
-- Look for: TRUNCATE TABLE public.test_truncate;
-- Note: Event triggers do NOT capture TRUNCATE (it's DML, not DDL)

-- Test TRUNCATE with multiple tables (foreign key cascade)
CREATE TABLE parent_table (id INT PRIMARY KEY);
CREATE TABLE child_table (parent_id INT REFERENCES parent_table(id));
INSERT INTO parent_table VALUES (1), (2);
INSERT INTO child_table VALUES (1), (2);

-- Truncating both tables should capture both
TRUNCATE parent_table, child_table;
-- Expected output: TRUNCATE TABLE public.parent_table, public.child_table;
```

## Performance Considerations

**Write Amplification**:

- WAL must be written (normal)
- WAL must be decoded into logical format (additional CPU)
- Event triggers fire on every DDL operation (minimal overhead)

**Disk I/O**:

- Additional WAL volume retained by replication slots
- More frequent checkpoint I/O if `checkpoint_timeout` is tuned down

**Recommendations**:

- Benchmark overhead on a test system with a production-like workload
- Monitor CPU usage of WAL sender processes
- Monitor disk usage for WAL and backup directories

## Next Steps for Proof of Concept

1. **Extend and install decoder_raw**
   - Clone the pg_plugins repository
   - Install PostgreSQL development headers
   - Add a `message_cb` callback to decoder_raw.c for DDL messages
   - Add a `truncate_cb` callback to decoder_raw.c for TRUNCATE operations
   - Compile and install the modified decoder_raw
   - Test both DDL message handling and TRUNCATE support
2. **Initial Setup**
   - Create a replication slot with the extended decoder_raw
   - Set up event triggers using `pg_logical_emit_message()` for DDL capture
   - Take the initial synchronized base backup
3. **Streaming Collection**
   - Test `pg_recvlogical` with the extended decoder_raw
   - Verify the output is immediately executable SQL
   - Test with various data types and operations
4. **DDL Handling**
   - Test that the event trigger emits DDL messages correctly via `pg_logical_emit_message()`
   - Verify DDL appears in the incremental backup stream at the correct chronological position
   - Test DDL/DML interleaving (e.g., add column, insert with new column, drop column, insert without column)
   - Test that `pg_dumpall --globals-only` captures shared objects
   - Verify that a simple sequential restore works correctly
5. **Monitoring Setup**
   - Configure replication slot monitoring
   - Set up critical alerting
   - Document emergency procedures
6. **Restore Process**
   - Build restore scripts
   - Test point-in-time recovery
   - Verify sequence synchronization
7. **Crash Recovery**
   - Test duplicate handling with `pg_ctl stop -m immediate`
   - Verify idempotent restore behavior
8. **Performance Testing**
   - Measure storage overhead
   - Measure CPU overhead
   - Benchmark restore time

## References

### PostgreSQL Documentation

- Chapter 25 - Backup and Restore
- Chapter 29 - Logical Replication
- Chapter 47 - Logical Decoding
- Section 29.8 - Logical Replication Restrictions
- Section 47.6.4.8 - Generic Message Callback (`message_cb`)
- Section 9.28.6 - `pg_logical_emit_message()` function
- `pg_recvlogical` reference page
- `pg_dump` reference page
- `pg_dumpall` reference page

### Essential Tools

- **decoder_raw**: SQL output plugin for logical decoding
  - Source: https://github.com/michaelpq/pg_plugins/tree/main/decoder_raw
  - **CRITICAL COMPONENT**: Eliminates the output transformation layer
  - License: PostgreSQL License (production-ready)
  - Compatibility: PostgreSQL 9.4+

### Monitoring Tools

- **Prometheus + postgres_exporter + Grafana**: Open-source monitoring stack
- **pgDash**: PostgreSQL monitoring - https://pgdash.io/
- **check_postgres**: Nagios/Icinga/Zabbix integration
- **pg_stat_replication_slots**: Built-in PostgreSQL monitoring view