47.2. Logical Decoding Concepts #

47.2.1. Logical Decoding
47.2.2. Replication Slots
47.2.3. Replication Slot Synchronization
47.2.4. Output Plugins
47.2.5. Exported Snapshots

47.2.1. Logical Decoding #

Logical decoding is the process of extracting all persistent changes to a database's tables into a coherent, easy-to-understand format which can be interpreted without detailed knowledge of the database's internal state.

In PostgreSQL, logical decoding is implemented by decoding the contents of the write-ahead log, which describe changes on a storage level, into an application-specific form such as a stream of tuples or SQL statements.
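
For example, the text output produced by the test_decoding example module shipped with PostgreSQL can be examined entirely from SQL. This is a minimal sketch (slot and table names are illustrative, and wal_level must be set to logical):

-- Create a slot using the test_decoding example output plugin.
SELECT pg_create_logical_replication_slot('demo_slot', 'test_decoding');

CREATE TABLE demo (id int PRIMARY KEY, val text);
INSERT INTO demo VALUES (1, 'hello');

-- Read the decoded changes as human-readable text, then clean up.
SELECT lsn, xid, data
  FROM pg_logical_slot_get_changes('demo_slot', NULL, NULL);
SELECT pg_drop_replication_slot('demo_slot');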

47.2.2. Replication Slots #

In the context of logical replication, a slot represents a stream of changes that can be replayed to a client in the order they were made on the origin server. Each slot streams a sequence of changes from a single database.

Note

PostgreSQL also has streaming replication slots (see Section 26.2.5), but they are used somewhat differently there.

A replication slot has an identifier that is unique across all databases in a PostgreSQL cluster. Slots persist independently of the connection using them and are crash-safe.
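
Slots can be inspected at any time through the pg_replication_slots view, whether or not a client is currently connected, for example:

-- List all slots in the cluster with their type and current position.
SELECT slot_name, plugin, slot_type, database, active, restart_lsn
  FROM pg_replication_slots;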

A logical slot will emit each change just once in normal operation. The current position of each slot is persisted only at checkpoint, so in the case of a crash the slot might return to an earlier LSN, which will then cause recent changes to be sent again when the server restarts. Logical decoding clients are responsible for avoiding ill effects from handling the same message more than once. Clients may wish to record the last LSN they saw when decoding and skip over any repeated data, or (when using the replication protocol) request that decoding start from that LSN rather than letting the server determine the start point. The Replication Progress Tracking feature is designed for this purpose; refer to replication origins.
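
As an illustrative sketch (the origin name and LSN are hypothetical), a client could persist its progress with the replication origin functions:

-- Record progress under a replication origin created for this consumer.
SELECT pg_replication_origin_create('my_consumer');

-- After fully applying changes up to some LSN, persist that position.
SELECT pg_replication_origin_advance('my_consumer', '0/3003F28');

-- On restart, read the position back and resume decoding after it.
SELECT pg_replication_origin_progress('my_consumer', true);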

Multiple independent slots may exist for a single database. Each slot has its own state, allowing different consumers to receive changes from different points in the database change stream. For most applications, a separate slot will be required for each consumer.

A logical replication slot knows nothing about the state of the receiver(s). It's even possible to have multiple different receivers using the same slot at different times; they'll just get the changes following on from when the last receiver stopped consuming them. Only one receiver may consume changes from a slot at any given time.

A logical replication slot can also be created on a hot standby. To prevent VACUUM from removing required rows from the system catalogs, hot_standby_feedback should be set on the standby. In spite of that, if any required rows get removed, the slot gets invalidated. It's highly recommended to use a physical slot between the primary and the standby. Otherwise, hot_standby_feedback will work but only while the connection is alive (for example a node restart would break it). Then, the primary may delete system catalog rows that could be needed by the logical decoding on the standby (as it does not know about the catalog_xmin on the standby). Existing logical slots on standby also get invalidated if wal_level on the primary is reduced to less than logical. This is done as soon as the standby detects such a change in the WAL stream. It means that, for walsenders that are lagging (if any), some WAL records up to the wal_level parameter change on the primary won't be decoded.
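
A minimal configuration sketch for this setup, assuming a physical slot named standby1_slot (the name is illustrative):

-- On the primary: create the recommended physical slot.
SELECT pg_create_physical_replication_slot('standby1_slot');

-- On the standby: send feedback through that slot, so the primary keeps
-- the catalog rows that logical decoding on the standby still needs.
ALTER SYSTEM SET hot_standby_feedback = on;
ALTER SYSTEM SET primary_slot_name = 'standby1_slot';
SELECT pg_reload_conf();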

Creation of a logical slot requires information about all the currently running transactions. On the primary, this information is available directly, but on a standby, this information has to be obtained from the primary. Thus, slot creation may need to wait for some activity to happen on the primary. If the primary is idle, creating a logical slot on a standby may take noticeable time. This can be sped up by calling the pg_log_standby_snapshot function on the primary.
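
For example, to unblock a pending slot creation on an otherwise idle primary:

-- On the primary: log a snapshot of running transactions immediately,
-- so slot creation on the standby need not wait for one.
SELECT pg_log_standby_snapshot();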

Caution

Replication slots persist across crashes and know nothing about the state of their consumer(s). They will prevent removal of required resources even when there is no connection using them. This consumes storage because neither required WAL nor required rows from the system catalogs can be removed by VACUUM as long as they are required by a replication slot. In extreme cases this could cause the database to shut down to prevent transaction ID wraparound (see Section 24.1.5). So if a slot is no longer required it should be dropped.
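
Unused slots can be found and removed as in this sketch (the slot name is illustrative):

-- Find inactive slots and how much WAL each one is pinning.
SELECT slot_name, active, restart_lsn,
       pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
         AS retained_wal
  FROM pg_replication_slots
 WHERE NOT active;

-- Drop a slot that is no longer required.
SELECT pg_drop_replication_slot('unused_slot');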

47.2.3. Replication Slot Synchronization #

The logical replication slots on the primary can be synchronized to the hot standby by using the failover parameter of pg_create_logical_replication_slot, or by using the failover option of CREATE SUBSCRIPTION during slot creation. Additionally, sync_replication_slots must be enabled on the standby so that the failover slots are periodically synchronized by the slotsync worker. For the synchronization to work, it is mandatory to have a physical replication slot between the primary and the standby (i.e., primary_slot_name should be configured on the standby), and hot_standby_feedback must be enabled on the standby. It is also necessary to specify a valid dbname in the primary_conninfo. It is highly recommended that this physical replication slot be named in the synchronized_standby_slots list on the primary, to prevent the subscriber from consuming changes faster than the hot standby. Even when correctly configured, some latency is expected when sending changes to logical subscribers due to the waiting on slots named in synchronized_standby_slots. When synchronized_standby_slots is utilized, the primary server will not completely shut down until the corresponding standbys, associated with the physical replication slots specified in synchronized_standby_slots, have confirmed receiving the WAL up to the latest flushed position on the primary server.
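
Putting the pieces together, a configuration sketch might look as follows (slot, host, database, and user names are illustrative):

-- On the primary: create a failover-enabled logical slot and, as
-- recommended, name the standby's physical slot so logical consumers
-- cannot overtake the standby.
SELECT pg_create_logical_replication_slot('failover_slot', 'pgoutput',
                                           failover => true);
ALTER SYSTEM SET synchronized_standby_slots = 'standby1_slot';
SELECT pg_reload_conf();

-- On the standby: enable the slotsync worker and its prerequisites;
-- note the dbname in primary_conninfo.
ALTER SYSTEM SET sync_replication_slots = on;
ALTER SYSTEM SET hot_standby_feedback = on;
ALTER SYSTEM SET primary_slot_name = 'standby1_slot';
ALTER SYSTEM SET primary_conninfo = 'host=primary dbname=postgres user=repl';
SELECT pg_reload_conf();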

While enabling sync_replication_slots allows for automatic periodic synchronization of failover slots, they can also be manually synchronized using the pg_sync_replication_slots function on the standby. However, this function is primarily intended for testing and debugging and should be used with caution. Unlike automatic synchronization, it does not include cyclic retries, making it more prone to synchronization failures, particularly during initial sync scenarios where the required WAL files or catalog rows for the slot might have already been removed or are at risk of being removed on the standby. In contrast, automatic synchronization via sync_replication_slots provides continuous slot updates, enabling seamless failover and supporting high availability. Therefore, it is the recommended method for synchronizing slots.
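
For example, on the standby:

-- Trigger one synchronization cycle manually (testing/debugging only).
SELECT pg_sync_replication_slots();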

When slot synchronization is configured as recommended, and the initial synchronization is performed either automatically or manually via pg_sync_replication_slots, the standby can persist the synchronized slot only if the following condition is met: the logical replication slot on the primary must retain only WAL and system catalog rows that are still available on the standby. This ensures data integrity and allows logical replication to continue smoothly after promotion. If the required WAL or catalog rows have already been purged from the standby, the slot will not be persisted to avoid data loss. In such cases, the following log message may appear:

LOG: could not synchronize replication slot "failover_slot"
DETAIL: Synchronization could lead to data loss, because the remote slot needs WAL at LSN 0/3003F28 and catalog xmin 754, but the standby has LSN 0/3003F28 and catalog xmin 756.

If the logical replication slot is actively used by a consumer, no manual intervention is needed; the slot will advance automatically, and synchronization will resume in the next cycle. However, if no consumer is configured, it is advisable to manually advance the slot on the primary using pg_logical_slot_get_changes or pg_logical_slot_get_binary_changes, allowing synchronization to proceed.
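
For instance, for a slot using the pgoutput plugin, the binary variant must be used because pgoutput produces binary output; a sketch with an illustrative publication name:

-- On the primary: consume and discard pending changes so the slot's
-- position advances and synchronization can catch up.
SELECT count(*)
  FROM pg_logical_slot_get_binary_changes('failover_slot', NULL, NULL,
                                          'proto_version', '1',
                                          'publication_names', 'mypub');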

The ability to resume logical replication after failover depends upon the pg_replication_slots.synced value for the synchronized slots on the standby at the time of failover. Only persistent slots whose synced state is true on the standby before failover can be used for logical replication after failover. Temporary synced slots cannot be used for logical decoding; therefore, logical replication for those slots cannot be resumed. For example, if the synchronized slot could not become persistent on the standby due to a disabled subscription, then the subscription cannot be resumed after failover even when it is enabled.
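
The relevant state can be checked on the standby before failover, for example:

-- Only slots that are both synced and persistent (not temporary) here
-- will be usable for logical replication after promotion.
SELECT slot_name, synced, temporary
  FROM pg_replication_slots
 WHERE failover;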

To resume logical replication after failover from the synced logical slots, the subscription's 'conninfo' must be altered to point to the new primary server. This is done using ALTER SUBSCRIPTION ... CONNECTION. It is recommended that subscriptions are first disabled before promoting the standby and are re-enabled after altering the connection string.
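
A sketch of the recommended sequence on the subscriber (the subscription name and connection parameters are illustrative):

-- Before promoting the standby:
ALTER SUBSCRIPTION mysub DISABLE;
-- ... promote the standby ...
ALTER SUBSCRIPTION mysub CONNECTION 'host=new_primary dbname=postgres user=repl';
ALTER SUBSCRIPTION mysub ENABLE;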

Caution

There is a chance that the old primary is up again during the promotion and if subscriptions are not disabled, the logical subscribers may continue to receive data from the old primary server even after promotion until the connection string is altered. This might result in data inconsistency issues, preventing the logical subscribers from being able to continue replication from the new primary server.

47.2.4. Output Plugins #

Output plugins transform the data from the write-ahead log's internal representation into the format the consumer of a replication slot desires.

47.2.5. Exported Snapshots #

When a new replication slot is created using the streaming replication interface (see CREATE_REPLICATION_SLOT), a snapshot is exported (see Section 9.28.5), which will show exactly the state of the database after which all changes will be included in the change stream. This can be used to create a new replica by using SET TRANSACTION SNAPSHOT to read the state of the database at the moment the slot was created. This transaction can then be used to dump the database's state at that point in time, which afterwards can be updated using the slot's contents without losing any changes.
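
A sketch of this workflow (the slot name and snapshot identifier are illustrative; CREATE_REPLICATION_SLOT runs on a replication-protocol connection):

-- On a replication connection: create the slot and export a snapshot.
CREATE_REPLICATION_SLOT sync_slot LOGICAL pgoutput (SNAPSHOT 'export');
-- The command returns a snapshot name, e.g. 00000003-0000001B-1.

-- In an ordinary session: read the database exactly as of slot creation.
BEGIN TRANSACTION ISOLATION LEVEL REPEATABLE READ;
SET TRANSACTION SNAPSHOT '00000003-0000001B-1';
-- ... copy the initial data here ...
COMMIT;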

Applications that do not require snapshot export may suppress it with the SNAPSHOT 'nothing' option.