28.5. WAL Configuration
There are several WAL-related configuration parameters that affect
database performance. This section explains their use. Consult
Chapter 19 for general information about setting server configuration
parameters.
Checkpoints are points in the sequence of transactions at which it is
guaranteed that the heap and index data files have been updated with
all information written before that checkpoint. At checkpoint time, all
dirty data pages are flushed to disk and a special checkpoint record is
written to the WAL file. (The change records were previously flushed to
the WAL files.) In the event of a crash, the crash recovery procedure
looks at the latest checkpoint record to determine the point in the WAL
(known as the redo record) from which it should start the REDO
operation. Any changes made to data files before that point are
guaranteed to be already on disk. Hence, after a checkpoint, WAL
segments preceding the one containing the redo record are no longer
needed and can be recycled or removed. (When WAL archiving is being
done, the WAL segments must be archived before being recycled or
removed.)
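To see where recovery would begin, the pg_control_checkpoint()
function (available since PostgreSQL 9.6) reports the most recent
checkpoint recorded in pg_control, including its redo location; a
minimal query:

    -- Inspect the latest checkpoint; redo_lsn is the point in the WAL
    -- from which crash recovery would start replaying.
    SELECT checkpoint_lsn, redo_lsn, checkpoint_time
    FROM pg_control_checkpoint();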
The checkpoint requirement of flushing all dirty data pages to disk can
cause a significant I/O load. For this reason, checkpoint activity is
throttled so that I/O begins at checkpoint start and completes before
the next checkpoint is due to start; this minimizes performance
degradation during checkpoints.
The server's checkpointer process automatically performs a checkpoint
every so often. A checkpoint is begun every checkpoint_timeout seconds,
or if max_wal_size is about to be exceeded, whichever comes first. The
default settings are 5 minutes and 1 GB, respectively. If no WAL has
been written since the previous checkpoint, new checkpoints will be
skipped even if checkpoint_timeout has passed. (If WAL archiving is
being used and you want to put a lower limit on how often files are
archived in order to bound potential data loss, you should adjust the
archive_timeout parameter rather than the checkpoint parameters.) It is
also possible to force a checkpoint by using the SQL command
CHECKPOINT.
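For illustration, the scheduling limits can be set in postgresql.conf
(the values below simply restate the defaults), and a checkpoint can be
forced from SQL:

    # postgresql.conf -- checkpoint scheduling (defaults shown)
    checkpoint_timeout = 5min
    max_wal_size = 1GB

    -- Force an immediate checkpoint (requires superuser or the
    -- pg_checkpoint predefined role):
    CHECKPOINT;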
Reducing checkpoint_timeout and/or max_wal_size causes checkpoints to
occur more often. This allows faster after-crash recovery, since less
work will need to be redone. However, one must balance this against the
increased cost of flushing dirty data pages more often. If
full_page_writes is set (as is the default), there is another factor to
consider. To ensure data page consistency, the first modification of a
data page after each checkpoint results in logging the entire page
content. In that case, a smaller checkpoint interval increases the
volume of output to the WAL, partially negating the goal of using a
smaller interval, and in any case causing more disk I/O.
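One way to observe this effect, assuming a server that provides the
pg_stat_wal view (PostgreSQL 14 or later): wal_fpi counts full-page
images, so a rising share of wal_fpi after shortening the checkpoint
interval reflects the extra WAL volume described above.

    -- Full-page-image share of WAL generated since stats were last reset.
    SELECT wal_records, wal_fpi, wal_bytes
    FROM pg_stat_wal;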
Checkpoints are fairly expensive, first because they require writing
out all currently dirty buffers, and second because they result in
extra subsequent WAL traffic as discussed above. It is therefore wise
to set the checkpointing parameters high enough so that checkpoints
don't happen too often. As a simple sanity check on your checkpointing
parameters, you can set the checkpoint_warning parameter. If
checkpoints happen closer together than checkpoint_warning seconds, a
message will be output to the server log recommending increasing
max_wal_size. Occasional appearance of such a message is not cause for
alarm, but if it appears often then the checkpoint control parameters
should be increased. Bulk operations such as large COPY transfers might
cause a number of such warnings to appear if you have not set
max_wal_size high enough.
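When the warning fires, the server log contains a message along these
lines (the interval shown is illustrative):

    LOG:  checkpoints are occurring too frequently (17 seconds apart)
    HINT:  Consider increasing the configuration parameter "max_wal_size".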
To avoid flooding the I/O system with a burst of page writes, writing
dirty buffers during a checkpoint is spread over a period of time. That
period is controlled by checkpoint_completion_target, which is given as
a fraction of the checkpoint interval (configured by using
checkpoint_timeout). The I/O rate is adjusted so that the checkpoint
finishes when the given fraction of checkpoint_timeout seconds have
elapsed, or before max_wal_size is exceeded, whichever is sooner. With
the default value of 0.9, PostgreSQL can be expected to complete each
checkpoint a bit before the next scheduled checkpoint (at around 90% of
the last checkpoint's duration). This spreads out the I/O as much as
possible so that the checkpoint I/O load is consistent throughout the
checkpoint interval. The disadvantage of this is that prolonging
checkpoints affects recovery time, because more WAL segments will need
to be kept around for possible use in recovery. A user concerned about
the amount of time required to recover might wish to reduce
checkpoint_timeout so that checkpoints occur more frequently but still
spread the I/O across the checkpoint interval. Alternatively,
checkpoint_completion_target could be reduced, but this would result in
times of more intense I/O (during the checkpoint) and times of less I/O
(after the checkpoint completed but before the next scheduled
checkpoint) and therefore is not recommended. Although
checkpoint_completion_target could be set as high as 1.0, it is
typically recommended to set it to no higher than 0.9 (the default)
since checkpoints include some other activities besides writing dirty
buffers. A setting of 1.0 is quite likely to result in checkpoints not
being completed on time, which would result in performance loss due to
unexpected variation in the number of WAL segments needed.
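As a concrete illustration of the pacing arithmetic with the default
settings:

    # With these (default) values, the checkpointer paces its writes to
    # finish after about 300 s * 0.9 = 270 s, leaving roughly 30 s of
    # slack before the next scheduled checkpoint.
    checkpoint_timeout = 5min
    checkpoint_completion_target = 0.9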
On Linux and POSIX platforms checkpoint_flush_after allows you to force
OS pages written by the checkpoint to be flushed to disk after a
configurable number of bytes. Otherwise, these pages may be kept in the
OS's page cache, inducing a stall when fsync is issued at the end of a
checkpoint. This setting will often help to reduce transaction latency,
but it also can have an adverse effect on performance, particularly for
workloads that are bigger than shared_buffers but smaller than the
OS's page cache.
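A sketch of an explicit setting (256kB is the default on Linux; 0
disables the forced writeback):

    # Ask the kernel to start writeback after this much checkpoint output.
    checkpoint_flush_after = 256kB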
The number of WAL segment files in the pg_wal directory depends on
min_wal_size, max_wal_size and the amount of WAL generated in previous
checkpoint cycles. When old WAL segment files are no longer needed,
they are removed or recycled (that is, renamed to become future
segments in the numbered sequence). If, due to a short-term peak of WAL
output rate, max_wal_size is exceeded, the unneeded segment files will
be removed until the system gets back under this limit. Below that
limit, the system recycles enough WAL files to cover the estimated need
until the next checkpoint, and removes the rest. The estimate is based
on a moving average of the number of WAL files used in previous
checkpoint cycles. The moving average is increased immediately if the
actual usage exceeds the estimate, so it accommodates peak usage rather
than average usage to some extent. min_wal_size puts a minimum on the
number of WAL files recycled for future usage; that much WAL is always
recycled for future use, even if the system is idle and the WAL usage
estimate suggests that little WAL is needed.
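To check the current on-disk WAL footprint, one option (PostgreSQL 10
or later; requires superuser or a role granted access to pg_ls_waldir):

    -- Count and total size of the WAL segment files currently in pg_wal.
    SELECT count(*) AS segments,
           pg_size_pretty(sum(size)) AS total_size
    FROM pg_ls_waldir();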
Independently of max_wal_size, the most recent wal_keep_size megabytes
of WAL files plus one additional WAL file are kept at all times. Also,
if WAL archiving is used, old segments cannot be removed or recycled
until they are archived. If WAL archiving cannot keep up with the pace
that WAL is generated, or if archive_command or archive_library fails
repeatedly, old WAL files will accumulate in pg_wal until the situation
is resolved. A slow or failed standby server that uses a replication
slot will have the same effect (see Section 26.2.6). Similarly, if WAL
summarization is enabled, old segments are kept until they are
summarized.
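If pg_wal is growing unexpectedly, one place to look is the replication
slots; an inactive slot with a stale restart_lsn is a common culprit:

    -- How much WAL each replication slot is pinning.
    SELECT slot_name, active, restart_lsn,
           pg_size_pretty(pg_wal_lsn_diff(pg_current_wal_lsn(), restart_lsn))
             AS retained_wal
    FROM pg_replication_slots;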
In archive recovery or standby mode, the server periodically performs
restartpoints, which are similar to checkpoints in normal operation:
the server forces all its state to disk, updates the pg_control file to
indicate that the already-processed WAL data need not be scanned again,
and then recycles any old WAL segment files in the pg_wal directory.
Restartpoints can't be performed more frequently than checkpoints on
the primary because restartpoints can only be performed at checkpoint
records. A restartpoint can be triggered by a schedule or by an
external request. The restartpoints_timed counter in the
pg_stat_checkpointer view counts the former, while restartpoints_req
counts the latter. A restartpoint is triggered by schedule when a
checkpoint record is reached if at least checkpoint_timeout seconds
have passed since the last performed restartpoint, or when the previous
attempt to perform the restartpoint has failed. In the latter case, the
next restartpoint will be scheduled in 15 seconds. A restartpoint is
triggered by request for reasons similar to those for a checkpoint, but
mostly when WAL size is about to exceed max_wal_size. However, because
of limitations on when a restartpoint can be performed, max_wal_size is
often exceeded during recovery, by up to one checkpoint cycle's worth
of WAL. (max_wal_size is never a hard limit anyway, so you should
always leave plenty of headroom to avoid running out of disk space.)
The restartpoints_done counter in the pg_stat_checkpointer view counts
the restartpoints that have actually been performed.
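On a standby, the three counters can be compared directly
(pg_stat_checkpointer is available in PostgreSQL 17 and later):

    -- Scheduled vs. requested vs. actually completed restartpoints.
    SELECT restartpoints_timed, restartpoints_req, restartpoints_done
    FROM pg_stat_checkpointer;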
In some cases, when the WAL size on the primary increases quickly, for
instance during a massive INSERT, the restartpoints_req counter on the
standby may grow sharply. This occurs because requests to create a new
restartpoint, prompted by increased WAL consumption, cannot be
fulfilled until the checkpoint record following the last restartpoint
has been replayed on the standby. This behavior is normal and does not
lead to an increase in system resource consumption. Among the
restartpoint-related counters, only restartpoints_done indicates that
noticeable system resources have been spent.
There are two commonly used internal WAL functions: XLogInsertRecord
and XLogFlush. XLogInsertRecord is used to place a new record into the
WAL buffers in shared memory. If there is no space for the new record,
XLogInsertRecord will have to write (move to kernel cache) a few filled
WAL buffers. This is undesirable because XLogInsertRecord is used on
every low-level database modification (for example, row insertion) at a
time when an exclusive lock is held on affected data pages, so the
operation needs to be as fast as possible. What is worse, writing WAL
buffers might also force the creation of a new WAL segment, which takes
even more time. Normally, WAL buffers should be written and flushed by
an XLogFlush request, which is made, for the most part, at transaction
commit time to ensure that transaction records are flushed to permanent
storage. On systems with high WAL output, XLogFlush requests might not
occur often enough to prevent XLogInsertRecord from having to do
writes. On such systems one should increase the number of WAL buffers
by modifying the wal_buffers parameter. When full_page_writes is set
and the system is very busy, setting wal_buffers higher will help
smooth response times during the period immediately following each
checkpoint.
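A sketch of such a change (the value is illustrative; by default
wal_buffers is sized automatically at 1/32 of shared_buffers, capped at
the size of one WAL segment, normally 16MB):

    # Larger WAL buffers reduce the chance that XLogInsertRecord must
    # write out filled buffers itself on a busy system.
    wal_buffers = 64MB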
The commit_delay parameter defines for how many microseconds a group
commit leader process will sleep after acquiring a lock within
XLogFlush, while group commit followers queue up behind the leader.
This delay allows other server processes to add their commit records to
the WAL buffers so that all of them will be flushed by the leader's
eventual sync operation. No sleep will occur if fsync is not enabled,
or if fewer than commit_siblings other sessions are currently in active
transactions; this avoids sleeping when it's unlikely that any other
session will commit soon. Note that on some platforms, the resolution
of a sleep request is ten milliseconds, so that any nonzero
commit_delay setting between 1 and 10000 microseconds would have the
same effect. Note also that on some platforms, sleep operations may
take slightly longer than requested by the parameter.
Since the purpose of commit_delay is to allow the cost of each flush
operation to be amortized across concurrently committing transactions
(potentially at the expense of transaction latency), it is necessary to
quantify that cost before the setting can be chosen intelligently. The
higher that cost is, the more effective commit_delay is expected to be
in increasing transaction throughput, up to a point. The pg_test_fsync
program can be used to measure the average time in microseconds that a
single WAL flush operation takes. A value of half of the average time
the program reports it takes to flush after a single 8kB write
operation is often the most effective setting for commit_delay, so this
value is recommended as the starting point to use when optimizing for a
particular workload. While tuning commit_delay is particularly useful
when the WAL is stored on high-latency rotating disks, benefits can be
significant even on storage media with very fast sync times, such as
solid-state drives or RAID arrays with a battery-backed write cache;
but this should definitely be tested against a representative workload.
Higher values of commit_siblings should be used in such cases, whereas
smaller commit_siblings values are often helpful on higher latency
media. Note that it is quite possible that a setting of commit_delay
that is too high can increase transaction latency by so much that total
transaction throughput suffers.
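For example (the path and timing below are illustrative, not
measurements), one could run:

    $ pg_test_fsync -f /path/to/pg_wal/pg_test_fsync.out

If the sync method you intend to use reports, say, roughly 400
microseconds per flush for single 8kB writes, the suggested starting
point would be commit_delay = 200, to be refined against a
representative workload.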
When commit_delay is set to zero (the default), it is still possible
for a form of group commit to occur, but each group will consist only
of sessions that reach the point where they need to flush their commit
records during the window in which the previous flush operation (if
any) is occurring. At higher client counts a “gangway effect” tends to
occur, so that the effects of group commit become significant even when
commit_delay is zero, and thus explicitly setting commit_delay tends to
help less. Setting commit_delay can only help when (1) there are some
concurrently committing transactions, and (2) throughput is limited to
some degree by commit rate; but with high rotational latency this
setting can be effective in increasing transaction throughput with as
few as two clients (that is, a single committing client with one
sibling transaction).
The wal_sync_method parameter determines how PostgreSQL will ask the
kernel to force WAL updates out to disk. All the options should be the
same in terms of reliability, with the exception of fsync_writethrough,
which can sometimes force a flush of the disk cache even when other
options do not do so. However, it's quite platform-specific which one
will be the fastest. You can test the speeds of different options using
the pg_test_fsync program. Note that this parameter is irrelevant if
fsync has been turned off.
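An explicit setting looks like this (illustrative only; fdatasync is
the default on Linux, and any change should be validated with
pg_test_fsync first):

    wal_sync_method = fdatasync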
Enabling the wal_debug configuration parameter (provided that
PostgreSQL has been compiled with support for it) will result in each
XLogInsertRecord and XLogFlush WAL call being logged to the server log.
This option might be replaced by a more general mechanism in the
future.
There are two internal functions to write WAL data to disk: XLogWrite
and issue_xlog_fsync. When track_wal_io_timing is enabled, the total
amounts of time XLogWrite writes and issue_xlog_fsync syncs WAL data to
disk are counted as write_time and fsync_time in pg_stat_io for the
object wal, respectively. XLogWrite is normally called by
XLogInsertRecord (when there is no space for the new record in WAL
buffers), XLogFlush and the WAL writer, to write WAL buffers to disk
and call issue_xlog_fsync. issue_xlog_fsync is normally called by
XLogWrite to sync WAL files to disk. If wal_sync_method is either
open_datasync or open_sync, a write operation in XLogWrite guarantees
to sync written WAL data to disk and issue_xlog_fsync does nothing. If
wal_sync_method is either fdatasync, fsync, or fsync_writethrough, the
write operation moves WAL buffers to kernel cache and issue_xlog_fsync
syncs them to disk. Regardless of the setting of track_wal_io_timing,
the number of times XLogWrite writes and issue_xlog_fsync syncs WAL
data to disk are also counted as writes and fsyncs in pg_stat_io for
the object wal, respectively.
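These counters can be read with an ordinary query (the *_time columns
stay at zero unless track_wal_io_timing is enabled):

    -- WAL write and sync activity per backend type.
    SELECT backend_type, writes, write_time, fsyncs, fsync_time
    FROM pg_stat_io
    WHERE object = 'wal';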
The recovery_prefetch parameter can be used to reduce I/O wait times
during recovery by instructing the kernel to initiate reads of disk
blocks that will soon be needed but are not currently in PostgreSQL's
buffer pool. The maintenance_io_concurrency and wal_decode_buffer_size
settings limit prefetching concurrency and distance, respectively. By
default, it is set to try, which enables the feature on systems that
support issuing read-ahead advice.
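A sketch of the relevant settings (values are illustrative; check your
server's defaults):

    # 'try' enables prefetching where read-ahead advice is supported;
    # the other two bound prefetch distance and concurrency.
    recovery_prefetch = try
    wal_decode_buffer_size = 512kB
    maintenance_io_concurrency = 10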