<?xml version="1.0" encoding="UTF-8" standalone="no"?>
<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.0 Transitional//EN" "http://www.w3.org/TR/xhtml1/DTD/xhtml1-transitional.dtd"><html xmlns="http://www.w3.org/1999/xhtml"><head><meta http-equiv="Content-Type" content="text/html; charset=UTF-8" /><title>28.5. WAL Configuration</title><link rel="stylesheet" type="text/css" href="stylesheet.css" /><link rev="made" href="pgsql-docs@lists.postgresql.org" /><meta name="generator" content="DocBook XSL Stylesheets Vsnapshot" /><link rel="prev" href="wal-async-commit.html" title="28.4. Asynchronous Commit" /><link rel="next" href="wal-internals.html" title="28.6. WAL Internals" /></head><body id="docContent" class="container-fluid col-10"><div class="sect1" id="WAL-CONFIGURATION"><div class="titlepage"><div><div><h2 class="title" style="clear: both">28.5. <acronym class="acronym">WAL</acronym> Configuration <a href="#WAL-CONFIGURATION" class="id_link">#</a></h2></div></div></div><p>
There are several <acronym class="acronym">WAL</acronym>-related configuration parameters that
affect database performance. This section explains their use.
Consult <a class="xref" href="runtime-config.html" title="Chapter 19. Server Configuration">Chapter 19</a> for general information about
setting server configuration parameters.
</p><p>
<em class="firstterm">Checkpoints</em><a id="id-1.6.15.7.3.2" class="indexterm"></a>
are points in the sequence of transactions at which it is guaranteed
that the heap and index data files have been updated with all
information written before that checkpoint. At checkpoint time, all
dirty data pages are flushed to disk and a special checkpoint record is
written to the WAL file. (The change records were previously flushed
to the <acronym class="acronym">WAL</acronym> files.)
In the event of a crash, the crash recovery procedure looks at the latest
checkpoint record to determine the point in the WAL (known as the redo
record) from which it should start the REDO operation. Any changes made to
data files before that point are guaranteed to be already on disk.
Hence, after a checkpoint, WAL segments preceding the one containing
the redo record are no longer needed and can be recycled or removed. (When
<acronym class="acronym">WAL</acronym> archiving is being done, the WAL segments must be
archived before being recycled or removed.)
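</p><p>
For example, the location of the latest checkpoint record and its redo
record can be inspected with the
<code class="function">pg_control_checkpoint</code> function:
</p><pre class="programlisting">
-- Show the latest checkpoint record and the redo record from pg_control.
SELECT checkpoint_lsn, redo_lsn, redo_wal_file
  FROM pg_control_checkpoint();
</pre><p>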
The checkpoint requirement of flushing all dirty data pages to disk
can cause a significant I/O load. For this reason, checkpoint
activity is throttled so that I/O begins at checkpoint start and completes
before the next checkpoint is due to start; this minimizes performance
degradation during checkpoints.
</p><p>
The server's checkpointer process automatically performs
a checkpoint every so often. A checkpoint is begun every <a class="xref" href="runtime-config-wal.html#GUC-CHECKPOINT-TIMEOUT">checkpoint_timeout</a> seconds, or if
<a class="xref" href="runtime-config-wal.html#GUC-MAX-WAL-SIZE">max_wal_size</a> is about to be exceeded,
whichever comes first.
The default settings are 5 minutes and 1 GB, respectively.
If no WAL has been written since the previous checkpoint, new checkpoints
will be skipped even if <code class="varname">checkpoint_timeout</code> has passed.
(If WAL archiving is being used and you want to put a lower limit on how
often files are archived in order to bound potential data loss, you should
adjust the <a class="xref" href="runtime-config-wal.html#GUC-ARCHIVE-TIMEOUT">archive_timeout</a> parameter rather than the
checkpoint parameters.)
It is also possible to force a checkpoint by using the SQL
command <code class="command">CHECKPOINT</code>.
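</p><p>
For example, the current scheduling parameters can be inspected, and a
checkpoint forced, as follows:
</p><pre class="programlisting">
-- Inspect the parameters that control automatic checkpoints.
SHOW checkpoint_timeout;   -- 5min by default
SHOW max_wal_size;         -- 1GB by default

-- Force an immediate checkpoint.
CHECKPOINT;
</pre><p>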
Reducing <code class="varname">checkpoint_timeout</code> and/or
<code class="varname">max_wal_size</code> causes checkpoints to occur
more often. This allows faster after-crash recovery, since less work
will need to be redone. However, one must balance this against the
increased cost of flushing dirty data pages more often. If
<a class="xref" href="runtime-config-wal.html#GUC-FULL-PAGE-WRITES">full_page_writes</a> is set (as is the default), there is
another factor to consider. To ensure data page consistency,
the first modification of a data page after each checkpoint results in
logging the entire page content. In that case,
a smaller checkpoint interval increases the volume of output to the WAL,
partially negating the goal of using a smaller interval,
and in any case causing more disk I/O.
</p><p>
Checkpoints are fairly expensive, first because they require writing
out all currently dirty buffers, and second because they result in
extra subsequent WAL traffic as discussed above. It is therefore
wise to set the checkpointing parameters high enough so that checkpoints
don't happen too often. As a simple sanity check on your checkpointing
parameters, you can set the <a class="xref" href="runtime-config-wal.html#GUC-CHECKPOINT-WARNING">checkpoint_warning</a>
parameter. If checkpoints happen closer together than
<code class="varname">checkpoint_warning</code> seconds,
a message will be output to the server log recommending increasing
<code class="varname">max_wal_size</code>. Occasional appearance of such
a message is not cause for alarm, but if it appears often then the
checkpoint control parameters should be increased. Bulk operations such
as large <code class="command">COPY</code> transfers might cause a number of such warnings
to appear if you have not set <code class="varname">max_wal_size</code> high enough.
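</p><p>
If such warnings appear often, <code class="varname">max_wal_size</code> can be
raised; the value below is purely illustrative, not a recommendation:
</p><pre class="programlisting">
-- Allow more WAL to accumulate between checkpoints (illustrative value).
ALTER SYSTEM SET max_wal_size = '4GB';
SELECT pg_reload_conf();
</pre><p>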
To avoid flooding the I/O system with a burst of page writes,
writing dirty buffers during a checkpoint is spread over a period of time.
That period is controlled by
<a class="xref" href="runtime-config-wal.html#GUC-CHECKPOINT-COMPLETION-TARGET">checkpoint_completion_target</a>, which is
given as a fraction of the checkpoint interval (configured by using
<code class="varname">checkpoint_timeout</code>).
The I/O rate is adjusted so that the checkpoint finishes when the
given fraction of
<code class="varname">checkpoint_timeout</code> seconds have elapsed, or before
<code class="varname">max_wal_size</code> is exceeded, whichever is sooner.
With the default value of 0.9,
<span class="productname">PostgreSQL</span> can be expected to complete each checkpoint
a bit before the next scheduled checkpoint (at around 90% of the last checkpoint's
duration). This spreads out the I/O as much as possible so that the checkpoint
I/O load is consistent throughout the checkpoint interval. The disadvantage of
this is that prolonging checkpoints affects recovery time, because more WAL
segments will need to be kept around for possible use in recovery. A user
concerned about the amount of time required to recover might wish to reduce
<code class="varname">checkpoint_timeout</code> so that checkpoints occur more frequently
but still spread the I/O across the checkpoint interval. Alternatively,
<code class="varname">checkpoint_completion_target</code> could be reduced, but this would
result in times of more intense I/O (during the checkpoint) and times of less I/O
(after the checkpoint completed but before the next scheduled checkpoint) and
therefore is not recommended.
Although <code class="varname">checkpoint_completion_target</code> could be set as high as
1.0, it is typically recommended to set it to no higher than 0.9 (the default)
since checkpoints include some other activities besides writing dirty buffers.
A setting of 1.0 is quite likely to result in checkpoints not being
completed on time, which would result in performance loss due to
unexpected variation in the number of WAL segments needed.
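</p><p>
As a worked example, with the default <code class="varname">checkpoint_timeout</code>
of 5 minutes (300 seconds) and <code class="varname">checkpoint_completion_target</code>
of 0.9, checkpoint writes are paced to finish after roughly 270 seconds,
leaving about 30 seconds of headroom before the next checkpoint is due:
</p><pre class="programlisting">
-- checkpoint_timeout           = 300 seconds
-- checkpoint_completion_target = 0.9
-- paced write window           = 300 * 0.9 = 270 seconds
SELECT 300 * 0.9 AS write_window_seconds;
</pre><p>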
On Linux and POSIX platforms, <a class="xref" href="runtime-config-wal.html#GUC-CHECKPOINT-FLUSH-AFTER">checkpoint_flush_after</a>
allows you to force OS pages written by the checkpoint to be
flushed to disk after a configurable number of bytes. Otherwise, these
pages may be kept in the OS's page cache, inducing a stall when
<code class="literal">fsync</code> is issued at the end of a checkpoint. This setting will
often help to reduce transaction latency, but it also can have an adverse
effect on performance, particularly for workloads that are bigger than
<a class="xref" href="runtime-config-resource.html#GUC-SHARED-BUFFERS">shared_buffers</a>, but smaller than the OS's page cache.
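</p><p>
For example, to request writeback to disk after every 512kB of checkpoint
writes (an illustrative value; setting it to 0 disables forced writeback):
</p><pre class="programlisting">
ALTER SYSTEM SET checkpoint_flush_after = '512kB';  -- illustrative value
SELECT pg_reload_conf();
</pre><p>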
The number of WAL segment files in the <code class="filename">pg_wal</code> directory depends on
<code class="varname">min_wal_size</code>, <code class="varname">max_wal_size</code> and
the amount of WAL generated in previous checkpoint cycles. When old WAL
segment files are no longer needed, they are removed or recycled (that is,
renamed to become future segments in the numbered sequence). If, due to a
short-term peak of WAL output rate, <code class="varname">max_wal_size</code> is
exceeded, the unneeded segment files will be removed until the system
gets back under this limit. Below that limit, the system recycles enough
WAL files to cover the estimated need until the next checkpoint, and
removes the rest. The estimate is based on a moving average of the number
of WAL files used in previous checkpoint cycles. The moving average
is increased immediately if the actual usage exceeds the estimate, so it
accommodates peak usage rather than average usage to some extent.
<code class="varname">min_wal_size</code> puts a minimum on the amount of WAL files
recycled for future usage; that much WAL is always recycled for future use,
even if the system is idle and the WAL usage estimate suggests that little
WAL is needed.
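</p><p>
The current number and total size of the segment files in
<code class="filename">pg_wal</code> can be checked with the
<code class="function">pg_ls_waldir</code> function:
</p><pre class="programlisting">
-- Count the WAL segment files and report the space they occupy.
SELECT count(*) AS segments,
       pg_size_pretty(sum(size)) AS total_size
  FROM pg_ls_waldir();
</pre><p>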
Independently of <code class="varname">max_wal_size</code>,
the most recent <a class="xref" href="runtime-config-replication.html#GUC-WAL-KEEP-SIZE">wal_keep_size</a> megabytes of
WAL files plus one additional WAL file are
kept at all times. Also, if WAL archiving is used, old segments cannot be
removed or recycled until they are archived. If WAL archiving cannot keep up
with the pace that WAL is generated, or if <code class="varname">archive_command</code>
or <code class="varname">archive_library</code>
fails repeatedly, old WAL files will accumulate in <code class="filename">pg_wal</code>
until the situation is resolved. A slow or failed standby server that
uses a replication slot will have the same effect (see
<a class="xref" href="warm-standby.html#STREAMING-REPLICATION-SLOTS" title="26.2.6. Replication Slots">Section 26.2.6</a>).
Similarly, if <a class="link" href="runtime-config-wal.html#RUNTIME-CONFIG-WAL-SUMMARIZATION" title="19.5.7. WAL Summarization">
WAL summarization</a> is enabled, old segments are kept
until they are summarized.
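</p><p>
For example, to check whether a replication slot is preventing WAL removal:
</p><pre class="programlisting">
-- An inactive slot with an old restart_lsn retains WAL in pg_wal;
-- wal_status reports 'lost' once required segments have been removed.
SELECT slot_name, active, restart_lsn, wal_status
  FROM pg_replication_slots;
</pre><p>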
In archive recovery or standby mode, the server periodically performs
<em class="firstterm">restartpoints</em>,<a id="id-1.6.15.7.12.2" class="indexterm"></a>
which are similar to checkpoints in normal operation: the server forces
all its state to disk, updates the <code class="filename">pg_control</code> file to
indicate that the already-processed WAL data need not be scanned again,
and then recycles any old WAL segment files in the <code class="filename">pg_wal</code>
directory.
Restartpoints can't be performed more frequently than checkpoints on the
primary because restartpoints can only be performed at checkpoint records.
A restartpoint can be demanded by a schedule or by an external request.
The <code class="structfield">restartpoints_timed</code> counter in the
<a class="link" href="monitoring-stats.html#MONITORING-PG-STAT-CHECKPOINTER-VIEW" title="27.2.15. pg_stat_checkpointer"><code class="structname">pg_stat_checkpointer</code></a>
view counts the first ones while the <code class="structfield">restartpoints_req</code>
counter counts the second ones.
A restartpoint is triggered by schedule when a checkpoint record is reached
if at least <a class="xref" href="runtime-config-wal.html#GUC-CHECKPOINT-TIMEOUT">checkpoint_timeout</a> seconds have passed since
the last performed restartpoint or when the previous attempt to perform
the restartpoint has failed. In the latter case, the next restartpoint
will be scheduled in 15 seconds.
A restartpoint is triggered by request for reasons similar to those for a
checkpoint, but mostly when the WAL size is about to exceed
<a class="xref" href="runtime-config-wal.html#GUC-MAX-WAL-SIZE">max_wal_size</a>.
However, because of limitations on when a restartpoint can be performed,
<code class="varname">max_wal_size</code> is often exceeded during recovery,
by up to one checkpoint cycle's worth of WAL.
(<code class="varname">max_wal_size</code> is never a hard limit anyway, so you should
always leave plenty of headroom to avoid running out of disk space.)
The <code class="structfield">restartpoints_done</code> counter in the
<a class="link" href="monitoring-stats.html#MONITORING-PG-STAT-CHECKPOINTER-VIEW" title="27.2.15. pg_stat_checkpointer"><code class="structname">pg_stat_checkpointer</code></a>
view counts the restartpoints that have really been performed.
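</p><p>
For example, on a standby the scheduled, requested, and completed
restartpoints can be compared as follows:
</p><pre class="programlisting">
SELECT restartpoints_timed, restartpoints_req, restartpoints_done
  FROM pg_stat_checkpointer;
</pre><p>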
In some cases, when the WAL size on the primary increases quickly,
for instance during a massive <code class="command">INSERT</code>,
the <code class="structfield">restartpoints_req</code> counter on the standby
may show a spike.
This occurs because requests to create a new restartpoint due to increased
WAL consumption cannot be fulfilled until the checkpoint record needed for
the next restartpoint has been replayed on the standby.
This behavior is normal and does not lead to an increase in system resource
consumption.
Only the <code class="structfield">restartpoints_done</code>
counter among the restartpoint-related ones indicates that noticeable system
resources have been spent.
</p><p>
There are two commonly used internal <acronym class="acronym">WAL</acronym> functions:
<code class="function">XLogInsertRecord</code> and <code class="function">XLogFlush</code>.
<code class="function">XLogInsertRecord</code> is used to place a new record into
the <acronym class="acronym">WAL</acronym> buffers in shared memory. If there is no
space for the new record, <code class="function">XLogInsertRecord</code> will have
to write (move to kernel cache) a few filled <acronym class="acronym">WAL</acronym>
buffers. This is undesirable because <code class="function">XLogInsertRecord</code>
is used on every low-level database modification (for example, row
insertion) at a time when an exclusive lock is held on affected
data pages, so the operation needs to be as fast as possible. What
is worse, writing <acronym class="acronym">WAL</acronym> buffers might also force the
creation of a new WAL segment, which takes even more
time. Normally, <acronym class="acronym">WAL</acronym> buffers should be written
and flushed by an <code class="function">XLogFlush</code> request, which is
made, for the most part, at transaction commit time to ensure that
transaction records are flushed to permanent storage. On systems
with high WAL output, <code class="function">XLogFlush</code> requests might
not occur often enough to prevent <code class="function">XLogInsertRecord</code>
from having to do writes. On such systems
one should increase the number of <acronym class="acronym">WAL</acronym> buffers by
modifying the <a class="xref" href="runtime-config-wal.html#GUC-WAL-BUFFERS">wal_buffers</a> parameter. When
<a class="xref" href="runtime-config-wal.html#GUC-FULL-PAGE-WRITES">full_page_writes</a> is set and the system is very busy,
setting <code class="varname">wal_buffers</code> higher will help smooth response times
during the period immediately following each checkpoint.
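</p><p>
For example, to raise the number of WAL buffers (the value is illustrative;
the change takes effect only after a server restart):
</p><pre class="programlisting">
-- By default (-1), wal_buffers is sized to 1/32 of shared_buffers,
-- up to the size of one WAL segment (normally 16MB).
ALTER SYSTEM SET wal_buffers = '64MB';  -- illustrative value; restart required
</pre><p>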
The <a class="xref" href="runtime-config-wal.html#GUC-COMMIT-DELAY">commit_delay</a> parameter defines for how many
microseconds a group commit leader process will sleep after acquiring a
lock within <code class="function">XLogFlush</code>, while group commit
followers queue up behind the leader. This delay allows other server
processes to add their commit records to the WAL buffers so that all of
them will be flushed by the leader's eventual sync operation. No sleep
will occur if <a class="xref" href="runtime-config-wal.html#GUC-FSYNC">fsync</a> is not enabled, or if fewer
than <a class="xref" href="runtime-config-wal.html#GUC-COMMIT-SIBLINGS">commit_siblings</a> other sessions are currently
in active transactions; this avoids sleeping when it's unlikely that
any other session will commit soon. Note that on some platforms, the
resolution of a sleep request is ten milliseconds, so that any nonzero
<code class="varname">commit_delay</code> setting between 1 and 10000
microseconds would have the same effect. Note also that on some
platforms, sleep operations may take slightly longer than requested by
the parameter.
</p><p>
Since the purpose of <code class="varname">commit_delay</code> is to allow the
cost of each flush operation to be amortized across concurrently
committing transactions (potentially at the expense of transaction
latency), it is necessary to quantify that cost before the setting can
be chosen intelligently. The higher that cost is, the more effective
<code class="varname">commit_delay</code> is expected to be in increasing
transaction throughput, up to a point. The <a class="xref" href="pgtestfsync.html" title="pg_test_fsync"><span class="refentrytitle"><span class="application">pg_test_fsync</span></span></a> program can be used to measure the average time
in microseconds that a single WAL flush operation takes. A value of
half of the average time the program reports it takes to flush after a
single 8kB write operation is often the most effective setting for
<code class="varname">commit_delay</code>, so this value is recommended as the
starting point to use when optimizing for a particular workload. While
tuning <code class="varname">commit_delay</code> is particularly useful when the
WAL is stored on high-latency rotating disks, benefits can be
significant even on storage media with very fast sync times, such as
solid-state drives or RAID arrays with a battery-backed write cache;
but this should definitely be tested against a representative workload.
Higher values of <code class="varname">commit_siblings</code> should be used in
such cases, whereas smaller <code class="varname">commit_siblings</code> values
are often helpful on higher latency media. Note that it is quite
possible that a setting of <code class="varname">commit_delay</code> that is too
high can increase transaction latency by so much that total transaction
throughput suffers.
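</p><p>
As a hypothetical worked example: if <span class="application">pg_test_fsync</span>
reported an average of 600 microseconds per flush after a single 8kB write,
the suggested starting point would be half of that:
</p><pre class="programlisting">
-- Hypothetical measurement: ~600 microseconds per single 8kB write flush,
-- so start with half of that value and adjust from there.
ALTER SYSTEM SET commit_delay = 300;   -- microseconds
ALTER SYSTEM SET commit_siblings = 5;  -- the default
SELECT pg_reload_conf();
</pre><p>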
When <code class="varname">commit_delay</code> is set to zero (the default), it
is still possible for a form of group commit to occur, but each group
will consist only of sessions that reach the point where they need to
flush their commit records during the window in which the previous
flush operation (if any) is occurring. At higher client counts a
<span class="quote">“<span class="quote">gangway effect</span>”</span> tends to occur, so that the effects of group
commit become significant even when <code class="varname">commit_delay</code> is
zero, and thus explicitly setting <code class="varname">commit_delay</code> tends
to help less. Setting <code class="varname">commit_delay</code> can only help
when (1) there are some concurrently committing transactions, and (2)
throughput is limited to some degree by commit rate; but with high
rotational latency this setting can be effective in increasing
transaction throughput with as few as two clients (that is, a single
committing client with one sibling transaction).
</p><p>
The <a class="xref" href="runtime-config-wal.html#GUC-WAL-SYNC-METHOD">wal_sync_method</a> parameter determines how
<span class="productname">PostgreSQL</span> will ask the kernel to force
<acronym class="acronym">WAL</acronym> updates out to disk.
All the options should be the same in terms of reliability, with
the exception of <code class="literal">fsync_writethrough</code>, which can sometimes
force a flush of the disk cache even when other options do not do so.
However, it's quite platform-specific which one will be the fastest.
You can test the speeds of different options using the <a class="xref" href="pgtestfsync.html" title="pg_test_fsync"><span class="refentrytitle"><span class="application">pg_test_fsync</span></span></a> program.
Note that this parameter is irrelevant if <code class="varname">fsync</code>
has been turned off.
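</p><p>
For example, the current method can be inspected, and changed after
validating a faster option with <span class="application">pg_test_fsync</span>
(the value shown is illustrative and platform-dependent):
</p><pre class="programlisting">
SHOW wal_sync_method;
ALTER SYSTEM SET wal_sync_method = 'fdatasync';  -- illustrative value
SELECT pg_reload_conf();
</pre><p>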
Enabling the <a class="xref" href="runtime-config-developer.html#GUC-WAL-DEBUG">wal_debug</a> configuration parameter
(provided that <span class="productname">PostgreSQL</span> has been
compiled with support for it) will result in each
<code class="function">XLogInsertRecord</code> and <code class="function">XLogFlush</code>
<acronym class="acronym">WAL</acronym> call being logged to the server log. This
option might be replaced by a more general mechanism in the future.
</p><p>
There are two internal functions to write WAL data to disk:
<code class="function">XLogWrite</code> and <code class="function">issue_xlog_fsync</code>.
When <a class="xref" href="runtime-config-statistics.html#GUC-TRACK-WAL-IO-TIMING">track_wal_io_timing</a> is enabled, the total
amounts of time <code class="function">XLogWrite</code> writes and
<code class="function">issue_xlog_fsync</code> syncs WAL data to disk are counted as
<code class="varname">write_time</code> and <code class="varname">fsync_time</code> in
<a class="xref" href="monitoring-stats.html#PG-STAT-IO-VIEW" title="Table 27.23. pg_stat_io View">pg_stat_io</a> for the <code class="varname">object</code>
<code class="literal">wal</code>, respectively.
<code class="function">XLogWrite</code> is normally called by
<code class="function">XLogInsertRecord</code> (when there is no space for the new
record in WAL buffers), <code class="function">XLogFlush</code> and the WAL writer,
to write WAL buffers to disk and call <code class="function">issue_xlog_fsync</code>.
<code class="function">issue_xlog_fsync</code> is normally called by
<code class="function">XLogWrite</code> to sync WAL files to disk.
If <code class="varname">wal_sync_method</code> is either
<code class="literal">open_datasync</code> or <code class="literal">open_sync</code>,
a write operation in <code class="function">XLogWrite</code> is guaranteed to sync written
WAL data to disk and <code class="function">issue_xlog_fsync</code> does nothing.
If <code class="varname">wal_sync_method</code> is either <code class="literal">fdatasync</code>,
<code class="literal">fsync</code>, or <code class="literal">fsync_writethrough</code>,
the write operation moves WAL buffers to kernel cache and
<code class="function">issue_xlog_fsync</code> syncs them to disk. Regardless
of the setting of <code class="varname">track_wal_io_timing</code>, the number
of times <code class="function">XLogWrite</code> writes and
<code class="function">issue_xlog_fsync</code> syncs WAL data to disk are also
counted as <code class="varname">writes</code> and <code class="varname">fsyncs</code>
in <code class="structname">pg_stat_io</code> for the <code class="varname">object</code>
<code class="literal">wal</code>, respectively.
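</p><p>
For example, the WAL write and sync activity described above can be viewed
as follows (the timing columns remain zero unless
<code class="varname">track_wal_io_timing</code> is enabled):
</p><pre class="programlisting">
SELECT backend_type, writes, write_time, fsyncs, fsync_time
  FROM pg_stat_io
 WHERE object = 'wal';
</pre><p>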
The <a class="xref" href="runtime-config-wal.html#GUC-RECOVERY-PREFETCH">recovery_prefetch</a> parameter can be used to reduce
I/O wait times during recovery by instructing the kernel to initiate reads
of disk blocks that will soon be needed but are not currently in
<span class="productname">PostgreSQL</span>'s buffer pool.
The <a class="xref" href="runtime-config-resource.html#GUC-MAINTENANCE-IO-CONCURRENCY">maintenance_io_concurrency</a> and
<a class="xref" href="runtime-config-wal.html#GUC-WAL-DECODE-BUFFER-SIZE">wal_decode_buffer_size</a> settings limit prefetching
concurrency and distance, respectively. By default,
<code class="varname">recovery_prefetch</code> is set to
<code class="literal">try</code>, which enables the feature on systems that support
issuing read-ahead advice.
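</p><p>
The effect of prefetching can be observed during recovery in the
<code class="structname">pg_stat_recovery_prefetch</code> view, for example:
</p><pre class="programlisting">
-- Shows blocks prefetched, blocks skipped (and why), and current
-- prefetch distance and I/O queue depth.
SELECT prefetch, skip_init, skip_new, skip_fpw, skip_rep,
       wal_distance, block_distance, io_depth
  FROM pg_stat_recovery_prefetch;
</pre>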
</div></body></html>