19.4. Resource Consumption #
19.4.1. Memory #

shared_buffers (integer) #
Sets the amount of memory the database server uses for shared
memory buffers. The default is typically 128 megabytes (128MB),
but might be less if your kernel settings will not support it
(as determined during initdb). This setting must be at least 128
kilobytes. However, settings significantly higher than the
minimum are usually needed for good performance. If this value
is specified without units, it is taken as blocks, that is
BLCKSZ bytes, typically 8kB. (Non-default values of BLCKSZ
change the minimum value.) This parameter can only be set at
server start.
If you have a dedicated database server with 1GB or more of RAM,
a reasonable starting value for shared_buffers is 25% of the
memory in your system. There are some workloads where even
larger settings for shared_buffers are effective, but because
PostgreSQL also relies on the operating system cache, it is
unlikely that an allocation of more than 40% of RAM to
shared_buffers will work better than a smaller amount. Larger
settings for shared_buffers usually require a corresponding
increase in max_wal_size, in order to spread out the process of
writing large quantities of new or changed data over a longer
period of time.

On systems with less than 1GB of RAM, a smaller percentage of
RAM is appropriate, so as to leave adequate space for the
operating system.
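As a concrete illustration of the guidance above, the following sketch applies the 25% starting point on a hypothetical dedicated server with 16GB of RAM. The values are illustrative assumptions, not tuned recommendations for any particular workload:

```sql
-- Hypothetical 16GB dedicated server: start shared_buffers at
-- 25% of RAM and raise max_wal_size so checkpoint writes are
-- spread over a longer period. shared_buffers requires a
-- server restart to take effect.
ALTER SYSTEM SET shared_buffers = '4GB';
ALTER SYSTEM SET max_wal_size = '4GB';
```

After a restart, SHOW shared_buffers confirms the active value.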
huge_pages (enum) #
Controls whether huge pages are requested for the main shared
memory area. Valid values are try (the default), on, and off.
With huge_pages set to try, the server will try to request huge
pages, but fall back to the default if that fails. With on,
failure to request huge pages will prevent the server from
starting up. With off, huge pages will not be requested. The
actual state of huge pages is indicated by the server variable
huge_pages_status.

At present, this setting is supported only on Linux and Windows.
The setting is ignored on other systems when set to try. On
Linux, it is only supported when shared_memory_type is set to
mmap (the default).

The use of huge pages results in smaller page tables and less
CPU time spent on memory management, increasing performance. For
more details about using huge pages on Linux, see Section 18.4.5.
Huge pages are known as large pages on Windows. To use them, you
need to assign the user right “Lock pages in memory” to the
Windows user account that runs PostgreSQL. You can use the Windows
Group Policy tool (gpedit.msc) to assign the user right “Lock
pages in memory”. To start the database server on the command
prompt as a standalone process, not as a Windows service, the
command prompt must be run as an administrator or User Access
Control (UAC) must be disabled. When the UAC is enabled, the
normal command prompt revokes the user right “Lock pages in
memory” when started.
Note that this setting only affects the main shared memory area.
Operating systems such as Linux, FreeBSD, and Illumos can also
use huge pages (also known as “super” pages or “large” pages)
automatically for normal memory allocation, without an explicit
request from PostgreSQL. On Linux, this is called “transparent
huge pages” (THP). That feature has been known to cause
performance degradation with PostgreSQL for some users on some
Linux versions, so its use is currently discouraged (unlike
explicit use of huge_pages).
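Assuming huge pages have already been reserved in the kernel (for example via vm.nr_hugepages on Linux), the strict setting and a check of the resulting state can be sketched as:

```sql
-- Require huge pages: the server will refuse to start if the
-- request fails (restart required for the change to apply).
ALTER SYSTEM SET huge_pages = 'on';

-- After restarting, report whether huge pages are in use.
SHOW huge_pages_status;
```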
huge_page_size (integer) #
Specifies the size of huge pages, when they are enabled with
huge_pages. The default is zero (0). When set to 0, the default
huge page size on the system will be used. This parameter can
only be set at server start.

Some commonly available page sizes on modern 64 bit server
architectures include: 2MB and 1GB (Intel and AMD), 16MB and
16GB (IBM POWER), and 64kB, 2MB, 32MB and 1GB (ARM). For more
information about usage and support, see Section 18.4.5.

Non-default settings are currently supported only on Linux.
temp_buffers (integer) #
Sets the maximum amount of memory used for temporary buffers
within each database session. These are session-local buffers
used only for access to temporary tables. If this value is
specified without units, it is taken as blocks, that is BLCKSZ
bytes, typically 8kB. The default is eight megabytes (8MB). (If
BLCKSZ is not 8kB, the default value scales proportionally to
it.) This setting can be changed within individual sessions, but
only before the first use of temporary tables within the
session; subsequent attempts to change the value will have no
effect on that session.
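Because the setting is locked in at first use, a session that needs a larger budget must raise it before touching any temporary table. A minimal sketch (table name is hypothetical):

```sql
-- Must come before the session's first temporary-table access;
-- afterwards, changing temp_buffers has no effect in this session.
SET temp_buffers = '64MB';
CREATE TEMPORARY TABLE scratch (id int, payload text);
```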
A session will allocate temporary buffers as needed up to the
limit given by temp_buffers. The cost of setting a large value
in sessions that do not actually need many temporary buffers is
only a buffer descriptor, or about 64 bytes, per increment in
temp_buffers. However, if a buffer is actually used, an additional
8192 bytes will be consumed for it (or in general, BLCKSZ bytes).
max_prepared_transactions (integer) #
Sets the maximum number of transactions that can be in the
“prepared” state simultaneously (see PREPARE TRANSACTION).
Setting this parameter to zero (which is the default) disables
the prepared-transaction feature. This parameter can only be set
at server start.

If you are not planning to use prepared transactions, this
parameter should be set to zero to prevent accidental creation
of prepared transactions. If you are using prepared
transactions, you will probably want max_prepared_transactions
to be at least as large as max_connections, so that every
session can have a prepared transaction pending.

When running a standby server, you must set this parameter to
the same or higher value than on the primary server. Otherwise,
queries will not be allowed in the standby server.
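A sketch of enabling the feature, sized to match a hypothetical max_connections of 100, followed by a minimal two-phase commit (identifier is made up for illustration):

```sql
-- Requires a server restart; value assumes max_connections = 100.
ALTER SYSTEM SET max_prepared_transactions = 100;

-- After restart, a transaction can be prepared and later committed:
BEGIN;
-- ... do work ...
PREPARE TRANSACTION 'tx_example';
COMMIT PREPARED 'tx_example';
```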
work_mem (integer) #
Sets the base maximum amount of memory to be used by a query
operation (such as a sort or hash table) before writing to
temporary disk files. If this value is specified without units,
it is taken as kilobytes. The default value is four megabytes
(4MB). Note that a complex query might perform several sort and
hash operations at the same time, with each operation generally
being allowed to use as much memory as this value specifies
before it starts to write data into temporary files. Also,
several running sessions could be doing such operations
concurrently. Therefore, the total memory used could be many
times the value of work_mem; it is necessary to keep this fact
in mind when choosing the value. Sort operations are used for
ORDER BY, DISTINCT, and merge joins. Hash tables are used in
hash joins, hash-based aggregation, memoize nodes and hash-based
processing of IN subqueries.
Hash-based operations are generally more sensitive to memory
availability than equivalent sort-based operations. The memory
limit for a hash table is computed by multiplying work_mem by
hash_mem_multiplier. This makes it possible for hash-based
operations to use an amount of memory that exceeds the usual
work_mem base amount.
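Since the limit applies per operation rather than per query, a query plan with four sorts/hashes running in ten concurrent sessions could use up to 4MB x 4 x 10 = 160MB at the default. Raising work_mem only for one reporting session avoids inflating that bound globally; table and column names below are hypothetical:

```sql
-- Session-local increase for a single large sort/aggregate,
-- then restore the server default.
SET work_mem = '256MB';
SELECT customer_id, sum(amount)
FROM orders
GROUP BY customer_id
ORDER BY 2 DESC;
RESET work_mem;
```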
hash_mem_multiplier (floating point) #
Used to compute the maximum amount of memory that hash-based
operations can use. The final limit is determined by multiplying
work_mem by hash_mem_multiplier. The default value is 2.0, which
makes hash-based operations use twice the usual work_mem base
amount.

Consider increasing hash_mem_multiplier in environments where
spilling by query operations is a regular occurrence, especially
when simply increasing work_mem results in memory pressure
(memory pressure typically takes the form of intermittent
out-of-memory errors). The default setting of 2.0 is often
effective with mixed workloads. Higher settings in the range of
2.0 to 8.0 or more may be effective in environments where
work_mem has already been increased to 40MB or more.
maintenance_work_mem (integer) #
Specifies the maximum amount of memory to be used by maintenance
operations, such as VACUUM, CREATE INDEX, and ALTER TABLE ADD
FOREIGN KEY. If this value is specified without units, it is
taken as kilobytes. It defaults to 64 megabytes (64MB). Since
only one of these operations can be executed at a time by a
database session, and an installation normally doesn't have many
of them running concurrently, it's safe to set this value
significantly larger than work_mem. Larger settings might
improve performance for vacuuming and for restoring database
dumps.

Note that when autovacuum runs, up to autovacuum_max_workers
times this memory may be allocated, so be careful not to set the
default value too high. It may be useful to control for this by
separately setting autovacuum_work_mem.
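One way to apply that advice is to keep a large budget for manual maintenance while capping each autovacuum worker separately. The values are illustrative, not recommendations:

```sql
-- Generous budget for manual VACUUM / CREATE INDEX, with a
-- smaller per-worker cap for autovacuum. Both settings take
-- effect on configuration reload.
ALTER SYSTEM SET maintenance_work_mem = '1GB';
ALTER SYSTEM SET autovacuum_work_mem = '256MB';
SELECT pg_reload_conf();
```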
autovacuum_work_mem (integer) #
Specifies the maximum amount of memory to be used by each
autovacuum worker process. If this value is specified without
units, it is taken as kilobytes. It defaults to -1, indicating
that the value of maintenance_work_mem should be used instead.
The setting has no effect on the behavior of VACUUM when run in
other contexts. This parameter can only be set in the
postgresql.conf file or on the server command line.
vacuum_buffer_usage_limit (integer) #
Specifies the size of the Buffer Access Strategy used by the
VACUUM and ANALYZE commands. A setting of 0 will allow the
operation to use any number of shared_buffers. Otherwise valid
sizes range from 128 kB to 16 GB. If the specified size would
exceed 1/8 the size of shared_buffers, the size is silently
capped to that value. The default value is 2MB. If this value is
specified without units, it is taken as kilobytes. This
parameter can be set at any time. It can be overridden for
VACUUM and ANALYZE when passing the BUFFER_USAGE_LIMIT option.
Higher settings can allow VACUUM and ANALYZE to run more
quickly, but having too large a setting may cause too many other
useful pages to be evicted from shared buffers.
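The per-command override mentioned above can be sketched as follows (the table name is hypothetical):

```sql
-- Larger ring for a one-off aggressive vacuum of a big table;
-- 0 removes the strategy ring entirely for this command.
VACUUM (BUFFER_USAGE_LIMIT '256MB') some_table;
ANALYZE (BUFFER_USAGE_LIMIT 0) some_table;
```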
logical_decoding_work_mem (integer) #
Specifies the maximum amount of memory to be used by logical
decoding, before some of the decoded changes are written to
local disk. This limits the amount of memory used by logical
streaming replication connections. It defaults to 64 megabytes
(64MB). Since each replication connection only uses a single
buffer of this size, and an installation normally doesn't have
many such connections concurrently (as limited by
max_wal_senders), it's safe to set this value significantly
higher than work_mem, reducing the amount of decoded changes
written to disk.
commit_timestamp_buffers (integer) #
Specifies the amount of memory to use to cache the contents of
pg_commit_ts (see Table 66.1). If this value is specified
without units, it is taken as blocks, that is BLCKSZ bytes,
typically 8kB. The default value is 0, which requests
shared_buffers/512 up to 1024 blocks, but not fewer than 16
blocks. This parameter can only be set at server start.

multixact_member_buffers (integer) #
Specifies the amount of shared memory to use to cache the
contents of pg_multixact/members (see Table 66.1). If this value
is specified without units, it is taken as blocks, that is
BLCKSZ bytes, typically 8kB. The default value is 32. This
parameter can only be set at server start.

multixact_offset_buffers (integer) #
Specifies the amount of shared memory to use to cache the
contents of pg_multixact/offsets (see Table 66.1). If this value
is specified without units, it is taken as blocks, that is
BLCKSZ bytes, typically 8kB. The default value is 16. This
parameter can only be set at server start.

notify_buffers (integer) #
Specifies the amount of shared memory to use to cache the
contents of pg_notify (see Table 66.1). If this value is
specified without units, it is taken as blocks, that is BLCKSZ
bytes, typically 8kB. The default value is 16. This parameter
can only be set at server start.

serializable_buffers (integer) #
Specifies the amount of shared memory to use to cache the
contents of pg_serial (see Table 66.1). If this value is
specified without units, it is taken as blocks, that is BLCKSZ
bytes, typically 8kB. The default value is 32. This parameter
can only be set at server start.

subtransaction_buffers (integer) #
Specifies the amount of shared memory to use to cache the
contents of pg_subtrans (see Table 66.1). If this value is
specified without units, it is taken as blocks, that is BLCKSZ
bytes, typically 8kB. The default value is 0, which requests
shared_buffers/512 up to 1024 blocks, but not fewer than 16
blocks. This parameter can only be set at server start.

transaction_buffers (integer) #
Specifies the amount of shared memory to use to cache the
contents of pg_xact (see Table 66.1). If this value is specified
without units, it is taken as blocks, that is BLCKSZ bytes,
typically 8kB. The default value is 0, which requests
shared_buffers/512 up to 1024 blocks, but not fewer than 16
blocks. This parameter can only be set at server start.
max_stack_depth (integer) #
Specifies the maximum safe depth of the server's execution
stack. The ideal setting for this parameter is the actual stack
size limit enforced by the kernel (as set by ulimit -s or local
equivalent), less a safety margin of a megabyte or so. The
safety margin is needed because the stack depth is not checked
in every routine in the server, but only in key
potentially-recursive routines. If this value is specified
without units, it is taken as kilobytes. The default setting is
two megabytes (2MB), which is conservatively small and unlikely
to risk crashes. However, it might be too small to allow
execution of complex functions. Only superusers and users with
the appropriate SET privilege can change this setting.

Setting max_stack_depth higher than the actual kernel limit will
mean that a runaway recursive function can crash an individual
backend process. On platforms where PostgreSQL can determine the
kernel limit, the server will not allow this variable to be set
to an unsafe value. However, not all platforms provide the
information, so caution is recommended in selecting a value.
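Assuming a kernel stack limit of 8MB (the common ulimit -s default on Linux), the megabyte safety margin suggests a ceiling of roughly 7MB. A sketch, runnable by a superuser or a user with the appropriate SET privilege:

```sql
-- Assumes ulimit -s reports 8192 (kB); leave ~1MB of margin.
SET max_stack_depth = '7MB';
SHOW max_stack_depth;
```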
shared_memory_type (enum) #
Specifies the shared memory implementation that the server
should use for the main shared memory region that holds
PostgreSQL's shared buffers and other shared data. Possible
values are mmap (for anonymous shared memory allocated using
mmap), sysv (for System V shared memory allocated via shmget)
and windows (for Windows shared memory). Not all values are
supported on all platforms; the first supported option is the
default for that platform. The use of the sysv option, which is
not the default on any platform, is generally discouraged
because it typically requires non-default kernel settings to
allow for large allocations (see Section 18.4.1).
dynamic_shared_memory_type (enum) #
Specifies the dynamic shared memory implementation that the
server should use. Possible values are posix (for POSIX shared
memory allocated using shm_open), sysv (for System V shared
memory allocated via shmget), windows (for Windows shared
memory), and mmap (to simulate shared memory using memory-mapped
files stored in the data directory). Not all values are
supported on all platforms; the first supported option is
usually the default for that platform. The use of the mmap
option, which is not the default on any platform, is generally
discouraged because the operating system may write modified
pages back to disk repeatedly, increasing system I/O load;
however, it may be useful for debugging, when the pg_dynshmem
directory is stored on a RAM disk, or when other shared memory
facilities are not available.
min_dynamic_shared_memory (integer) #
Specifies the amount of memory that should be allocated at
server startup for use by parallel queries. When this memory
region is insufficient or exhausted by concurrent queries, new
parallel queries try to allocate extra shared memory temporarily
from the operating system using the method configured with
dynamic_shared_memory_type, which may be slower due to memory
management overheads. Memory that is allocated at startup with
min_dynamic_shared_memory is affected by the huge_pages setting
on operating systems where that is supported, and may be more
likely to benefit from larger pages on operating systems where
that is managed automatically. The default value is 0 (none).
This parameter can only be set at server start.
19.4.2. Disk #

temp_file_limit (integer) #
Specifies the maximum amount of disk space that a process can
use for temporary files, such as sort and hash temporary files,
or the storage file for a held cursor. A transaction attempting
to exceed this limit will be canceled. If this value is
specified without units, it is taken as kilobytes. -1 (the
default) means no limit. Only superusers and users with the
appropriate SET privilege can change this setting.

This setting constrains the total space used at any instant by
all temporary files used by a given PostgreSQL process. Note
that disk space used for explicit temporary tables, as opposed
to temporary files used behind-the-scenes in query execution,
does not count against this limit.
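A minimal sketch of capping per-process temporary file usage for the current session (requires superuser or the appropriate SET privilege; the value is an illustrative assumption):

```sql
-- Cancel any transaction in this session whose temporary files
-- would exceed 10GB at any instant.
SET temp_file_limit = '10GB';
```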
file_copy_method (enum) #
Specifies the method used to copy files. Possible values are
COPY (default) and CLONE (if operating system support is
available).

This parameter affects:

+ CREATE DATABASE ... STRATEGY=FILE_COPY
+ ALTER DATABASE ... SET TABLESPACE ...

CLONE uses the copy_file_range() (Linux, FreeBSD) or copyfile
(macOS) system calls, giving the kernel the opportunity to share
disk blocks or push work down to lower layers on some file
systems.
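Assuming the underlying file system supports block cloning, the two commands above can be combined as in this sketch (database names are hypothetical):

```sql
-- Prefer kernel-level cloning for the file copy, then create a
-- database using the file-copy strategy.
SET file_copy_method = 'clone';
CREATE DATABASE app_copy TEMPLATE app STRATEGY = FILE_COPY;
```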
max_notify_queue_pages (integer) #
Specifies the maximum number of pages allocated for the NOTIFY /
LISTEN queue. The default value is 1048576. With the default
page size of 8 kB, this allows up to 8 GB of disk space to be
consumed.
19.4.3. Kernel Resource Usage #
max_files_per_process (integer) #
Sets the maximum number of open files each server subprocess is
allowed to open simultaneously; files already opened in the
postmaster are not counted toward this limit. The default is one
thousand files.

If the kernel is enforcing a safe per-process limit, you don't
need to worry about this setting. But on some platforms
(notably, most BSD systems), the kernel will allow individual
processes to open many more files than the system can actually
support if many processes all try to open that many files. If
you find yourself seeing “Too many open files” failures, try
reducing this setting. This parameter can only be set at server
start.
19.4.4. Background Writer #

There is a separate server process called the background writer, whose
function is to issue writes of “dirty” (new or modified) shared
buffers. When the number of clean shared buffers appears to be
insufficient, the background writer writes some dirty buffers to the
file system and marks them as clean. This reduces the likelihood that
server processes handling user queries will be unable to find clean
buffers and have to write dirty buffers themselves. However, the
background writer does cause a net overall increase in I/O load,
because while a repeatedly-dirtied page might otherwise be written only
once per checkpoint interval, the background writer might write it
several times as it is dirtied in the same interval. The parameters
discussed in this subsection can be used to tune the behavior for local
needs.
bgwriter_delay (integer) #
Specifies the delay between activity rounds for the background
writer. In each round the writer issues writes for some number
of dirty buffers (controllable by the following parameters). It
then sleeps for the length of bgwriter_delay, and repeats. When
there are no dirty buffers in the buffer pool, though, it goes
into a longer sleep regardless of bgwriter_delay. If this value
is specified without units, it is taken as milliseconds. The
default value is 200 milliseconds (200ms). Note that on some
systems, the effective resolution of sleep delays is 10
milliseconds; setting bgwriter_delay to a value that is not a
multiple of 10 might have the same results as setting it to the
next higher multiple of 10. This parameter can only be set in
the postgresql.conf file or on the server command line.
bgwriter_lru_maxpages (integer) #
In each round, no more than this many buffers will be written by
the background writer. Setting this to zero disables background
writing. (Note that checkpoints, which are managed by a
separate, dedicated auxiliary process, are unaffected.) The
default value is 100 buffers. This parameter can only be set in
the postgresql.conf file or on the server command line.
bgwriter_lru_multiplier (floating point) #
The number of dirty buffers written in each round is based on
the number of new buffers that have been needed by server
processes during recent rounds. The average recent need is
multiplied by bgwriter_lru_multiplier to arrive at an estimate
of the number of buffers that will be needed during the next
round. Dirty buffers are written until there are that many
clean, reusable buffers available. (However, no more than
bgwriter_lru_maxpages buffers will be written per round.) Thus,
a setting of 1.0 represents a “just in time” policy of writing
exactly the number of buffers predicted to be needed. Larger
values provide some cushion against spikes in demand, while
smaller values intentionally leave writes to be done by server
processes. The default is 2.0. This parameter can only be set in
the postgresql.conf file or on the server command line.
bgwriter_flush_after (integer) #
Whenever more than this amount of data has been written by the
background writer, attempt to force the OS to issue these writes
to the underlying storage. Doing so will limit the amount of
dirty data in the kernel's page cache, reducing the likelihood
of stalls when an fsync is issued at the end of a checkpoint, or
when the OS writes data back in larger batches in the
background. Often that will result in greatly reduced
transaction latency, but there also are some cases, especially
with workloads that are bigger than shared_buffers, but smaller
than the OS's page cache, where performance might degrade. This
setting may have no effect on some platforms. If this value is
specified without units, it is taken as blocks, that is BLCKSZ
bytes, typically 8kB. The valid range is between 0, which
disables forced writeback, and 2MB. The default is 512kB on
Linux, 0 elsewhere. (If BLCKSZ is not 8kB, the default and
maximum values scale proportionally to it.) This parameter can
only be set in the postgresql.conf file or on the server command
line.
Smaller values of bgwriter_lru_maxpages and bgwriter_lru_multiplier
reduce the extra I/O load caused by the background writer, but make it
more likely that server processes will have to issue writes for
themselves, delaying interactive queries.
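A sketch of a gentler background writer per the trade-off above: fewer pages per round and a smaller cushion. The values are illustrative; the right balance is workload-dependent:

```sql
-- Write at most 50 buffers per 200ms round, targeting only the
-- predicted near-term demand (no cushion). Reload to apply.
ALTER SYSTEM SET bgwriter_delay = '200ms';
ALTER SYSTEM SET bgwriter_lru_maxpages = 50;
ALTER SYSTEM SET bgwriter_lru_multiplier = 1.0;
SELECT pg_reload_conf();
```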
19.4.5. I/O #

backend_flush_after (integer) #
Whenever more than this amount of data has been written by a
single backend, attempt to force the OS to issue these writes to
the underlying storage. Doing so will limit the amount of dirty
data in the kernel's page cache, reducing the likelihood of
stalls when an fsync is issued at the end of a checkpoint, or
when the OS writes data back in larger batches in the
background. Often that will result in greatly reduced
transaction latency, but there also are some cases, especially
with workloads that are bigger than shared_buffers, but smaller
than the OS's page cache, where performance might degrade. This
setting may have no effect on some platforms. If this value is
specified without units, it is taken as blocks, that is BLCKSZ
bytes, typically 8kB. The valid range is between 0, which
disables forced writeback, and 2MB. The default is 0, i.e., no
forced writeback. (If BLCKSZ is not 8kB, the maximum value
scales proportionally to it.)
effective_io_concurrency (integer) #
Sets the number of concurrent storage I/O operations that
PostgreSQL expects can be executed simultaneously. Raising this
value will increase the number of I/O operations that any
individual PostgreSQL session attempts to initiate in parallel.
The allowed range is 1 to 1000, or 0 to disable issuance of
asynchronous I/O requests. The default is 16.

Higher values will have the most impact on higher-latency
storage where queries otherwise experience noticeable I/O stalls
and on devices with high IOPS. Unnecessarily high values may
increase I/O latency for all queries on the system.

On systems with prefetch advice support,
effective_io_concurrency also controls the prefetch distance.

This value can be overridden for tables in a particular
tablespace by setting the tablespace parameter of the same name
(see ALTER TABLESPACE).
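The per-tablespace override can be sketched as follows (the tablespace name is hypothetical, and 256 is an illustrative value within the 1 to 1000 range):

```sql
-- Allow more concurrent I/O for tables on a fast NVMe tablespace.
ALTER TABLESPACE fast_nvme SET (effective_io_concurrency = 256);
```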
maintenance_io_concurrency (integer) #
Similar to effective_io_concurrency, but used for maintenance
work that is done on behalf of many client sessions.

The default is 16. This value can be overridden for tables in a
particular tablespace by setting the tablespace parameter of the
same name (see ALTER TABLESPACE).
io_max_combine_limit (integer) #
Controls the largest I/O size in operations that combine I/O,
and silently limits the user-settable parameter
io_combine_limit. This parameter can only be set in the
postgresql.conf file or on the server command line. The maximum
possible size depends on the operating system and block size,
but is typically 1MB on Unix and 128kB on Windows. The default
is 128kB.
io_combine_limit (integer) #
Controls the largest I/O size in operations that combine I/O. If
set higher than the io_max_combine_limit parameter, the lower
value will silently be used instead, so both may need to be
raised to increase the I/O size. The maximum possible size
depends on the operating system and block size, but is typically
1MB on Unix and 128kB on Windows. The default is 128kB.
io_max_concurrency (integer) #
Controls the maximum number of I/O operations that one process
can execute simultaneously.

The default setting of -1 selects a number based on
shared_buffers and the maximum number of processes
(max_connections, autovacuum_worker_slots, max_worker_processes
and max_wal_senders), but not more than 64.

This parameter can only be set at server start.
io_method (enum) #
Selects the method for executing asynchronous I/O. Possible
values are:

+ worker (execute asynchronous I/O using worker processes)
+ io_uring (execute asynchronous I/O using io_uring, requires a
  build with --with-liburing / -Dliburing)
+ sync (execute asynchronous-eligible I/O synchronously)

The default is worker.

This parameter can only be set at server start.
io_workers (integer) #
Selects the number of I/O worker processes to use. The default
is 3. This parameter can only be set in the postgresql.conf file
or on the server command line.

Only has an effect if io_method is set to worker.
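A sketch of configuring worker-based asynchronous I/O with a larger pool (the worker count is an illustrative assumption):

```sql
-- io_method requires a server restart; io_workers takes effect
-- on configuration reload and only matters with io_method = worker.
ALTER SYSTEM SET io_method = 'worker';
ALTER SYSTEM SET io_workers = 8;
```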
19.4.6. Worker Processes #

max_worker_processes (integer) #
Sets the maximum number of background processes that the cluster
can support. This parameter can only be set at server start. The
default is 8.

When running a standby server, you must set this parameter to
the same or higher value than on the primary server. Otherwise,
queries will not be allowed in the standby server.

When changing this value, consider also adjusting
max_parallel_workers, max_parallel_maintenance_workers, and
max_parallel_workers_per_gather.
max_parallel_workers_per_gather (integer) #
Sets the maximum number of workers that can be started by a
single Gather or Gather Merge node. Parallel workers are taken
from the pool of processes established by max_worker_processes,
limited by max_parallel_workers. Note that the requested number
of workers may not actually be available at run time. If this
occurs, the plan will run with fewer workers than expected,
which may be inefficient. The default value is 2. Setting this
value to 0 disables parallel query execution.
Note that parallel queries may consume very substantially more
resources than non-parallel queries, because each worker process
is a completely separate process which has roughly the same
impact on the system as an additional user session. This should
be taken into account when choosing a value for this setting, as
well as when configuring other settings that control resource
utilization, such as work_mem. Resource limits such as work_mem
are applied individually to each worker, which means the total
utilization may be much higher across all processes than it
would normally be for any single process. For example, a
parallel query using 4 workers may use up to 5 times as much CPU
time, memory, I/O bandwidth, and so forth as a query which uses
no workers at all.

For more information on parallel query, see Chapter 15.
max_parallel_maintenance_workers (integer) #
Sets the maximum number of parallel workers that can be started
by a single utility command. Currently, the parallel utility
commands that support the use of parallel workers are CREATE
INDEX when building a B-tree, GIN, or BRIN index, and VACUUM
without FULL option. Parallel workers are taken from the pool of
processes established by max_worker_processes, limited by
max_parallel_workers. Note that the requested number of workers
may not actually be available at run time. If this occurs, the
utility operation will run with fewer workers than expected. The
default value is 2. Setting this value to 0 disables the use of
parallel workers by utility commands.

Note that parallel utility commands should not consume
substantially more memory than equivalent non-parallel
operations. This strategy differs from that of parallel query,
where resource limits generally apply per worker process.
Parallel utility commands treat the resource limit
maintenance_work_mem as a limit to be applied to the entire
utility command, regardless of the number of parallel worker
processes. However, parallel utility commands may still consume
substantially more CPU resources and I/O bandwidth.
max_parallel_workers (integer) #
Sets the maximum number of workers that the cluster can support
for parallel operations. The default value is 8. When increasing
or decreasing this value, consider also adjusting
max_parallel_maintenance_workers and
max_parallel_workers_per_gather. Also, note that a setting for
this value which is higher than max_worker_processes will have
no effect, since parallel workers are taken from the pool of
worker processes established by that setting.
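Because the three parallel-worker settings nest inside one another, it helps to adjust them together. An illustrative sketch (values are assumptions, not recommendations):

```sql
-- Per-gather workers <= max_parallel_workers <= max_worker_processes.
-- max_worker_processes requires a restart; the others reload.
ALTER SYSTEM SET max_worker_processes = 16;
ALTER SYSTEM SET max_parallel_workers = 8;
ALTER SYSTEM SET max_parallel_workers_per_gather = 4;
```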
parallel_leader_participation (boolean) #
Allows the leader process to execute the query plan under Gather
and Gather Merge nodes instead of waiting for worker processes.
The default is on. Setting this value to off reduces the
likelihood that workers will become blocked because the leader
is not reading tuples fast enough, but requires the leader
process to wait for worker processes to start up before the
first tuples can be produced. The degree to which the leader can
help or hinder performance depends on the plan type, number of
workers and query duration.