pg_dump — export a PostgreSQL database as an SQL script or to other
formats

Synopsis

pg_dump [connection-option...] [option...] [dbname]

Description
pg_dump is a utility for exporting a PostgreSQL database. It makes
consistent exports even if the database is being used concurrently.
pg_dump does not block other users accessing the database (readers or
writers). Note, however, that except in simple cases, pg_dump is
generally not the right choice for taking regular backups of production
databases. See Chapter 25 for further discussion.
pg_dump only dumps a single database. To export an entire cluster, or
to export global objects that are common to all databases in a cluster
(such as roles and tablespaces), use pg_dumpall.
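For example (an illustration, not part of the original text), the
cluster-wide objects alone can be exported with pg_dumpall's
--globals-only option:

$ pg_dumpall --globals-only > globals.sql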
Dumps can be output in script or archive file formats. Script dumps are
plain-text files containing the SQL commands required to reconstruct
the database to the state it was in at the time it was saved. To
restore from such a script, feed it to psql. Script files can be used
to reconstruct the database even on other machines and other
architectures; with some modifications, even on other SQL database
products.
The alternative archive file formats must be used with pg_restore to
rebuild the database. They allow pg_restore to be selective about what
is restored, or even to reorder the items prior to being restored. The
archive file formats are designed to be portable across architectures.
When used with one of the archive file formats and combined with
pg_restore, pg_dump provides a flexible archival and transfer
mechanism. pg_dump can be used to export an entire database, then
pg_restore can be used to examine the archive and/or select which parts
of the database are to be restored. The most flexible output file
formats are the “custom” format (-Fc) and the “directory” format (-Fd).
They allow for selection and reordering of all archived items, support
parallel restoration, and are compressed by default. The “directory”
format is the only format that supports parallel dumps.
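For example (an illustration; the file and database names are
hypothetical), a custom-format archive can later be restored
selectively and in parallel:

$ pg_dump -Fc mydb > db.dump
$ pg_restore -j 4 -d newdb db.dump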
While running pg_dump, one should examine the output for any warnings
(printed on standard error), especially in light of the limitations
listed below.
Restoring a dump causes the destination to execute arbitrary code of
the source superusers' choice. Partial dumps and partial restores do
not limit that. If the source superusers are not trusted, the dumped
SQL statements must be inspected before restoring. Non-plain-text dumps
can be inspected by using pg_restore's --file option. Note that the
client running the dump and restore need not trust the source or
destination superusers.
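For example (an illustration; the file names are hypothetical), an
archive can be converted to a plain-text script for inspection without
connecting to any database:

$ pg_restore --file=inspect.sql db.dump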
Options

The following command-line options control the content and format of
the output.

dbname
Specifies the name of the database to be dumped. If this is not
specified, the environment variable PGDATABASE is used. If that
is not set, the user name specified for the connection is used.
-a
--data-only
Dump only the data, not the schema (data definitions) or
statistics. Table data, large objects, and sequence values are
dumped.

This option is similar to, but for historical reasons not
identical to, specifying --section=data.
-b
--large-objects
--blobs (deprecated)
Include large objects in the dump. This is the default behavior
except when --schema, --table, --schema-only, --statistics-only,
or --no-data is specified. The -b switch is therefore only
useful to add large objects to dumps where a specific schema or
table has been requested. Note that large objects are considered
data and therefore will be included when --data-only is used,
but not when --schema-only or --statistics-only is.
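For example (an illustration; the schema name is hypothetical), to add
large objects to a schema-limited dump:

$ pg_dump -n myschema --large-objects mydb > db.sql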
-B
--no-large-objects
--no-blobs (deprecated)
Exclude large objects in the dump.

When both -b and -B are given, the behavior is to output large
objects when data is being dumped; see the -b documentation.
-c
--clean
Output commands to DROP all the dumped database objects prior to
outputting the commands for creating them. This option is useful
when the restore is to overwrite an existing database. If any of
the objects do not exist in the destination database, ignorable
error messages will be reported during restore, unless
--if-exists is also specified.

This option is ignored when emitting an archive (non-text)
output file. For the archive formats, you can specify the option
when you call pg_restore.
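In plain-text mode, --clean is commonly paired with --if-exists so
that objects missing from the destination do not produce errors (an
illustration):

$ pg_dump --clean --if-exists mydb > db.sql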
-C
--create
Begin the output with a command to create the database itself
and reconnect to the created database. (With a script of this
form, it doesn't matter which database in the destination
installation you connect to before running the script.) If
--clean is also specified, the script drops and recreates the
target database before reconnecting to it.

With --create, the output also includes the database's comment
if any, and any configuration variable settings that are
specific to this database, that is, any ALTER DATABASE ... SET
... and ALTER ROLE ... IN DATABASE ... SET ... commands that
mention this database. Access privileges for the database itself
are also dumped, unless --no-acl is specified.

This option is ignored when emitting an archive (non-text)
output file. For the archive formats, you can specify the option
when you call pg_restore.
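For example (an illustration; the database names are hypothetical), a
script made with --create can be restored while connected to any
convenient database, such as postgres:

$ pg_dump -C mydb > db.sql
$ psql -X -d postgres -f db.sql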
-e pattern
--extension=pattern
Dump only extensions matching pattern. When this option is not
specified, all non-system extensions in the target database will
be dumped. Multiple extensions can be selected by writing
multiple -e switches. The pattern parameter is interpreted as a
pattern according to the same rules used by psql's \d commands
(see Patterns), so multiple extensions can also be selected by
writing wildcard characters in the pattern. When using
wildcards, be careful to quote the pattern if needed to prevent
the shell from expanding the wildcards.

Any configuration relation registered by
pg_extension_config_dump is included in the dump if its
extension is specified by --extension.

When -e is specified, pg_dump makes no attempt to dump any other
database objects that the selected extension(s) might depend
upon. Therefore, there is no guarantee that the results of a
specific-extension dump can be successfully restored by
themselves into a clean database.
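For example (an illustration; assumes the hstore extension is
installed in mydb):

$ pg_dump -e hstore mydb > db.sql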
-E encoding
--encoding=encoding
Create the dump in the specified character set encoding. By
default, the dump is created in the database encoding. (Another
way to get the same result is to set the PGCLIENTENCODING
environment variable to the desired dump encoding.) The
supported encodings are described in Section 23.3.1.
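For example (an illustration), to force a UTF8-encoded dump regardless
of the database encoding:

$ pg_dump -E UTF8 mydb > db.sql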
-f file
--file=file
Send output to the specified file. This parameter can be omitted
for file-based output formats, in which case the standard output
is used. It must be given for the directory output format,
however, where it specifies the target directory instead of a
file. In this case the directory is created by pg_dump and must
not exist before.
-F format
--format=format
Selects the format of the output. format can be one of the
following:
p
plain
Output a plain-text SQL script file (the default).
c
custom
Output a custom-format archive suitable for input into
pg_restore. Together with the directory output format,
this is the most flexible output format in that it allows
manual selection and reordering of archived items during
restore. This format is also compressed by default.
d
directory
Output a directory-format archive suitable for input into
pg_restore. This will create a directory with one file for
each table and large object being dumped, plus a so-called
Table of Contents file describing the dumped objects in a
machine-readable format that pg_restore can read. A
directory format archive can be manipulated with standard
Unix tools; for example, files in an uncompressed archive
can be compressed with the gzip, lz4, or zstd tools. This
format is compressed by default using gzip and also
supports parallel dumps.
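For instance (an illustration; the file name 3023.dat is hypothetical,
since actual names are assigned by pg_dump), a table-data file in an
uncompressed directory archive can be compressed after the fact:

$ gzip dumpdir/3023.dat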
t
tar
Output a tar-format archive suitable for input into
pg_restore. The tar format is compatible with the
directory format: extracting a tar-format archive produces
a valid directory-format archive. However, the tar format
does not support compression. Also, when using tar format
the relative order of table data items cannot be changed
during restore.
-j njobs
--jobs=njobs
Run the dump in parallel by dumping njobs tables simultaneously.
This option may reduce the time needed to perform the dump but
it also increases the load on the database server. You can only
use this option with the directory output format because this is
the only output format where multiple processes can write their
data at the same time.

pg_dump will open njobs + 1 connections to the database, so make
sure your max_connections setting is high enough to accommodate
all connections.
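For example (an illustration), to dump with eight workers after
checking that the server allows enough connections:

$ psql -d mydb -c 'SHOW max_connections'
$ pg_dump -Fd -j 8 -f dumpdir mydb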
Requesting exclusive locks on database objects while running a
parallel dump could cause the dump to fail. The reason is that
the pg_dump leader process requests shared locks (ACCESS SHARE)
on the objects that the worker processes are going to dump later
in order to make sure that nobody deletes them and makes them go
away while the dump is running. If another client then requests
an exclusive lock on a table, that lock will not be granted but
will be queued waiting for the shared lock of the leader process
to be released. Consequently any other access to the table will
not be granted either and will queue after the exclusive lock
request. This includes the worker process trying to dump the
table. Without any precautions this would be a classic deadlock
situation. To detect this conflict, the pg_dump worker process
requests another shared lock using the NOWAIT option. If the
worker process is not granted this shared lock, somebody else
must have requested an exclusive lock in the meantime and there
is no way to continue with the dump, so pg_dump has no choice
but to abort the dump.
To perform a parallel dump, the database server needs to support
synchronized snapshots, a feature that was introduced in
PostgreSQL 9.2 for primary servers and 10 for standbys. With
this feature, database clients can ensure they see the same data
set even though they use different connections. pg_dump -j uses
multiple database connections; it connects to the database once
with the leader process and once again for each worker job.
Without the synchronized snapshot feature, the different worker
jobs wouldn't be guaranteed to see the same data in each
connection, which could lead to an inconsistent backup.
-n pattern
--schema=pattern
Dump only schemas matching pattern; this selects both the schema
itself, and all its contained objects. When this option is not
specified, all non-system schemas in the target database will be
dumped. Multiple schemas can be selected by writing multiple -n
switches. The pattern parameter is interpreted as a pattern
according to the same rules used by psql's \d commands (see
Patterns), so multiple schemas can also be selected by writing
wildcard characters in the pattern. When using wildcards, be
careful to quote the pattern if needed to prevent the shell from
expanding the wildcards; see Examples below.
When -n is specified, pg_dump makes no attempt to dump any other
database objects that the selected schema(s) might depend upon.
Therefore, there is no guarantee that the results of a
specific-schema dump can be successfully restored by themselves
into a clean database.
Non-schema objects such as large objects are not dumped when -n
is specified. You can add large objects back to the dump with
the --large-objects switch.
-N pattern
--exclude-schema=pattern
Do not dump any schemas matching pattern. The pattern is
interpreted according to the same rules as for -n. -N can be
given more than once to exclude schemas matching any of several
patterns.

When both -n and -N are given, the behavior is to dump just the
schemas that match at least one -n switch but no -N switches. If
-N appears without -n, then schemas matching -N are excluded
from what is otherwise a normal dump.
-O
--no-owner
Do not output commands to set ownership of objects to match the
original database. By default, pg_dump issues ALTER OWNER or SET
SESSION AUTHORIZATION statements to set ownership of created
database objects. These statements will fail when the script is
run unless it is started by a superuser (or the same user that
owns all of the objects in the script). To make a script that
can be restored by any user, but will give that user ownership
of all the objects, specify -O.

This option is ignored when emitting an archive (non-text)
output file. For the archive formats, you can specify the option
when you call pg_restore.
-R
--no-reconnect
This option is obsolete but still accepted for backwards
compatibility.
-s
--schema-only
Dump only the object definitions (schema), not data or
statistics.

This option cannot be used with --data-only or
--statistics-only. It is similar to, but for historical reasons
not identical to, specifying --section=pre-data
--section=post-data.

(Do not confuse this with the --schema option, which uses the
word “schema” in a different meaning.)

To exclude table data for only a subset of tables in the
database, see --exclude-table-data.
-S username
--superuser=username
Specify the superuser user name to use when disabling triggers.
This is relevant only if --disable-triggers is used. (Usually,
it's better to leave this out, and instead start the resulting
script as superuser.)
-t pattern
--table=pattern
Dump only tables with names matching pattern. Multiple tables
can be selected by writing multiple -t switches. The pattern
parameter is interpreted as a pattern according to the same
rules used by psql's \d commands (see Patterns), so multiple
tables can also be selected by writing wildcard characters in
the pattern. When using wildcards, be careful to quote the
pattern if needed to prevent the shell from expanding the
wildcards; see Examples below.

As well as tables, this option can be used to dump the
definition of matching views, materialized views, foreign
tables, and sequences. It will not dump the contents of views or
materialized views, and the contents of foreign tables will only
be dumped if the corresponding foreign server is specified with
--include-foreign-data.
The -n and -N switches have no effect when -t is used, because
tables selected by -t will be dumped regardless of those
switches, and non-table objects will not be dumped.
When -t is specified, pg_dump makes no attempt to dump any other
database objects that the selected table(s) might depend upon.
Therefore, there is no guarantee that the results of a
specific-table dump can be successfully restored by themselves
into a clean database.
-T pattern
--exclude-table=pattern
Do not dump any tables matching pattern. The pattern is
interpreted according to the same rules as for -t. -T can be
given more than once to exclude tables matching any of several
patterns.

When both -t and -T are given, the behavior is to dump just the
tables that match at least one -t switch but no -T switches. If
-T appears without -t, then tables matching -T are excluded from
what is otherwise a normal dump.
-v
--verbose
Specifies verbose mode. This will cause pg_dump to output
detailed object comments and start/stop times to the dump file,
and progress messages to standard error. Repeating the option
causes additional debug-level messages to appear on standard
error.
-V
--version
Print the pg_dump version and exit.
-x
--no-privileges
--no-acl
Prevent dumping of access privileges (grant/revoke commands).
-Z level
-Z method[:detail]
--compress=level
--compress=method[:detail]
Specify the compression method and/or the compression level to
use. The compression method can be set to gzip, lz4, zstd, or
none for no compression. A compression detail string can
optionally be specified. If the detail string is an integer, it
specifies the compression level. Otherwise, it should be a
comma-separated list of items, each of the form keyword or
keyword=value. Currently, the supported keywords are level and
long.

If no compression level is specified, the default compression
level will be used. If only a level is specified without
mentioning an algorithm, gzip compression will be used if the
level is greater than 0, and no compression will be used if the
level is 0.

For the custom and directory archive formats, this specifies
compression of individual table-data segments, and the default
is to compress using gzip at a moderate level. For plain text
output, setting a nonzero compression level causes the entire
output file to be compressed, as though it had been fed through
gzip, lz4, or zstd; but the default is not to compress. With
zstd compression, long mode may improve the compression ratio,
at the cost of increased memory use.

The tar archive format currently does not support compression at
all.
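For example (an illustration), a directory archive compressed with
zstd at level 9 with long-distance matching enabled:

$ pg_dump -Fd --compress=zstd:level=9,long -f dumpdir mydb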
--binary-upgrade
This option is for use by in-place upgrade utilities. Its use
for other purposes is not recommended or supported. The behavior
of the option may change in future releases without notice.
--attribute-inserts
--column-inserts
Dump data as INSERT commands with explicit column names (INSERT
INTO table (column, ...) VALUES ...). This will make restoration
very slow; it is mainly useful for making dumps that can be
loaded into non-PostgreSQL databases. Any error during restoring
will cause only rows that are part of the problematic INSERT to
be lost, rather than the entire table contents.
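With this option each row is emitted in roughly the following form (an
illustration; the table and column names are hypothetical):

INSERT INTO mytable (id, name) VALUES (1, 'one');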
--disable-dollar-quoting
This option disables the use of dollar quoting for function
bodies, and forces them to be quoted using SQL standard string
syntax.
--disable-triggers
This option is relevant only when creating a dump that includes
data but does not include schema. It instructs pg_dump to
include commands to temporarily disable triggers on the target
tables while the data is restored. Use this if you have
referential integrity checks or other triggers on the tables
that you do not want to invoke during data restore.

Presently, the commands emitted for --disable-triggers must be
done as superuser. So, you should also specify a superuser name
with -S, or preferably be careful to start the resulting script
as a superuser.

This option is ignored when emitting an archive (non-text)
output file. For the archive formats, you can specify the option
when you call pg_restore.
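For example (an illustration; assumes the restore is run as the
postgres superuser):

$ pg_dump --data-only --disable-triggers mydb > data.sql
$ psql -X -U postgres -d newdb -f data.sql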
--enable-row-security
This option is relevant only when dumping the contents of a
table which has row security. By default, pg_dump will set
row_security to off, to ensure that all data is dumped from the
table. If the user does not have sufficient privileges to bypass
row security, then an error is thrown. This parameter instructs
pg_dump to set row_security to on instead, allowing the user to
dump the parts of the contents of the table that they have
access to.

Note that if you use this option currently, you probably also
want the dump to be in INSERT format, as the COPY FROM during
restore does not support row security.
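For example (an illustration; the table name is hypothetical), to dump
only the rows the current user is allowed to see:

$ pg_dump --enable-row-security --inserts -t protected_tbl mydb > db.sql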
--exclude-extension=pattern
Do not dump any extensions matching pattern. The pattern is
interpreted according to the same rules as for -e.
--exclude-extension can be given more than once to exclude
extensions matching any of several patterns.

When both -e and --exclude-extension are given, the behavior is
to dump just the extensions that match at least one -e switch
but no --exclude-extension switches. If --exclude-extension
appears without -e, then extensions matching --exclude-extension
are excluded from what is otherwise a normal dump.
--exclude-table-and-children=pattern
This is the same as the -T/--exclude-table option, except that
it also excludes any partitions or inheritance child tables of
the table(s) matching the pattern.
--exclude-table-data=pattern
Do not dump data for any tables matching pattern. The pattern is
interpreted according to the same rules as for -t.
--exclude-table-data can be given more than once to exclude
tables matching any of several patterns. This option is useful
when you need the definition of a particular table even though
you do not need the data in it.

To exclude data for all tables in the database, see
--schema-only or --statistics-only.
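For example (an illustration; audit_log is a hypothetical table name),
to keep a table's definition but omit its contents:

$ pg_dump --exclude-table-data=audit_log mydb > db.sql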
--exclude-table-data-and-children=pattern
This is the same as the --exclude-table-data option, except that
it also excludes data of any partitions or inheritance child
tables of the table(s) matching the pattern.
--extra-float-digits=ndigits
Use the specified value of extra_float_digits when dumping
floating-point data, instead of the maximum available precision.
Routine dumps made for backup purposes should not use this
option.
--filter=filename
Specify a filename from which to read patterns for objects to
include or exclude from the dump. The patterns are interpreted
according to the same rules as the corresponding options:
-t/--table, --table-and-children, -T/--exclude-table, and
--exclude-table-and-children for tables, -n/--schema and
-N/--exclude-schema for schemas, --include-foreign-data for data
on foreign servers, --exclude-table-data and
--exclude-table-data-and-children for table data, and
-e/--extension and --exclude-extension for extensions. To read
from STDIN, use - as the filename. The --filter option can be
specified in conjunction with the above listed options for
including or excluding objects, and can also be specified more
than once for multiple filter files.
The file lists one object pattern per row, with the following
format:

{ include | exclude } { extension | foreign_data | table | table_and_children |
table_data | table_data_and_children | schema } PATTERN
The first keyword specifies whether the objects matched by the
pattern are to be included or excluded. The second keyword
specifies the type of object to be filtered using the pattern:

+ extension: extensions. This works like the -e/--extension or
  --exclude-extension option.
+ foreign_data: data on foreign servers. This works like the
  --include-foreign-data option. This keyword can only be used
  with the include keyword.
+ table: tables. This works like the -t/--table or
  -T/--exclude-table option.
+ table_and_children: tables including any partitions or
  inheritance child tables. This works like the
  --table-and-children or --exclude-table-and-children option.
+ table_data: table data of any tables matching pattern. This
  works like the --exclude-table-data option. This keyword can
  only be used with the exclude keyword.
+ table_data_and_children: table data of any tables matching
  pattern as well as any partitions or inheritance children of
  the table(s). This works like the
  --exclude-table-data-and-children option. This keyword can
  only be used with the exclude keyword.
+ schema: schemas. This works like the -n/--schema or
  -N/--exclude-schema option.
Lines starting with # are considered comments and ignored.
Comments can be placed after an object pattern row as well.
Blank lines are also ignored. See Patterns for how to perform
quoting in patterns.

Example files are listed below in the Examples section.
--if-exists
Use DROP ... IF EXISTS commands to drop objects in --clean mode.
This suppresses “does not exist” errors that might otherwise be
reported. This option is not valid unless --clean is also
specified.
--include-foreign-data=foreignserver
Dump the data for any foreign table with a foreign server
matching foreignserver pattern. Multiple foreign servers can be
selected by writing multiple --include-foreign-data switches.
Also, the foreignserver parameter is interpreted as a pattern
according to the same rules used by psql's \d commands (see
Patterns), so multiple foreign servers can also be selected by
writing wildcard characters in the pattern. When using
wildcards, be careful to quote the pattern if needed to prevent
the shell from expanding the wildcards; see Examples below. The
only exception is that an empty pattern is disallowed.
Using wildcards in --include-foreign-data may result in access
to unexpected foreign servers. Also, to use this option
securely, make sure that the named server has a trusted owner.
When --include-foreign-data is specified, pg_dump does not check
that the foreign table is writable. Therefore, there is no
guarantee that the results of a foreign table dump can be
successfully restored.
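For example (an illustration; remote_srv is a hypothetical foreign
server name):

$ pg_dump --include-foreign-data=remote_srv mydb > db.sql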
--inserts
Dump data as INSERT commands (rather than COPY). This will make
restoration very slow; it is mainly useful for making dumps that
can be loaded into non-PostgreSQL databases. Any error during
restoring will cause only rows that are part of the problematic
INSERT to be lost, rather than the entire table contents. Note
that the restore might fail altogether if you have rearranged
column order. The --column-inserts option is safe against column
order changes, though even slower.
--load-via-partition-root
When dumping data for a table partition, make the COPY or INSERT
statements target the root of the partitioning hierarchy that
contains it, rather than the partition itself. This causes the
appropriate partition to be re-determined for each row when the
data is loaded. This may be useful when restoring data on a
server where rows do not always fall into the same partitions as
they did on the original server. That could happen, for example,
if the partitioning column is of type text and the two systems
have different definitions of the collation used to sort the
partitioning column.
--lock-wait-timeout=timeout
Do not wait forever to acquire shared table locks at the
beginning of the dump. Instead fail if unable to lock a table
within the specified timeout. The timeout may be specified in
any of the formats accepted by SET statement_timeout. (Allowed
formats vary depending on the server version you are dumping
from, but an integer number of milliseconds is accepted by all
versions.)
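For example (an illustration), to fail rather than wait more than ten
seconds (10000 milliseconds, the format accepted by all server
versions) for any table lock:

$ pg_dump --lock-wait-timeout=10000 mydb > db.sql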
--no-comments
Do not dump COMMENT commands.

--no-data
Do not dump data.

--no-policies
Do not dump row security policies.

--no-publications
Do not dump publications.

--no-schema
Do not dump schema (data definitions).

--no-security-labels
Do not dump security labels.

--no-statistics
Do not dump statistics. This is the default.

--no-subscriptions
Do not dump subscriptions.

--no-sync
By default, pg_dump will wait for all files to be written safely
to disk. This option causes pg_dump to return without waiting,
which is faster, but means that a subsequent operating system
crash can leave the dump corrupt. Generally, this option is
useful for testing but should not be used when dumping data from
a production installation.
--no-table-access-method
Do not output commands to select table access methods. With this
option, all objects will be created with whichever table access
method is the default during restore.

This option is ignored when emitting an archive (non-text)
output file. For the archive formats, you can specify the option
when you call pg_restore.
--no-tablespaces
Do not output commands to select tablespaces. With this option,
all objects will be created in whichever tablespace is the
default during restore.

This option is ignored when emitting an archive (non-text)
output file. For the archive formats, you can specify the option
when you call pg_restore.
--no-toast-compression
Do not output commands to set TOAST compression methods. With
this option, all columns will be restored with the default
compression setting.
--no-unlogged-table-data
Do not dump the contents of unlogged tables and sequences. This
option has no effect on whether or not the table and sequence
definitions (schema) are dumped; it only suppresses dumping the
table and sequence data. Data in unlogged tables and sequences
is always excluded when dumping from a standby server.
--on-conflict-do-nothing
Add ON CONFLICT DO NOTHING to INSERT commands. This option is
not valid unless --inserts, --column-inserts or
--rows-per-insert is also specified.
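For example (an illustration), INSERT-style output that skips rows
already present in the destination:

$ pg_dump --inserts --on-conflict-do-nothing mydb > db.sql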
--quote-all-identifiers
Force quoting of all identifiers. This option is recommended
when dumping a database from a server whose PostgreSQL major
version is different from pg_dump's, or when the output is
intended to be loaded into a server of a different major
version. By default, pg_dump quotes only identifiers that are
reserved words in its own major version. This sometimes results
in compatibility issues when dealing with servers of other
versions that may have slightly different sets of reserved
words. Using --quote-all-identifiers prevents such issues, at
the price of a harder-to-read dump script.
--restrict-key=restrict_key
Use the provided string as the psql \restrict key in the dump
output. This can only be specified for plain-text dumps, i.e.,
when --format is set to plain or the --format option is omitted.
If no restrict key is specified, pg_dump will generate a random
one as needed. Keys may contain only alphanumeric characters.

This option is primarily intended for testing purposes and other
scenarios that require repeatable output (e.g., comparing dump
files). It is not recommended for general use, as a malicious
server with advance knowledge of the key may be able to inject
arbitrary code that will be executed on the machine that runs
psql with the dump output.
--rows-per-insert=nrows
Dump data as INSERT commands (rather than COPY). Controls the
maximum number of rows per INSERT command. The value specified
must be a number greater than zero. Any error during restoring
will cause only rows that are part of the problematic INSERT to
be lost, rather than the entire table contents.
--section=sectionname
Only dump the named section. The section name can be pre-data,
data, or post-data. This option can be specified more than once
to select multiple sections. The default is to dump all
sections.

The data section contains actual table data, large-object
contents, sequence values, and statistics for tables,
materialized views, and foreign tables. Post-data items include
definitions of indexes, triggers, rules, statistics for indexes,
and constraints other than validated check and not-null
constraints. Pre-data items include all other data definition
items.
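For example (an illustration), to dump everything except post-data
items such as indexes and constraints:

$ pg_dump --section=pre-data --section=data mydb > db.sql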
--sequence-data
Include sequence data in the dump. This is the default behavior
except when --no-data, --schema-only, or --statistics-only is
specified.
--serializable-deferrable
Use a serializable transaction for the dump, to ensure that the
snapshot used is consistent with later database states; but do
this by waiting for a point in the transaction stream at which
no anomalies can be present, so that there isn't a risk of the
dump failing or causing other transactions to roll back with a
serialization_failure. See Chapter 13 for more information about
transaction isolation and concurrency control.

This option is not beneficial for a dump which is intended only
for disaster recovery. It could be useful for a dump used to
load a copy of the database for reporting or other read-only
load sharing while the original database continues to be
updated. Without it the dump may reflect a state which is not
consistent with any serial execution of the transactions
eventually committed. For example, if batch processing
techniques are used, a batch may show as closed in the dump
without all of the items which are in the batch appearing.

This option will make no difference if there are no read-write
transactions active when pg_dump is started. If read-write
transactions are active, the start of the dump may be delayed
for an indeterminate length of time. Once running, performance
with or without the switch is the same.
--snapshot=snapshotname
Use the specified synchronized snapshot when making a dump of
the database (see Table 9.100 for more details).

This option is useful when needing to synchronize the dump with
a logical replication slot (see Chapter 47) or with a concurrent
session.

In the case of a parallel dump, the snapshot name defined by
this option is used rather than taking a new snapshot.
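For example (an illustration; the snapshot identifier shown is made
up), a snapshot exported in another session with pg_export_snapshot()
can be reused by the dump:

$ pg_dump --snapshot=00000003-0000001B-1 mydb > db.sql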
--statistics
Dump statistics.

--statistics-only
Dump only the statistics, not the schema (data definitions) or
data. Statistics for tables, materialized views, foreign tables,
and indexes are dumped.
--strict-names
Require that each extension (-e/--extension), schema
(-n/--schema) and table (-t/--table) pattern match at least one
extension/schema/table in the database to be dumped. This also
applies to filters used with --filter. Note that if none of the
extension/schema/table patterns find matches, pg_dump will
generate an error even without --strict-names.

This option has no effect on --exclude-extension,
-N/--exclude-schema, -T/--exclude-table, or
--exclude-table-data. An exclude pattern failing to match any
objects is not considered an error.
--sync-method=method
When set to fsync, which is the default, pg_dump
--format=directory will recursively open and synchronize all
files in the archive directory.

On Linux, syncfs may be used instead to ask the operating system
to synchronize the whole file system that contains the archive
directory. See recovery_init_sync_method for information about
the caveats to be aware of when using syncfs.

This option has no effect when --no-sync is used or --format is
not set to directory.
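For example (an illustration; syncfs is available only on Linux):

$ pg_dump -Fd --sync-method=syncfs -f dumpdir mydb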
--table-and-children=pattern
This is the same as the -t/--table option, except that it also
includes any partitions or inheritance child tables of the
table(s) matching the pattern.
--use-set-session-authorization
Output SQL-standard SET SESSION AUTHORIZATION commands instead
of ALTER OWNER commands to determine object ownership. This
makes the dump more standards-compatible, but depending on the
history of the objects in the dump, might not restore properly.
Also, a dump using SET SESSION AUTHORIZATION will certainly
require superuser privileges to restore correctly, whereas ALTER
OWNER requires lesser privileges.
-?
--help
Show help about pg_dump command line arguments, and exit.
The following command-line options control the database connection
parameters.

-d dbname
--dbname=dbname
Specifies the name of the database to connect to. This is
equivalent to specifying dbname as the first non-option argument
on the command line. The dbname can be a connection string. If
so, connection string parameters will override any conflicting
command line options.
-h host
--host=host
Specifies the host name of the machine on which the server is
running. If the value begins with a slash, it is used as the
directory for the Unix domain socket. The default is taken from
the PGHOST environment variable, if set, else a Unix domain
socket connection is attempted.
-p port
--port=port
Specifies the TCP port or local Unix domain socket file
extension on which the server is listening for connections.
Defaults to the PGPORT environment variable, if set, or a
compiled-in default.
-U username
--username=username
User name to connect as.
-w
--no-password
Never issue a password prompt. If the server requires password
authentication and a password is not available by other means
such as a .pgpass file, the connection attempt will fail. This
option can be useful in batch jobs and scripts where no user is
present to enter a password.
-W
--password
Force pg_dump to prompt for a password before connecting to a
database.

This option is never essential, since pg_dump will automatically
prompt for a password if the server demands password
authentication. However, pg_dump will waste a connection attempt
finding out that the server wants a password. In some cases it
is worth typing -W to avoid the extra connection attempt.
--role=rolename
Specifies a role name to be used to create the dump. This option
causes pg_dump to issue a SET ROLE rolename command after
connecting to the database. It is useful when the authenticated
user (specified by -U) lacks privileges needed by pg_dump, but
can switch to a role with the required rights. Some
installations have a policy against logging in directly as a
superuser, and use of this option allows dumps to be made
without violating the policy.
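For example (an illustration; the role and user names are
hypothetical), an unprivileged login that switches to a role with the
needed privileges:

$ pg_dump -U alice --role=dump_owner mydb > db.sql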
Environment

PGDATABASE
PGHOST
PGOPTIONS
PGPORT
PGUSER
Default connection parameters.
PG_COLOR
Specifies whether to use color in diagnostic messages. Possible
values are always, auto and never.
This utility, like most other PostgreSQL utilities, also uses the
environment variables supported by libpq (see Section 32.15).
Diagnostics

pg_dump internally executes SELECT statements. If you have problems
running pg_dump, make sure you are able to select information from the
database using, for example, psql. Also, any default connection
settings and environment variables used by the libpq front-end library
will apply.
The database activity of pg_dump is normally collected by the
cumulative statistics system. If this is undesirable, you can set
parameter track_counts to false via PGOPTIONS or the ALTER USER
command.
Notes

If your database cluster has any local additions to the template1
database, be careful to restore the output of pg_dump into a truly
empty database; otherwise you are likely to get errors due to duplicate
definitions of the added objects. To make an empty database without any
local additions, copy from template0 not template1, for example:

CREATE DATABASE foo WITH TEMPLATE template0;
When a dump without schema is chosen and the option --disable-triggers
is used, pg_dump emits commands to disable triggers on user tables
before inserting the data, and then commands to re-enable them after
the data has been inserted. If the restore is stopped in the middle,
the system catalogs might be left in the wrong state.
If --statistics is specified, pg_dump will include most optimizer
statistics in the resulting dump file. However, some statistics may not
be included, such as those created explicitly with CREATE STATISTICS or
custom statistics added by an extension. Therefore, it may be useful to
run ANALYZE after restoring from a dump file to ensure optimal
performance; see Section 24.1.3 and Section 24.1.6 for more
information.
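For example (an illustration), statistics can be rebuilt after a
restore with vacuumdb, or simply by running ANALYZE in the restored
database:

$ vacuumdb --analyze-only -d newdb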
Because pg_dump is used to transfer data to newer versions of
PostgreSQL, the output of pg_dump can be expected to load into
PostgreSQL server versions newer than pg_dump's version. pg_dump can
also dump from PostgreSQL servers older than its own version.
(Currently, servers back to version 9.2 are supported.) However,
pg_dump cannot dump from PostgreSQL servers newer than its own major
version; it will refuse to even try, rather than risk making an invalid
dump. Also, it is not guaranteed that pg_dump's output can be loaded
into a server of an older major version — not even if the dump was
taken from a server of that version. Loading a dump file into an older
server may require manual editing of the dump file to remove syntax not
understood by the older server. Use of the --quote-all-identifiers
option is recommended in cross-version cases, as it can prevent
problems arising from varying reserved-word lists in different
PostgreSQL versions.
When dumping logical replication subscriptions, pg_dump will generate
CREATE SUBSCRIPTION commands that use the connect = false option, so
that restoring the subscription does not make remote connections for
creating a replication slot or for initial table copy. That way, the
dump can be restored without requiring network access to the remote
servers. It is then up to the user to reactivate the subscriptions in a
suitable way. If the involved hosts have changed, the connection
information might have to be changed. It might also be appropriate to
truncate the target tables before initiating a new full table copy. If
users intend to copy initial data during refresh they must create the
slot with two_phase = false. After the initial sync, the two_phase
option will be automatically enabled by the subscriber if the
subscription had been originally created with the two_phase = true
option.
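For example (an illustration; mysub is a hypothetical subscription
name), a restored subscription can be reactivated once its connection
information has been verified:

ALTER SUBSCRIPTION mysub ENABLE;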
It is generally recommended to use the -X (--no-psqlrc) option when
restoring a database from a plain-text pg_dump script to ensure a clean
restore process and prevent potential conflicts with non-default psql
configurations.

Examples
To dump a database called mydb into an SQL-script file:
$ pg_dump mydb > db.sql

To reload such a script into a (freshly created) database named newdb:
$ psql -X -d newdb -f db.sql

To dump a database into a custom-format archive file:
$ pg_dump -Fc mydb > db.dump

To dump a database into a directory-format archive:
$ pg_dump -Fd mydb -f dumpdir

To dump a database into a directory-format archive in parallel with 5
worker jobs:
$ pg_dump -Fd mydb -j 5 -f dumpdir

To reload an archive file into a (freshly created) database named
newdb:
$ pg_restore -d newdb db.dump

To reload an archive file into the same database it was dumped from,
discarding the current contents of that database:
$ pg_restore -d postgres --clean --create db.dump

To dump a single table named mytab:
$ pg_dump -t mytab mydb > db.sql

To dump all tables whose names start with emp in the detroit schema,
except for the table named employee_log:
$ pg_dump -t 'detroit.emp*' -T detroit.employee_log mydb > db.sql

To dump all schemas whose names start with east or west and end in gsm,
excluding any schemas whose names contain the word test:
$ pg_dump -n 'east*gsm' -n 'west*gsm' -N '*test*' mydb > db.sql

The same, using regular expression notation to consolidate the
switches:
$ pg_dump -n '(east|west)*gsm' -N '*test*' mydb > db.sql

To dump all database objects except for tables whose names begin with
ts_:
$ pg_dump -T 'ts_*' mydb > db.sql

To specify an upper-case or mixed-case name in -t and related switches,
you need to double-quote the name; else it will be folded to lower case
(see Patterns). But double quotes are special to the shell, so in turn
they must be quoted. Thus, to dump a single table with a mixed-case
name, you need something like
$ pg_dump -t "\"MixedCaseName\"" mydb > mytab.sql

To dump all tables whose names start with mytable, except for table
mytable2, specify a filter file filter.txt like:
include table mytable*
exclude table mytable2

$ pg_dump --filter=filter.txt mydb > db.sql
See Also

pg_dumpall, pg_restore, psql