3 .\" Author: The PostgreSQL Global Development Group
4 .\" Generator: DocBook XSL Stylesheets vsnapshot <http://docbook.sf.net/>
6 .\" Manual: PostgreSQL 18.0 Documentation
7 .\" Source: PostgreSQL 18.0
10 .TH "PGBENCH" "1" "2025" "PostgreSQL 18.0" "PostgreSQL 18.0 Documentation"
11 .\" -----------------------------------------------------------------
12 .\" * Define some portability stuff
13 .\" -----------------------------------------------------------------
14 .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
15 .\" http://bugs.debian.org/507673
16 .\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
17 .\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
20 .\" -----------------------------------------------------------------
21 .\" * set default formatting
22 .\" -----------------------------------------------------------------
23 .\" disable hyphenation
25 .\" disable justification (adjust text to left margin only)
27 .\" -----------------------------------------------------------------
28 .\" * MAIN CONTENT STARTS HERE *
29 .\" -----------------------------------------------------------------
31 pgbench \- run a benchmark test on PostgreSQL
33 .HP \w'\fBpgbench\fR\ 'u
34 \fBpgbench\fR \fB\-i\fR [\fIoption\fR...] [\fIdbname\fR]
35 .HP \w'\fBpgbench\fR\ 'u
36 \fBpgbench\fR [\fIoption\fR...] [\fIdbname\fR]
40 is a simple program for running benchmark tests on
41 PostgreSQL\&. It runs the same sequence of SQL commands over and over, possibly in multiple concurrent database sessions, and then calculates the average transaction rate (transactions per second)\&. By default,
43 tests a scenario that is loosely based on TPC\-B, involving five
47 commands per transaction\&. However, it is easy to test other cases by writing your own transaction script files\&.
57 transaction type: <builtin: TPC\-B (sort of)>
62 maximum number of tries: 1
63 number of transactions per client: 1000
64 number of transactions actually processed: 10000/10000
65 number of failed transactions: 0 (0\&.000%)
66 latency average = 11\&.013 ms
67 latency stddev = 7\&.351 ms
68 initial connection time = 45\&.758 ms
69 tps = 896\&.967014 (without initial connection time)
75 The first seven lines report some of the most important parameter settings\&. The sixth line reports the maximum number of tries for transactions with serialization or deadlock errors (see
76 Failures and Serialization/Deadlock Retries
77 for more information)\&. The eighth line reports the number of transactions completed and intended (the latter being just the product of number of clients and number of transactions per client); these will be equal unless the run failed before completion or some SQL command(s) failed\&. (In
79 mode, only the actual number of transactions is printed\&.) The next line reports the number of failed transactions due to serialization or deadlock errors (see
80 Failures and Serialization/Deadlock Retries
81 for more information)\&. The last line reports the number of transactions per second\&.
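As a quick plausibility check on output like the above (an illustrative Python sketch, not pgbench's own computation, which measures throughput directly and excludes the initial connection time): with a fixed pool of clients, throughput is bounded by the number of clients divided by the average latency\&.

```python
# Back-of-the-envelope check on the sample numbers above: with a
# fixed pool of clients, throughput cannot exceed clients / latency.
clients = 10                # 10000 transactions / 1000 per client
latency_avg_s = 0.011013    # "latency average = 11.013 ms"
ceiling_tps = clients / latency_avg_s
print(round(ceiling_tps, 1))   # 908.0, just above the reported 896.97
```

The reported tps sits a little below this ceiling, which is what one would expect from per-transaction bookkeeping overhead\&.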
83 The default TPC\-B\-like transaction test requires specific tables to be set up beforehand\&.
85 should be invoked with the
87 (initialize) option to create and populate these tables\&. (When you are testing a custom script, you don\*(Aqt need this step, but will instead need to do whatever setup your test needs\&.) Initialization looks like:
93 pgbench \-i [ \fIother\-options\fR ] \fIdbname\fR
101 is the name of the already\-created database to test in\&. (You may also need
105 options to specify how to connect to the database server\&.)
111 .nr an-no-space-flag 1
124 pgbench_tellers, destroying any existing tables of these names\&. Be very careful to use another database if you have tables with these names!
130 of 1, the tables initially contain this many rows:
137 \-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-\-
140 pgbench_accounts 100000
147 You can (and, for most purposes, probably should) increase the number of rows by using the
149 (scale factor) option\&. The
151 (fillfactor) option might also be used at this point\&.
153 Once you have done the necessary setup, you can run your benchmark with a command that doesn\*(Aqt include
160 pgbench [ \fIoptions\fR ] \fIdbname\fR
166 In nearly all cases, you\*(Aqll need some options to make a useful test\&. The most important options are
170 (number of transactions),
174 (specify a custom script file)\&. See below for a full list\&.
177 The following is divided into three subsections\&. Different options are used during database initialization and while running benchmarks, but some options are useful in both cases\&.
178 .SS "Initialization Options"
181 accepts the following command\-line initialization arguments:
183 \fB[\-d]\fR\fB \fR\fB\fIdbname\fR\fR
185 \fB[\-\-dbname=]\fR\fB\fIdbname\fR\fR
187 Specifies the name of the database to test in\&. If this is not specified, the environment variable
189 is used\&. If that is not set, the user name specified for the connection is used\&.
196 Required to invoke initialization mode\&.
199 \fB\-I \fR\fB\fIinit_steps\fR\fR
201 \fB\-\-init\-steps=\fR\fB\fIinit_steps\fR\fR
203 Perform just a selected set of the normal initialization steps\&.
205 specifies the initialization steps to be performed, using one character per step\&. Each step is invoked in the specified order\&. The default is
206 dtgvp\&. The available steps are:
217 Create the tables used by the standard
226 g or G (Generate data, client\-side or server\-side)
228 Generate data and load it into the standard tables, replacing any data already present\&.
232 (client\-side data generation), data is generated in
234 client and then sent to the server\&. This uses the client/server bandwidth extensively through a
239 option to load data into ordinary (non\-partition) tables with version 14 or later of
241 to speed up subsequent
242 \fBVACUUM\fR\&. Using
244 causes logging to print one message every 100,000 rows while generating data for all tables\&.
248 (server\-side data generation), only small queries are sent from the
250 client and then data is actually generated in the server\&. No significant bandwidth is required for this variant, but the server will do more work\&. Using
252 causes logging not to print any progress message while generating data\&.
254 The default initialization behavior uses client\-side data generation (equivalent to
262 on the standard tables\&.
265 p (create Primary keys)
267 Create primary key indexes on the standard tables\&.
270 f (create Foreign keys)
272 Create foreign key constraints between the standard tables\&. (Note that this step is not performed by default\&.)
276 \fB\-F\fR \fIfillfactor\fR
278 \fB\-\-fillfactor=\fR\fIfillfactor\fR
285 tables with the given fillfactor\&. Default is 100\&.
292 Perform no vacuuming during initialization\&. (This option suppresses the
294 initialization step, even if it was specified in
302 Switch logging to quiet mode, producing only one progress message per 5 seconds\&. The default logging prints one message each 100,000 rows, which often outputs many lines per second (especially on good hardware)\&.
304 This setting has no effect if
310 \fB\-s\fR \fIscale_factor\fR
312 \fB\-\-scale=\fR\fIscale_factor\fR
314 Multiply the number of rows generated by the scale factor\&. For example,
316 will create 10,000,000 rows in the
318 table\&. Default is 1\&. When the scale is 20,000 or larger, the columns used to hold account identifiers (aid
319 columns) will switch to using larger integers (bigint), in order to be big enough to hold the range of account identifiers\&.
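The effect of the scale factor can be sketched as follows (illustrative Python; the per-scale multipliers of 1 branch, 10 tellers, and 100,000 accounts are pgbench's standard values)\&.

```python
# Row counts of the standard tables at a given scale factor, and the
# column type pgbench uses for account identifiers.
def standard_rows(scale):
    return {
        "pgbench_branches": 1 * scale,
        "pgbench_tellers": 10 * scale,
        "pgbench_accounts": 100000 * scale,
        "pgbench_history": 0,          # filled during the benchmark run
    }

def aid_column_type(scale):
    # At scale 20000 and above, the identifier range no longer fits
    # comfortably in a 4-byte integer, so bigint is used instead.
    return "bigint" if scale >= 20000 else "integer"

print(standard_rows(100)["pgbench_accounts"])  # 10000000, as with -s 100
```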
322 \fB\-\-foreign\-keys\fR
324 Create foreign key constraints between the standard tables\&. (This option adds the
326 step to the initialization step sequence, if it is not already present\&.)
329 \fB\-\-index\-tablespace=\fR\fB\fIindex_tablespace\fR\fR
331 Create indexes in the specified tablespace, rather than the default tablespace\&.
334 \fB\-\-partition\-method=\fR\fB\fINAME\fR\fR
340 method\&. Expected values are
343 hash\&. This option requires that
345 is set to non\-zero\&. If unspecified, default is
349 \fB\-\-partitions=\fR\fB\fINUM\fR\fR
355 partitions of nearly equal size for the scaled number of accounts\&. Default is
356 0, meaning no partitioning\&.
359 \fB\-\-tablespace=\fR\fB\fItablespace\fR\fR
361 Create tables in the specified tablespace, rather than the default tablespace\&.
364 \fB\-\-unlogged\-tables\fR
366 Create all tables as unlogged tables, rather than permanent tables\&.
368 .SS "Benchmarking Options"
371 accepts the following command\-line benchmarking arguments:
373 \fB\-b\fR \fIscriptname[@weight]\fR
375 \fB\-\-builtin\fR=\fIscriptname[@weight]\fR
377 Add the specified built\-in script to the list of scripts to be executed\&. Available built\-in scripts are:
381 select\-only\&. Unambiguous prefixes of built\-in names are accepted\&. With the special name
382 list, show the list of built\-in scripts and exit immediately\&.
384 Optionally, write an integer weight after
386 to adjust the probability of selecting this script versus other ones\&. The default weight is 1\&. See below for details\&.
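The weights translate into selection probabilities as sketched below (illustrative Python; the script names and weights are hypothetical)\&.

```python
import random

# Each transaction picks a script with probability weight / sum(weights),
# e.g. the mix given by  -b tpcb-like@2 -b select-only@8.
def selection_probabilities(weights):
    total = sum(weights.values())
    return {name: w / total for name, w in weights.items()}

def pick_script(weights, rng=random):
    names = list(weights)
    return rng.choices(names, weights=[weights[n] for n in names])[0]

probs = selection_probabilities({"tpcb-like": 2, "select-only": 8})
print(probs["select-only"])   # 0.8
```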
389 \fB\-c\fR \fIclients\fR
391 \fB\-\-client=\fR\fIclients\fR
393 Number of clients simulated, that is, number of concurrent database sessions\&. Default is 1\&.
400 Establish a new connection for each transaction, rather than doing it just once per client session\&. This is useful to measure the connection overhead\&.
403 \fB\-D\fR \fIvarname\fR=\fIvalue\fR
405 \fB\-\-define=\fR\fIvarname\fR=\fIvalue\fR
407 Define a variable for use by a custom script (see below)\&. Multiple
409 options are allowed\&.
412 \fB\-f\fR \fIfilename[@weight]\fR
414 \fB\-\-file=\fR\fIfilename[@weight]\fR
416 Add a transaction script read from
418 to the list of scripts to be executed\&.
420 Optionally, write an integer weight after
422 to adjust the probability of selecting this script versus other ones\&. The default weight is 1\&. (To use a script file name that includes an
424 character, append a weight so that there is no ambiguity, for example
425 filen@me@1\&.) See below for details\&.
428 \fB\-j\fR \fIthreads\fR
430 \fB\-\-jobs=\fR\fIthreads\fR
432 Number of worker threads within
433 pgbench\&. Using more than one thread can be helpful on multi\-CPU machines\&. Clients are distributed as evenly as possible among available threads\&. Default is 1\&.
440 Write information about each transaction to a log file\&. See below for details\&.
443 \fB\-L\fR \fIlimit\fR
445 \fB\-\-latency\-limit=\fR\fIlimit\fR
447 Transactions that last more than
449 milliseconds are counted and reported separately, as
452 When throttling is used (\fB\-\-rate=\&.\&.\&.\fR), transactions that lag behind schedule by more than
454 ms, and thus have no hope of meeting the latency limit, are not sent to the server at all\&. They are counted and reported separately as
459 option is used, a transaction which fails due to a serialization anomaly or from a deadlock will not be retried if the total time of all its tries is greater than
461 ms\&. To limit only the time of tries and not their number, use
462 \-\-max\-tries=0\&. By default, the option
464 is set to 1 and transactions with serialization/deadlock errors are not retried\&. See
465 Failures and Serialization/Deadlock Retries
466 for more information about retrying such transactions\&.
469 \fB\-M\fR \fIquerymode\fR
471 \fB\-\-protocol=\fR\fIquerymode\fR
473 Protocol to use for submitting queries to the server:
483 simple: use simple query protocol\&.
494 extended: use extended query protocol\&.
505 prepared: use extended query protocol with prepared statements\&.
512 reuses the parse analysis result starting from the second query iteration, so
514 runs faster than in other modes\&.
516 The default is simple query protocol\&. (See
518 for more information\&.)
525 Perform no vacuuming before running the test\&. This option is
527 if you are running a custom test scenario that does not include the standard tables
536 \fB\-\-skip\-some\-updates\fR
538 Run built\-in simple\-update script\&. Shorthand for
539 \fB\-b simple\-update\fR\&.
544 \fB\-\-progress=\fR\fIsec\fR
546 Show progress report every
548 seconds\&. The report includes the time since the beginning of the run, the TPS since the last report, and the transaction latency average, standard deviation, and the number of failed transactions since the last report\&. Under throttling (\fB\-R\fR), the latency is computed with respect to the transaction scheduled start time, not the actual transaction beginning time, thus it also includes the average schedule lag time\&. When
550 is used to enable transaction retries after serialization/deadlock errors, the report includes the number of retried transactions and the sum of all retries\&.
555 \fB\-\-report\-per\-command\fR
557 Report the following statistics for each command after the benchmark finishes: the average per\-statement latency (execution time from the perspective of the client), the number of failures, and the number of retries after serialization or deadlock errors in this command\&. The report displays retry statistics only if the
559 option is not equal to 1\&.
564 \fB\-\-rate=\fR\fIrate\fR
566 Execute transactions targeting the specified rate instead of running as fast as possible (the default)\&. The rate is given in transactions per second\&. If the targeted rate is above the maximum possible rate, the rate limit won\*(Aqt impact the results\&.
568 The rate is targeted by starting transactions along a Poisson\-distributed schedule time line\&. The expected start time schedule moves forward based on when the client first started, not when the previous transaction ended\&. That approach means that when transactions go past their original scheduled end time, it is possible for later ones to catch up again\&.
570 When throttling is active, the transaction latency reported at the end of the run is calculated from the scheduled start times, so it includes the time each transaction had to wait for the previous transaction to finish\&. The wait time is called the schedule lag time, and its average and maximum are also reported separately\&. The transaction latency with respect to the actual transaction start time, i\&.e\&., the time spent executing the transaction in the database, can be computed by subtracting the schedule lag time from the reported latency\&.
573 \fB\-\-latency\-limit\fR
574 is used together with
575 \fB\-\-rate\fR, a transaction can lag behind so much that it is already over the latency limit when the previous transaction ends, because the latency is calculated from the scheduled start time\&. Such transactions are not sent to the server, but are skipped altogether and counted separately\&.
577 A high schedule lag time is an indication that the system cannot process transactions at the specified rate, with the chosen number of clients and threads\&. When the average transaction execution time is longer than the scheduled interval between each transaction, each successive transaction will fall further behind, and the schedule lag time will keep increasing the longer the test run is\&. When that happens, you will have to reduce the specified transaction rate\&.
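The scheduling and lag behavior described above can be modeled as follows (an illustrative Python sketch, not pgbench's implementation): scheduled start times follow a Poisson process anchored at the client's start, and the schedule lag of each transaction is how long it had to wait past its scheduled start\&.

```python
import random

def poisson_schedule(rate_tps, n, seed=1):
    """Scheduled start times: exponential inter-arrival times with mean
    1/rate, accumulated from the moment the client started (the schedule
    does not depend on when the previous transaction finished)."""
    rng = random.Random(seed)
    t, times = 0.0, []
    for _ in range(n):
        t += rng.expovariate(rate_tps)
        times.append(t)
    return times

def schedule_lags(scheduled, service_time_s):
    """Each transaction starts at its scheduled time or when the previous
    one finished, whichever is later; the difference is the schedule lag."""
    end, lags = 0.0, []
    for s in scheduled:
        start = max(s, end)
        lags.append(start - s)
        end = start + service_time_s
    return lags

sched = poisson_schedule(rate_tps=100.0, n=2000)
# Service time (20 ms) longer than the 10 ms mean interval: each
# transaction falls further behind, so the lag keeps growing.
lags = schedule_lags(sched, service_time_s=0.02)
print(lags[-1] > lags[1000] > 0)   # True
```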
580 \fB\-s\fR \fIscale_factor\fR
582 \fB\-\-scale=\fR\fIscale_factor\fR
584 Report the specified scale factor in
585 pgbench\*(Aqs output\&. With the built\-in tests, this is not necessary; the correct scale factor will be detected by counting the number of rows in the
587 table\&. However, when testing only custom benchmarks (\fB\-f\fR
588 option), the scale factor will be reported as 1 unless this option is used\&.
593 \fB\-\-select\-only\fR
595 Run built\-in select\-only script\&. Shorthand for
596 \fB\-b select\-only\fR\&.
599 \fB\-t\fR \fItransactions\fR
601 \fB\-\-transactions=\fR\fItransactions\fR
603 Number of transactions each client runs\&. Default is 10\&.
606 \fB\-T\fR \fIseconds\fR
608 \fB\-\-time=\fR\fIseconds\fR
610 Run the test for this many seconds, rather than a fixed number of transactions per client\&.
614 are mutually exclusive\&.
619 \fB\-\-vacuum\-all\fR
621 Vacuum all four standard tables before running the test\&. With neither
630 tables, and will truncate
634 \fB\-\-aggregate\-interval=\fR\fB\fIseconds\fR\fR
636 Length of aggregation interval (in seconds)\&. May be used only with
638 option\&. With this option, the log contains per\-interval summary data, as described below\&.
641 \fB\-\-exit\-on\-abort\fR
643 Exit immediately when any client is aborted due to some error\&. Without this option, even when a client is aborted, other clients could continue their run as specified by
649 will print incomplete results in this case\&.
651 Note that serialization failures or deadlock failures do not abort the client, so they are not affected by this option\&. See
652 Failures and Serialization/Deadlock Retries
653 for more information\&.
656 \fB\-\-failures\-detailed\fR
658 Report failures in per\-transaction and aggregation logs, as well as in the main and per\-script reports, grouped by the following types:
668 serialization failures;
683 Failures and Serialization/Deadlock Retries
684 for more information\&.
687 \fB\-\-log\-prefix=\fR\fB\fIprefix\fR\fR
689 Set the filename prefix for the log files created by
690 \fB\-\-log\fR\&. The default is
694 \fB\-\-max\-tries=\fR\fB\fInumber_of_tries\fR\fR
696 Enable retries for transactions with serialization/deadlock errors and set the maximum number of these tries\&. This option can be combined with the
697 \fB\-\-latency\-limit\fR
698 option which limits the total time of all transaction tries; moreover, you cannot use an unlimited number of tries (\-\-max\-tries=0) without
699 \fB\-\-latency\-limit\fR
701 \fB\-\-time\fR\&. The default value is 1 and transactions with serialization/deadlock errors are not retried\&. See
702 Failures and Serialization/Deadlock Retries
703 for more information about retrying such transactions\&.
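The interaction of these limits can be sketched as follows (an illustrative Python model, not pgbench's implementation; attempt stands for one execution of the transaction)\&.

```python
import time

def run_with_retries(attempt, max_tries, latency_limit_ms=None):
    """Retry a transaction that fails with a serialization/deadlock
    error until it commits, the number of tries reaches max_tries
    (0 means unlimited), or the total time of all tries exceeds
    latency_limit_ms (if given)."""
    tries = 0
    start = time.monotonic()
    while True:
        tries += 1
        if attempt():                    # True: the transaction committed
            return ("success", tries)
        if max_tries and tries >= max_tries:
            return ("failed", tries)     # retry limit exhausted
        if latency_limit_ms is not None and \
                (time.monotonic() - start) * 1000.0 > latency_limit_ms:
            return ("failed", tries)     # total time of tries exceeded
```

With the default of 1 try, a failing transaction is never retried; with max_tries of 0, only the latency limit (or the overall \fB\-\-time\fR budget) ends the retrying, which is why an unlimited try count requires one of those options\&.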
706 \fB\-\-progress\-timestamp\fR
708 When showing progress (option
709 \fB\-P\fR), use a timestamp (Unix epoch) instead of the number of seconds since the beginning of the run\&. The unit is in seconds, with millisecond precision after the dot\&. This helps compare logs generated by various tools\&.
712 \fB\-\-random\-seed=\fR\fIseed\fR
714 Set random generator seed\&. Seeds the system random number generator, which then produces a sequence of initial generator states, one for each thread\&. Values for
718 (the default, the seed is based on the current time),
720 (use a strong random source, failing if none is available), or an unsigned decimal integer value\&. The random generator is invoked explicitly from a pgbench script (random\&.\&.\&.
721 functions) or implicitly (for instance option
723 uses it to schedule transactions)\&. When explicitly set, the value used for seeding is shown on the terminal\&. Any value allowed for
725 may also be provided through the environment variable
726 PGBENCH_RANDOM_SEED\&. To ensure that the provided seed impacts all possible uses, put this option first or use the environment variable\&.
728 Setting the seed explicitly makes it possible to reproduce a
730 run exactly, as far as random numbers are concerned\&. As the random state is managed per thread, this means the exact same
732 run for an identical invocation if there is one client per thread and there are no external or data dependencies\&. From a statistical viewpoint reproducing runs exactly is a bad idea because it can hide the performance variability or improve performance unduly, e\&.g\&., by hitting the same pages as a previous run\&. However, it may also be of great help for debugging, for instance re\-running a tricky case which leads to an error\&. Use wisely\&.
735 \fB\-\-sampling\-rate=\fR\fB\fIrate\fR\fR
737 Sampling rate, used when writing data into the log, to reduce the amount of log generated\&. If this option is given, only the specified fraction of transactions are logged\&. 1\&.0 means all transactions will be logged, 0\&.05 means only 5% of the transactions will be logged\&.
739 Remember to take the sampling rate into account when processing the log file\&. For example, when computing TPS values, you need to multiply the numbers accordingly (e\&.g\&., with 0\&.01 sample rate, you\*(Aqll only get 1/100 of the actual TPS)\&.
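For example, the correction can be applied like this (illustrative Python; the logged count and duration are hypothetical)\&.

```python
# Scaling logged counts back up by the sampling rate: with
# --sampling-rate=0.01, roughly 1 transaction in 100 reaches the log.
def estimated_tps(logged_transactions, duration_s, sampling_rate):
    return logged_transactions / duration_s / sampling_rate

print(round(estimated_tps(120, 60.0, 0.01)))   # 200: 2 logged/s scaled by 100
```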
742 \fB\-\-show\-script=\fR\fIscriptname\fR
744 Show the actual code of builtin script
746 on stderr, and exit immediately\&.
749 \fB\-\-verbose\-errors\fR
751 Print messages about all errors and failures (errors without retrying), including which retry limit was exceeded and by how much, for serialization/deadlock failures\&. (Note that in this case the volume of output can increase significantly\&.) See
752 Failures and Serialization/Deadlock Retries
753 for more information\&.
758 also accepts the following common command\-line arguments for connection parameters and other common settings:
762 Print debugging output\&.
765 \fB\-h\fR \fIhostname\fR
767 \fB\-\-host=\fR\fIhostname\fR
769 The database server\*(Aqs host name
774 \fB\-\-port=\fR\fIport\fR
776 The database server\*(Aqs port number
779 \fB\-U\fR \fIlogin\fR
781 \fB\-\-username=\fR\fIlogin\fR
783 The user name to connect as
801 command line arguments, and exit\&.
805 A successful run will exit with status 0\&. Exit status 1 indicates static problems such as invalid command\-line options, or internal errors that are supposed to never occur\&. Early errors that occur when starting the benchmark, such as initial connection failures, also exit with status 1\&. Errors during the run, such as database errors or problems in the script, will result in exit status 2\&. In the latter case,
807 will print partial results if
808 \fB\-\-exit\-on\-abort\fR
809 option is not specified\&.
820 Default connection parameters\&.
823 This utility, like most other
825 utilities, uses the environment variables supported by
830 The environment variable
832 specifies whether to use color in diagnostic messages\&. Possible values are
838 .SS "What Is the \(lqTransaction\(rq Actually Performed in pgbench?"
841 executes test scripts chosen randomly from a specified list\&. The scripts may include built\-in scripts specified with
843 and user\-provided scripts specified with
844 \fB\-f\fR\&. Each script may be given a relative weight specified after an
846 so as to change its selection probability\&. The default weight is
847 1\&. Scripts with a weight of
851 The default built\-in transaction script (also invoked with
852 \fB\-b tpcb\-like\fR) issues seven commands per transaction over randomly chosen
857 delta\&. The scenario is inspired by the TPC\-B benchmark, but is not actually TPC\-B, hence the name\&.
878 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
889 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
900 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
911 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
922 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
939 \fB\-N\fR), steps 4 and 5 aren\*(Aqt included in the transaction\&. This will avoid update contention on these tables, but it makes the test case even less like TPC\-B\&.
950 has support for running custom benchmark scenarios by replacing the default transaction script (described above) with a transaction script read from a file (\fB\-f\fR
951 option)\&. In this case a
953 counts as one execution of a script file\&.
955 A script file contains one or more SQL commands terminated by semicolons\&. Empty lines and lines beginning with
957 are ignored\&. Script files can also contain
958 \(lqmeta commands\(rq, which are interpreted by
960 itself, as described below\&.
966 .nr an-no-space-flag 1
976 9\&.6, SQL commands in script files were terminated by newlines, and so they could not be continued across lines\&. Now a semicolon is
978 to separate consecutive SQL commands (though an SQL command does not need one if it is followed by a meta command)\&. If you need to create a script file that works with both old and new versions of
979 pgbench, be sure to write each SQL command on a single line ending with a semicolon\&.
983 scripts do not contain incomplete blocks of SQL transactions\&. If at runtime the client reaches the end of the script without completing the last transaction block, it will be aborted\&.
987 There is a simple variable\-substitution facility for script files\&. Variable names must consist of letters (including non\-Latin letters), digits, and underscores, with the first character not being a digit\&. Variables can be set by the command\-line
989 option, explained above, or by the meta commands explained below\&. In addition to any variables preset by
991 command\-line options, there are a few variables that are preset automatically, listed in
992 Table\ \&301\&. A value specified for these variables using
994 takes precedence over the automatic presets\&. Once set, a variable\*(Aqs value can be inserted into an SQL command by writing
995 :\fIvariablename\fR\&. When running more than one client session, each session has its own set of variables\&.
997 supports up to 255 variable uses in one statement\&.
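The substitution rule above can be sketched in Python (a simplified, illustrative model: it handles only ASCII names and is not aware of SQL quoting, whereas pgbench's parser also accepts non\-Latin letters)\&.

```python
import re

# A toy model of :variable substitution: a reference is a colon
# followed by a letter or underscore, then letters, digits, or
# underscores. Unknown references are left untouched.
def substitute(sql, variables):
    def repl(match):
        name = match.group(1)
        if name in variables:
            return str(variables[name])
        return match.group(0)
    return re.sub(r":([A-Za-z_][A-Za-z0-9_]*)", repl, sql)

print(substitute("SELECT abalance FROM pgbench_accounts WHERE aid = :aid;",
                 {"aid": 42}))
```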
1000 .nr an-no-space-flag 1
1003 .B Table\ \&301.\ \&pgbench Automatic Variables
1020 unique number identifying the client session (starts from zero)
1025 seed used in hash and pseudorandom permutation functions by default
1030 random generator seed (unless overwritten with \fB\-D\fR)
1035 current scale factor
1040 Script file meta commands begin with a backslash (\e) and normally extend to the end of the line, although they can be continued to additional lines by writing backslash\-return\&. Arguments to a meta command are separated by white space\&. These meta commands are supported:
1042 \egset [\fIprefix\fR] \easet [\fIprefix\fR]
1044 These commands may be used to end SQL queries, taking the place of the terminating semicolon (;)\&.
1048 command is used, the preceding SQL query is expected to return one row, the columns of which are stored into variables named after column names, and prefixed with
1054 command is used, all combined SQL queries (separated by
1055 \e;) have their columns stored into variables named after column names, and prefixed with
1057 if provided\&. If a query returns no row, no assignment is made and the variable can be tested for existence to detect this\&. If a query returns more than one row, the last value is kept\&.
1062 cannot be used in pipeline mode, since the query results are not yet available by the time the commands would need them\&.
1064 The following example puts the final account balance from the first query into variable
1065 \fIabalance\fR, and fills variables
1069 with integers from the third query\&. The result of the second query is discarded\&. The results of the last two combined queries are stored in variables
1078 UPDATE pgbench_accounts
1079 SET abalance = abalance + :delta
1081 RETURNING abalance \egset
1082 \-\- compound of two queries
1084 SELECT 2 AS two, 3 AS three \egset p_
1085 SELECT 4 AS four \e; SELECT 5 AS five \easet
1092 \eif \fIexpression\fR
1094 \eelif \fIexpression\fR
1100 This group of commands implements nestable conditional blocks, similarly to
1102 \eif \fIexpression\fR\&. Conditional expressions are identical to those with
1103 \eset, with non\-zero values interpreted as true\&.
1106 \eset \fIvarname\fR \fIexpression\fR
1110 to a value calculated from
1111 \fIexpression\fR\&. The expression may contain the
1113 constant, Boolean constants
1116 FALSE, integer constants such as
1117 5432, double constants such as
1118 3\&.14159, references to variables
1119 :\fIvariablename\fR,
1121 with their usual SQL precedence and associativity,
1123 CASE generic conditional expressions
1126 Functions and most operators return
1132 For conditional purposes, non\-zero numerical values are
1133 TRUE, zero numerical values and
1138 Too large or small integer and double constants, as well as integer arithmetic operators (+,
1142 /) raise errors on overflows\&.
1146 clause is provided to a
1147 CASE, the default value is
1156 \eset ntellers 10 * :scale
1157 \eset aid (1021 * random(1, 100000 * :scale)) % \e
1158 (100000 * :scale) + 1
1159 \eset divx CASE WHEN :x <> 0 THEN :y/:x ELSE NULL END
1166 \esleep \fInumber\fR [ us | ms | s ]
1168 Causes script execution to sleep for the specified duration in microseconds (us), milliseconds (ms) or seconds (s)\&. If the unit is omitted then seconds are the default\&.
1170 can be either an integer constant or a
1172 reference to a variable having an integer value\&.
1187 \esetshell \fIvarname\fR \fIcommand\fR [ \fIargument\fR \&.\&.\&. ]
1191 to the result of the shell command
1194 \fIargument\fR(s)\&. The command must return an integer value through its standard output\&.
1199 can be either a text constant or a
1201 reference to a variable\&. If you want to use an
1203 starting with a colon, write an additional colon at the beginning of
1212 \esetshell variable_to_be_assigned command literal_argument :variable ::literal_starting_with_colon
1219 \eshell \fIcommand\fR [ \fIargument\fR \&.\&.\&. ]
1222 \esetshell, but the result of the command is discarded\&.
1230 \eshell command literal_argument :variable ::literal_starting_with_colon
1243 This group of commands implements pipelining of SQL statements\&. A pipeline must begin with a
1244 \fB\estartpipeline\fR
1246 \fB\eendpipeline\fR\&. In between there may be any number of
1247 \fB\esyncpipeline\fR
1248 commands, each of which sends a
1250 without ending the ongoing pipeline and flushing the send buffer\&. In pipeline mode, statements are sent to the server without waiting for the results of previous statements\&. See
1252 for more details\&. Pipeline mode requires the use of extended query protocol\&.
1254 .SS "Built\-in Operators"
1256 The arithmetic, bitwise, comparison and logical operators listed in
1260 and may be used in expressions appearing in
1261 \eset\&. The operators are listed in increasing precedence order\&. Except as noted, operators taking two numeric inputs will produce a double value if either input is double, otherwise they produce an integer result\&.
1264 .nr an-no-space-flag 1
1267 .B Table\ \&302.\ \&pgbench Operators
1350 IS [NOT] (NULL|TRUE|FALSE)
1437 Less than or equal to
1465 Greater than or equal to
1604 Division (truncates the result towards zero if both inputs are integers)
1639 .SS "Built\-In Functions"
1641 The functions listed in
1645 and may be used in expressions appearing in
1649 .nr an-no-space-flag 1
1652 .B Table\ \&303.\ \&pgbench Functions
1713 Prints the argument to
1714 stderr, and returns the argument\&.
1745 raised to the given power)
1749 → 2\&.718281828459045
1760 if any argument is double, else
1764 Selects the largest value among the arguments\&.
1767 greatest(5, 4, 3, 2)
1780 This is an alias for
1781 \fBhash_murmur2\fR\&.
1785 → \-5817877081768721676
1798 \m[blue]\fBFNV\-1a hash\fR\m[]\&.
1801 hash_fnv1a(10, 5432)
1802 → \-7793829335365542153
1815 \m[blue]\fBMurmurHash2 hash\fR\m[]\&.
1818 hash_murmur2(10, 5432)
1819 → \-5817877081768721676
1845 if any argument is double, else
1849 Selects the smallest value among the arguments\&.
1852 least(5, 4, 3, 2\&.1)
1867 ln(2\&.718281828459045)
1899 \fIi\fR, in the range
1900 [0, size)\&. This is the new position of
1903 \fIsize\fR) in a pseudorandom permutation of the integers
1904 0\&.\&.\&.size\-1, parameterized by
1905 \fIseed\fR, see below\&.
1909 → an integer between 0 and 3
1918 Approximate value of
1923 → 3\&.14159265358979323846
1944 raised to the power of
1961 Computes a uniformly\-distributed random integer in
1966 → an integer between 1 and 10
1970 \fBrandom_exponential\fR
1979 Computes an exponentially\-distributed random integer in
1980 [lb, ub], see below\&.
1983 random_exponential(1, 10, 3\&.0)
1984 → an integer between 1 and 10
1988 \fBrandom_gaussian\fR
1997 Computes a Gaussian\-distributed random integer in
1998 [lb, ub], see below\&.
2001 random_gaussian(1, 10, 2\&.5)
2002 → an integer between 1 and 10
2006 \fBrandom_zipfian\fR
2015 Computes a Zipfian\-distributed random integer in
2016 [lb, ub], see below\&.
2019 random_zipfian(1, 10, 1\&.5)
2020 → an integer between 1 and 10
function generates values using a uniform distribution, that is, all the values are drawn within the specified range with equal probability\&. The
2047 functions require an additional double parameter which determines the precise shape of the distribution\&.
2051 \h'-04'\(bu\h'+03'\c
2057 For an exponential distribution,
2059 controls the distribution by truncating a quickly\-decreasing exponential distribution at
2060 \fIparameter\fR, and then projecting onto integers between the bounds\&. To be precise, with
2066 f(x) = exp(\-parameter * (x \- min) / (max \- min + 1)) / (1 \- exp(\-parameter))
Then value \fIi\fR between \fImin\fR and \fImax\fR inclusive is drawn with probability: f(i) \- f(i + 1)\&.
2081 Intuitively, the larger the
2082 \fIparameter\fR, the more frequently values close to
2084 are accessed, and the less frequently values close to
2086 are accessed\&. The closer to 0
2088 is, the flatter (more uniform) the access distribution\&. A crude approximation of the distribution is that the most frequent 1% values in the range, close to
2089 \fImin\fR, are drawn
2090 \fIparameter\fR% of the time\&. The
2092 value must be strictly positive\&.
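The exponential formula can be checked numerically. This Python sketch computes the per-value draw probabilities f(i) \- f(i + 1) for random_exponential(1, 10, 3\&.0), relying only on the density quoted above:

```python
import math

def exp_draw_probs(lo, hi, parameter):
    """Per-value draw probabilities for random_exponential(lo, hi, parameter),
    using the quoted density: value i is drawn with probability f(i) - f(i+1)."""
    def f(x):
        return (math.exp(-parameter * (x - lo) / (hi - lo + 1))
                / (1.0 - math.exp(-parameter)))
    return [f(i) - f(i + 1) for i in range(lo, hi + 1)]

probs = exp_draw_probs(1, 10, 3.0)
# The probabilities telescope to f(lo) - f(hi + 1) = 1 exactly, and they
# decrease monotonically, so values near the lower bound dominate.
```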
2097 \h'-04'\(bu\h'+03'\c
2103 For a Gaussian distribution, the interval is mapped onto a standard normal distribution (the classical bell\-shaped Gaussian curve) truncated at
2107 on the right\&. Values in the middle of the interval are more likely to be drawn\&. To be precise, if
2109 is the cumulative distribution function of the standard normal distribution, with mean
2112 (max + min) / 2\&.0, with
2118 f(x) = PHI(2\&.0 * parameter * (x \- mu) / (max \- min + 1)) /
2119 (2\&.0 * PHI(parameter) \- 1)
Then value \fIi\fR between \fImin\fR and \fImax\fR inclusive is drawn with probability:
2132 f(i + 0\&.5) \- f(i \- 0\&.5)\&. Intuitively, the larger the
2133 \fIparameter\fR, the more frequently values close to the middle of the interval are drawn, and the less frequently values close to the
2137 bounds\&. About 67% of values are drawn from the middle
2138 1\&.0 / parameter, that is a relative
2140 around the mean, and 95% in the middle
2141 2\&.0 / parameter, that is a relative
2143 around the mean; for instance, if
2145 is 4\&.0, 67% of values are drawn from the middle quarter (1\&.0 / 4\&.0) of the interval (i\&.e\&., from
3\&.0 / 8\&.0 to 5\&.0 / 8\&.0) and 95% from the middle half (2\&.0 / 4\&.0) of the interval (second and third quartiles)\&. The minimum allowed \fIparameter\fR value is 2\&.0\&.
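The Gaussian mapping can likewise be verified numerically. This sketch implements PHI via the error function and computes the draw probabilities f(i + 0\&.5) \- f(i \- 0\&.5) from the formulas above:

```python
import math

def gauss_draw_probs(lo, hi, parameter):
    """Per-value draw probabilities for random_gaussian(lo, hi, parameter),
    using the quoted f(x) with mu = (hi + lo) / 2.0 and the standard
    normal CDF PHI expressed through the error function."""
    mu = (hi + lo) / 2.0

    def PHI(x):
        return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

    def f(x):
        return (PHI(2.0 * parameter * (x - mu) / (hi - lo + 1))
                / (2.0 * PHI(parameter) - 1.0))

    return [f(i + 0.5) - f(i - 0.5) for i in range(lo, hi + 1)]

probs = gauss_draw_probs(1, 10, 2.5)
# The probabilities sum to 1 (the truncation is exact) and peak at the
# middle of the interval, falling off symmetrically toward the bounds.
```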
2155 \h'-04'\(bu\h'+03'\c
2162 generates a bounded Zipfian distribution\&.
2164 defines how skewed the distribution is\&. The larger the
\fIparameter\fR, the more frequently values closer to the beginning of the interval are drawn\&. The distribution is such that, assuming the range starts from 1, the ratio of the probability of drawing
\fIk\fR versus drawing \fIk\fR+1 is
((\fIk\fR+1)/\fIk\fR)**\fIparameter\fR\&. For example,
random_zipfian(1, \&.\&.\&., 2\&.5)
produces the value 1 about
(2/1)**2\&.5 = 5\&.66
2176 times more frequently than
2177 2, which itself is produced
2178 (3/2)**2\&.5 = 2\&.76
2179 times more frequently than
2182 pgbench\*(Aqs implementation is based on "Non\-Uniform Random Variate Generation", Luc Devroye, p\&. 550\-551, Springer 1986\&. Due to limitations of that algorithm, the
\fIparameter\fR value is restricted to the range [1\&.001, 1000]\&.
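The stated ratio implies that, when the range starts at 1, value k is drawn with probability proportional to 1 / k**parameter. A short sketch confirms the worked numbers from the text:

```python
def zipf_weight(k, parameter):
    """Unnormalized draw weight for random_zipfian over a range starting
    at 1: proportional to 1 / k**parameter, which reproduces the stated
    ratio ((k+1)/k)**parameter between successive values."""
    return 1.0 / k ** parameter

s = 2.5
ratio_1_vs_2 = zipf_weight(1, s) / zipf_weight(2, s)  # (2/1)**2.5, about 5.66
ratio_2_vs_3 = zipf_weight(2, s) / zipf_weight(3, s)  # (3/2)**2.5, about 2.76
```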
2191 .nr an-no-space-flag 1
2199 When designing a benchmark which selects rows non\-uniformly, be aware that the rows chosen may be correlated with other data such as IDs from a sequence or the physical row ordering, which may skew performance measurements\&.
2201 To avoid this, you may wish to use the
2203 function, or some other additional step with similar effect, to shuffle the selected rows and remove such correlations\&.
accept an input value and an optional seed parameter\&. In case the seed isn\*(Aqt provided, the value of
:default_seed
is used, which is initialized randomly unless set by the command\-line \fB\-D\fR option\&.
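For illustration, hash_fnv1a can be re-created in Python from the standard 64-bit FNV-1a constants. How pgbench folds the seed in (XOR into the offset basis) and the byte order (least significant first) are assumptions of this sketch, checked against the hash_fnv1a(10, 5432) example value shown earlier:

```python
FNV64_PRIME = 0x100000001b3
FNV64_OFFSET_BASIS = 0xcbf29ce484222325
MASK64 = (1 << 64) - 1

def fnv1a_64(val, seed=0):
    """Sketch of hash_fnv1a: 64-bit FNV-1a over the eight bytes of the
    integer input, least significant byte first.  Folding the seed into
    the offset basis with XOR is an assumption, not documented behavior."""
    h = (FNV64_OFFSET_BASIS ^ seed) & MASK64
    v = val & MASK64                      # reinterpret input as unsigned
    for _ in range(8):
        h = ((h ^ (v & 0xff)) * FNV64_PRIME) & MASK64
        v >>= 8
    # reinterpret the unsigned result as a signed 64-bit integer,
    # matching the signed values pgbench prints
    return h - (1 << 64) if h >= (1 << 63) else h
```

If the assumptions hold, fnv1a_64(10, 5432) reproduces the value listed in the function table above.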
2219 accepts an input value, a size, and an optional seed parameter\&. It generates a pseudorandom permutation of integers in the range
2220 [0, size), and returns the index of the input value in the permuted values\&. The permutation chosen is parameterized by the seed, which defaults to
2221 :default_seed, if not specified\&. Unlike the hash functions,
2223 ensures that there are no collisions or holes in the output values\&. Input values outside the interval are interpreted modulo the size\&. The function raises an error if the size is not positive\&.
\fBpermute\fR
can be used to scatter the distribution of non\-uniform random functions such as
\fBrandom_zipfian\fR or \fBrandom_exponential\fR
so that values drawn more often are not trivially correlated\&. For instance, the following pgbench
script simulates a possible real\-world workload typical for social media and blogging platforms where a few accounts generate excessive load:
2238 \eset r random_zipfian(1, :size, 1\&.07)
2239 \eset k 1 + permute(:r, :size)
In some cases several distinct distributions are needed which don\*(Aqt correlate with each other, and this is where the optional seed parameter comes in handy:
2251 \eset k1 1 + permute(:r, :size, :default_seed + 123)
2252 \eset k2 1 + permute(:r, :size, :default_seed + 321)
2258 A similar behavior can also be approximated with
2266 \eset r random_zipfian(1, 100 * :size, 1\&.07)
2267 \eset k 1 + abs(hash(:r)) % :size
However, since \fBhash\fR generates collisions, some values will not be reachable and others will be more frequent than expected from the original distribution\&.
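The contrast can be sketched in Python. The seeded shuffle below is only a stand-in with the same key property as permute (a bijection on [0, size)); it does not reproduce pgbench's actual permutation algorithm:

```python
import random

def toy_permute(i, size, seed=0):
    """Stand-in for permute(): returns the position of i in a seeded
    pseudorandom permutation of 0..size-1.  Being a bijection, it has
    no collisions and no holes; out-of-range inputs wrap modulo size."""
    perm = list(range(size))
    random.Random(seed).shuffle(perm)
    return perm[i % size]

size = 10
outputs = sorted(toy_permute(i, size) for i in range(size))
# Every value in [0, size) appears exactly once.  By contrast, the
# abs(hash(:r)) % :size trick can map two inputs to one slot while
# leaving other slots unreachable.
```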
2277 As an example, the full definition of the built\-in TPC\-B\-like transaction is:
2283 \eset aid random(1, 100000 * :scale)
2284 \eset bid random(1, 1 * :scale)
2285 \eset tid random(1, 10 * :scale)
2286 \eset delta random(\-5000, 5000)
2288 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
2289 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
2290 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
2291 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
2292 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
2299 This script allows each iteration of the transaction to reference different, randomly\-chosen rows\&. (This example also shows why it\*(Aqs important for each client session to have its own variables \(em otherwise they\*(Aqd not be independently touching different rows\&.)
2300 .SS "Per\-Transaction Logging"
With the \fB\-l\fR option (but without the
\fB\-\-aggregate\-interval\fR
option), pgbench
writes information about each transaction to a log file\&. The log file will be named
\fIprefix\fR\&.\fInnn\fR, where
\fIprefix\fR
defaults to pgbench_log, and
\fInnn\fR
is the PID of the pgbench
process\&. The prefix can be changed by using the
2317 \fB\-\-log\-prefix\fR
option\&. If the \fB\-j\fR option is 2 or higher, so that there are multiple worker threads, each will have its own log file\&. The first worker will use the same name for its log file as in the standard single worker case\&. The additional log files for the other workers will be named
2321 \fIprefix\fR\&.\fInnn\fR\&.\fImmm\fR, where
\fImmm\fR
is a sequential number for each worker starting with 1\&.
2325 Each line in a log file describes one transaction\&. It contains the following space\-separated fields:
2329 identifies the client session that ran the transaction
2332 \fItransaction_no\fR
2334 counts how many transactions have been run by that session
2339 transaction\*(Aqs elapsed time, in microseconds
2344 identifies the script file that was used for the transaction (useful when multiple scripts are specified with
2352 transaction\*(Aqs completion time, as a Unix\-epoch time stamp
2357 fractional\-second part of transaction\*(Aqs completion time, in microseconds
transaction start delay, that is the difference between the transaction\*(Aqs scheduled start time and the time it actually started, in microseconds (present only if \fB\-\-rate\fR is specified)
2369 count of retries after serialization or deadlock errors during the transaction (present only if
2370 \fB\-\-max\-tries\fR
2371 is not equal to one)
2377 \fB\-\-latency\-limit\fR
2380 for a skipped transaction will be reported as
skipped\&. If the transaction ends with a failure, its \fItime\fR will be reported as
failed\&. If you use the
2385 \fB\-\-failures\-detailed\fR
2388 of the failed transaction will be reported as
serialization or deadlock, depending on the type of failure (see
2393 Failures and Serialization/Deadlock Retries
2394 for more information)\&.
2396 Here is a snippet of a log file generated in a single\-client run:
2402 0 199 2241 0 1175850568 995598
2403 0 200 2465 0 1175850568 998079
2404 0 201 2513 0 1175850569 608
2405 0 202 2038 0 1175850569 2663
2411 Another example with
2414 \-\-latency\-limit=5
(note the additional \fIschedule_lag\fR column):
2423 0 81 4621 0 1412881037 912698 3005
2424 0 82 6173 0 1412881037 914578 4304
2425 0 83 skipped 0 1412881037 914578 5217
2426 0 83 skipped 0 1412881037 914578 5099
2427 0 83 4722 0 1412881037 916203 3108
2428 0 84 4142 0 1412881037 918023 2333
2429 0 85 2465 0 1412881037 919759 740
2435 In this example, transaction 82 was late, because its latency (6\&.173 ms) was over the 5 ms limit\&. The next two transactions were skipped, because they were already late before they were even started\&.
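Log lines like these can be split mechanically. A minimal Python parser sketch (field names follow the list earlier in this section; any trailing fields are schedule_lag and/or retries, depending on the options in use):

```python
def parse_log_line(line):
    """Parse one pgbench per-transaction log line.  The time field is
    kept as a string because it may hold 'skipped', 'failed', or a
    failure type instead of a microsecond count."""
    parts = line.split()
    return {
        "client_id": int(parts[0]),
        "transaction_no": int(parts[1]),
        "time": parts[2],
        "script_no": int(parts[3]),
        "time_epoch": int(parts[4]),
        "time_us": int(parts[5]),
        # schedule_lag and/or retries, present only with --rate or
        # --max-tries respectively
        "extra": [int(p) for p in parts[6:]],
    }

late = parse_log_line("0 82 6173 0 1412881037 914578 4304")
skipped = parse_log_line("0 83 skipped 0 1412881037 914578 5217")
```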
The following example shows a snippet of a log file with failures and retries, with the maximum number of tries set to 10 (note the additional \fIretries\fR column):
2445 3 0 47423 0 1499414498 34501 3
2446 3 1 8333 0 1499414498 42848 0
2447 3 2 8358 0 1499414498 51219 0
2448 4 0 72345 0 1499414498 59433 6
2449 1 3 41718 0 1499414498 67879 4
2450 1 4 8416 0 1499414498 76311 0
2451 3 3 33235 0 1499414498 84469 3
2452 0 0 failed 0 1499414498 84905 9
2453 2 0 failed 0 1499414498 86248 9
2454 3 4 8307 0 1499414498 92788 0
2461 \fB\-\-failures\-detailed\fR
option is used, the type of failure is reported in the \fItime\fR field like this:
2470 3 0 47423 0 1499414498 34501 3
2471 3 1 8333 0 1499414498 42848 0
2472 3 2 8358 0 1499414498 51219 0
2473 4 0 72345 0 1499414498 59433 6
2474 1 3 41718 0 1499414498 67879 4
2475 1 4 8416 0 1499414498 76311 0
2476 3 3 33235 0 1499414498 84469 3
2477 0 0 serialization 0 1499414498 84905 9
2478 2 0 serialization 0 1499414498 86248 9
2479 3 4 8307 0 1499414498 92788 0
2485 When running a long test on hardware that can handle a lot of transactions, the log files can become very large\&. The
2486 \fB\-\-sampling\-rate\fR
2487 option can be used to log only a random sample of transactions\&.
2488 .SS "Aggregated Logging"
With the
\fB\-\-aggregate\-interval\fR
2492 option, a different format is used for the log files\&. Each log line describes one aggregation interval\&. It contains the following space\-separated fields:
2494 \fIinterval_start\fR
2496 start time of the interval, as a Unix\-epoch time stamp
2499 \fInum_transactions\fR
2501 number of transactions within the interval
2506 sum of transaction latencies
2511 sum of squares of transaction latencies
2516 minimum transaction latency
2521 maximum transaction latency
2526 sum of transaction start delays (zero unless
2533 sum of squares of transaction start delays (zero unless
2540 minimum transaction start delay (zero unless
2547 maximum transaction start delay (zero unless
2554 number of transactions skipped because they would have started too late (zero unless
2557 \fB\-\-latency\-limit\fR
2563 number of retried transactions (zero unless
2564 \fB\-\-max\-tries\fR
2565 is not equal to one)
2570 number of retries after serialization or deadlock errors (zero unless
2571 \fB\-\-max\-tries\fR
2572 is not equal to one)
2575 \fIserialization_failures\fR
2577 number of transactions that got a serialization error and were not retried afterwards (zero unless
2578 \fB\-\-failures\-detailed\fR
2582 \fIdeadlock_failures\fR
2584 number of transactions that got a deadlock error and were not retried afterwards (zero unless
2585 \fB\-\-failures\-detailed\fR
2589 Here is some example output generated with this option:
2595 \fBpgbench \-\-aggregate\-interval=10 \-\-time=20 \-\-client=10 \-\-log \-\-rate=1000 \-\-latency\-limit=10 \-\-failures\-detailed \-\-max\-tries=10 test\fR
2597 1650260552 5178 26171317 177284491527 1136 44462 2647617 7321113867 0 9866 64 7564 28340 4148 0
2598 1650260562 4808 25573984 220121792172 1171 62083 3037380 9666800914 0 9998 598 7392 26621 4527 0
2604 Notice that while the plain (unaggregated) log format shows which script was used for each transaction, the aggregated format does not\&. Therefore if you need per\-script data, you need to aggregate the data on your own\&.
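Average latency and its standard deviation can be recovered from the sum and sum-of-squares fields. A sketch using the transaction count and latency sums from the first interval of the example output above:

```python
import math

def latency_stats(num_transactions, latency_sum, latency_2_sum):
    """Mean and population standard deviation of transaction latency,
    in microseconds, recovered from one aggregated log line."""
    mean = latency_sum / num_transactions
    variance = latency_2_sum / num_transactions - mean ** 2
    return mean, math.sqrt(variance)

# Fields taken from the first interval in the example output above.
mean_us, stddev_us = latency_stats(5178, 26171317, 177284491527)
# mean comes out around 5.05 ms with a stddev of roughly 2.9 ms
```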
2605 .SS "Per\-Statement Report"
With the \fB\-r\fR option, pgbench collects the following statistics for each statement:
2615 \h'-04'\(bu\h'+03'\c
\fIlatency\fR \(em elapsed transaction time for each statement\&. pgbench
reports an average value of all successful runs of the statement\&.
2629 \h'-04'\(bu\h'+03'\c
2635 The number of failures in this statement\&. See
2636 Failures and Serialization/Deadlock Retries
2637 for more information\&.
2642 \h'-04'\(bu\h'+03'\c
2648 The number of retries after a serialization or a deadlock error in this statement\&. See
2649 Failures and Serialization/Deadlock Retries
2650 for more information\&.
2653 The report displays retry statistics only if the
2654 \fB\-\-max\-tries\fR
2655 option is not equal to 1\&.
2657 All values are computed for each statement executed by every client and are reported after the benchmark has finished\&.
2659 For the default script, the output will look similar to this:
2665 starting vacuum\&.\&.\&.end\&.
2666 transaction type: <builtin: TPC\-B (sort of)>
2669 number of clients: 10
2670 number of threads: 1
2671 maximum number of tries: 1
2672 number of transactions per client: 1000
2673 number of transactions actually processed: 10000/10000
2674 number of failed transactions: 0 (0\&.000%)
2675 number of transactions above the 50\&.0 ms latency limit: 1311/10000 (13\&.110 %)
2676 latency average = 28\&.488 ms
2677 latency stddev = 21\&.009 ms
2678 initial connection time = 69\&.068 ms
2679 tps = 346\&.224794 (without initial connection time)
2680 statement latencies in milliseconds and failures:
2681 0\&.012 0 \eset aid random(1, 100000 * :scale)
2682 0\&.002 0 \eset bid random(1, 1 * :scale)
2683 0\&.002 0 \eset tid random(1, 10 * :scale)
2684 0\&.002 0 \eset delta random(\-5000, 5000)
2686 0\&.834 0 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
2687 0\&.641 0 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
2688 11\&.126 0 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
2689 12\&.961 0 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
2690 0\&.634 0 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
2697 Another example of output for the default script using serializable default transaction isolation level (\fBPGOPTIONS=\*(Aq\-c default_transaction_isolation=serializable\*(Aq pgbench \&.\&.\&.\fR):
2703 starting vacuum\&.\&.\&.end\&.
2704 transaction type: <builtin: TPC\-B (sort of)>
2707 number of clients: 10
2708 number of threads: 1
2709 maximum number of tries: 10
2710 number of transactions per client: 1000
2711 number of transactions actually processed: 6317/10000
2712 number of failed transactions: 3683 (36\&.830%)
2713 number of transactions retried: 7667 (76\&.670%)
2714 total number of retries: 45339
2715 number of transactions above the 50\&.0 ms latency limit: 106/6317 (1\&.678 %)
2716 latency average = 17\&.016 ms
2717 latency stddev = 13\&.283 ms
2718 initial connection time = 45\&.017 ms
2719 tps = 186\&.792667 (without initial connection time)
2720 statement latencies in milliseconds, failures and retries:
2721 0\&.006 0 0 \eset aid random(1, 100000 * :scale)
2722 0\&.001 0 0 \eset bid random(1, 1 * :scale)
2723 0\&.001 0 0 \eset tid random(1, 10 * :scale)
2724 0\&.001 0 0 \eset delta random(\-5000, 5000)
2726 0\&.773 0 1 UPDATE pgbench_accounts SET abalance = abalance + :delta WHERE aid = :aid;
2727 0\&.624 0 0 SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
2728 1\&.098 320 3762 UPDATE pgbench_tellers SET tbalance = tbalance + :delta WHERE tid = :tid;
2729 0\&.582 3363 41576 UPDATE pgbench_branches SET bbalance = bbalance + :delta WHERE bid = :bid;
2730 0\&.465 0 0 INSERT INTO pgbench_history (tid, bid, aid, delta, mtime) VALUES (:tid, :bid, :aid, :delta, CURRENT_TIMESTAMP);
2737 If multiple script files are specified, all statistics are reported separately for each script file\&.
2739 Note that collecting the additional timing information needed for per\-statement latency computation adds some overhead\&. This will slow average execution speed and lower the computed TPS\&. The amount of slowdown varies significantly depending on platform and hardware\&. Comparing average TPS values with and without latency reporting enabled is a good way to measure if the timing overhead is significant\&.
2740 .SS "Failures and Serialization/Deadlock Retries"
In pgbench, there are three main types of errors:
2747 \h'-04'\(bu\h'+03'\c
2753 Errors of the main program\&. They are the most serious and always result in an immediate exit from
2755 with the corresponding error message\&. They include:
2759 \h'-04'\(bu\h'+03'\c
2765 errors at the beginning of
2767 (e\&.g\&. an invalid option value);
2772 \h'-04'\(bu\h'+03'\c
2778 errors in the initialization mode (e\&.g\&. the query to create tables for built\-in scripts fails);
2783 \h'-04'\(bu\h'+03'\c
2789 errors before starting threads (e\&.g\&. could not connect to the database server, syntax error in the meta command, thread creation failure);
2794 \h'-04'\(bu\h'+03'\c
2802 errors (which are supposed to never occur\&.\&.\&.)\&.
2808 \h'-04'\(bu\h'+03'\c
Errors when the thread manages its clients (e\&.g\&. the client could not start a connection to the database server / the socket for connecting the client to the database server has become invalid)\&. In such cases all clients of this thread stop while other threads continue to work\&. However, if
2815 \fB\-\-exit\-on\-abort\fR
2816 is specified, all of the threads stop immediately in this case\&.
2821 \h'-04'\(bu\h'+03'\c
2827 Direct client errors\&. They lead to immediate exit from
2829 with the corresponding error message in the case of an internal
2831 error (which are supposed to never occur\&.\&.\&.) or when
2832 \fB\-\-exit\-on\-abort\fR
2833 is specified\&. Otherwise in the worst case they only lead to the abortion of the failed client while other clients continue their run (but some client errors are handled without an abortion of the client and reported separately, see below)\&. Later in this section it is assumed that the discussed errors are only the direct client errors and they are not internal
2838 A client\*(Aqs run is aborted in case of a serious error; for example, the connection with the database server was lost or the end of script was reached without completing the last transaction\&. In addition, if execution of an SQL or meta command fails for reasons other than serialization or deadlock errors, the client is aborted\&. Otherwise, if an SQL command fails with serialization or deadlock errors, the client is not aborted\&. In such cases, the current transaction is rolled back, which also includes setting the client variables as they were before the run of this transaction (it is assumed that one transaction script contains only one transaction; see
2839 What Is the "Transaction" Actually Performed in pgbench?
2840 for more information)\&. Transactions with serialization or deadlock errors are repeated after rollbacks until they complete successfully or reach the maximum number of tries (specified by the
2841 \fB\-\-max\-tries\fR
2842 option) / the maximum time of retries (specified by the
2843 \fB\-\-latency\-limit\fR
2844 option) / the end of benchmark (specified by the
2846 option)\&. If the last trial run fails, this transaction will be reported as failed but the client is not aborted and continues to work\&.
2852 .nr an-no-space-flag 1
2860 Without specifying the
2861 \fB\-\-max\-tries\fR
2862 option, a transaction will never be retried after a serialization or deadlock error because its default value is 1\&. Use an unlimited number of tries (\-\-max\-tries=0) and the
2863 \fB\-\-latency\-limit\fR
2864 option to limit only the maximum time of tries\&. You can also use the
2866 option to limit the benchmark duration under an unlimited number of tries\&.
2868 Be careful when repeating scripts that contain multiple transactions: the script is always retried completely, so successful transactions can be performed several times\&.
Be careful when repeating transactions with shell commands\&. Unlike the results of SQL commands, the results of shell commands are not rolled back, except for the variable value of the \fB\esetshell\fR command\&.
2876 The latency of a successful transaction includes the entire time of transaction execution with rollbacks and retries\&. The latency is measured only for successful transactions and commands but not for failed transactions or commands\&.
2878 The main report contains the number of failed transactions\&. If the
2879 \fB\-\-max\-tries\fR
2880 option is not equal to 1, the main report also contains statistics related to retries: the total number of retried transactions and total number of retries\&. The per\-script report inherits all these fields from the main report\&. The per\-statement report displays retry statistics only if the
2881 \fB\-\-max\-tries\fR
2882 option is not equal to 1\&.
2884 If you want to group failures by basic types in per\-transaction and aggregation logs, as well as in the main and per\-script reports, use the
2885 \fB\-\-failures\-detailed\fR
2886 option\&. If you also want to distinguish all errors and failures (errors without retrying) by type including which limit for retries was exceeded and how much it was exceeded by for the serialization/deadlock failures, use the
2887 \fB\-\-verbose\-errors\fR
2889 .SS "Table Access Methods"
You may specify the table access method
for the pgbench tables\&. The environment variable
2895 specifies database configuration options that are passed to PostgreSQL via the command line (See
Section\ \&19.1.4)\&. For example, a hypothetical default table access method called
\fIwuzza\fR
for the tables that pgbench creates
can be specified with:
2904 PGOPTIONS=\*(Aq\-c default_table_access_method=wuzza\*(Aq
2910 .SS "Good Practices"
2912 It is very easy to use
2914 to produce completely meaningless numbers\&. Here are some guidelines to help you get useful results\&.
Never believe any test that runs for only a few seconds\&. Use the
option to make the run last at least a few minutes, so as to average out noise\&. In some cases you might need hours to get reproducible numbers\&. It\*(Aqs a good idea to try the test run a few times, to find out whether your numbers are reproducible\&.
2924 For the default TPC\-B\-like test scenario, the initialization scale factor (\fB\-s\fR) should be at least as large as the largest number of clients you intend to test (\fB\-c\fR); else you\*(Aqll mostly be measuring update contention\&. There are only
\fB\-s\fR rows in the
pgbench_branches
table, and every transaction wants to update one of them, so
running with \fB\-c\fR substantially higher than \fB\-s\fR will undoubtedly result in lots of transactions blocked waiting for other transactions\&.
2934 The default test scenario is also quite sensitive to how long it\*(Aqs been since the tables were initialized: accumulation of dead rows and dead space in the tables changes the results\&. To understand the results you must keep track of the total number of updates and when vacuuming happens\&. If autovacuum is enabled it can result in unpredictable changes in measured performance\&.
A limitation of pgbench is that it can itself become the bottleneck when trying to test a large number of client sessions\&. This can be alleviated by running
2940 on a different machine from the database server, although low network latency will be essential\&. It might even be useful to run several
2942 instances concurrently, on several client machines, against the same database server\&.
2945 If untrusted users have access to a database that has not adopted a
2946 secure schema usage pattern, do not run
pgbench in that database\&. pgbench
uses unqualified names and does not manipulate the search path\&.