.\" Title: pg_test_timing
.\" Author: The PostgreSQL Global Development Group
.\" Generator: DocBook XSL Stylesheets vsnapshot <http://docbook.sf.net/>
.\" Manual: PostgreSQL 18.0 Documentation
.\" Source: PostgreSQL 18.0
.TH "PG_TEST_TIMING" "1" "2025" "PostgreSQL 18.0" "PostgreSQL 18.0 Documentation"
.\" -----------------------------------------------------------------
.\" * Define some portability stuff
.\" -----------------------------------------------------------------
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.\" http://bugs.debian.org/507673
.\" http://lists.gnu.org/archive/html/groff/2009-02/msg00013.html
.\" ~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
.ie \n(.g .ds Aq \(aq
.el       .ds Aq '
.\" -----------------------------------------------------------------
.\" * set default formatting
.\" -----------------------------------------------------------------
.\" disable hyphenation
.nh
.\" disable justification (adjust text to left margin only)
.ad l
.\" -----------------------------------------------------------------
.\" * MAIN CONTENT STARTS HERE *
.\" -----------------------------------------------------------------
.SH "NAME"
pg_test_timing \- measure timing overhead
.SH "SYNOPSIS"
.HP \w'\fBpg_test_timing\fR\ 'u
\fBpg_test_timing\fR [\fIoption\fR...]
.SH "DESCRIPTION"
.PP
pg_test_timing
is a tool to measure the timing overhead on your system and confirm that the system time never moves backwards\&. Systems that are slow to collect timing data can give less accurate
\fBEXPLAIN ANALYZE\fR
results\&.
.SH "OPTIONS"
.PP
pg_test_timing
accepts the following command\-line options:
.PP
\fB\-d \fR\fB\fIduration\fR\fR
.br
\fB\-\-duration=\fR\fB\fIduration\fR\fR
.RS 4
Specifies the test duration, in seconds\&. Longer durations give slightly better accuracy, and are more likely to discover problems with the system clock moving backwards\&. The default test duration is 3 seconds\&.
.RE
.PP
\fB\-V\fR
.br
\fB\-\-version\fR
.RS 4
Print the
pg_test_timing
version and exit\&.
.RE
.PP
\fB\-?\fR
.br
\fB\-\-help\fR
.RS 4
Show help about
pg_test_timing
command line arguments, and exit\&.
.RE
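.PP
For example, a longer run trades test time for slightly better accuracy\&. The sketch below assumes
pg_test_timing
is installed on your PATH, and falls back to printing the command so it can be tried anywhere:

```shell
# Run a 10-second test instead of the default 3 seconds.
# pg_test_timing ships with PostgreSQL; the fallback echo is only here so
# this sketch degrades gracefully where the tool is not installed.
if command -v pg_test_timing >/dev/null 2>&1; then
  pg_test_timing --duration=10
else
  echo "pg_test_timing --duration=10"
fi
```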
.SH "USAGE"
.SS "Interpreting Results"
.PP
Good results will show most (>90%) individual timing calls take less than one microsecond\&. Average per\-loop overhead will be even lower, below 100 nanoseconds\&. This example from an Intel i7\-860 system using a TSC clock source shows excellent performance:
.sp
.if n \{\
.RS 4
.\}
.nf
Testing timing overhead for 3 seconds\&.
Per loop time including overhead: 35\&.96 ns
Histogram of timing durations:
.fi
.if n \{\
.RE
.\}
.PP
Note that different units are used for the per loop time than the histogram\&. The loop can have resolution within a few nanoseconds (ns), while the individual timing calls can only resolve down to one microsecond (us)\&.
.SS "Measuring Executor Timing Overhead"
.PP
When the query executor is running a statement using
\fBEXPLAIN ANALYZE\fR, individual operations are timed as well as showing a summary\&. The overhead of your system can be checked by counting rows with the
psql
program:
.sp
.if n \{\
.RS 4
.\}
.nf
CREATE TABLE t AS SELECT * FROM generate_series(1,100000);
\etiming
SELECT COUNT(*) FROM t;
EXPLAIN ANALYZE SELECT COUNT(*) FROM t;
.fi
.if n \{\
.RE
.\}
.PP
The i7\-860 system measured here runs the count query in 9\&.8 ms, while the
\fBEXPLAIN ANALYZE\fR
version takes 16\&.6 ms, each processing just over 100,000 rows\&. That 6\&.8 ms difference means the timing overhead per row is 68 ns, about twice what pg_test_timing estimated it would be\&. Even that relatively small amount of overhead is making the fully timed count statement take almost 70% longer\&. On more substantial queries, the timing overhead would be less problematic\&.
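.PP
The arithmetic behind those figures can be replayed directly from the numbers quoted above (9\&.8 ms untimed, 16\&.6 ms timed, about 100,000 rows)\&. This is only a back\-of\-the\-envelope check, not output of the tool itself:

```shell
# Replay the per-row overhead arithmetic from the text:
# 9.8 ms without timing, 16.6 ms with EXPLAIN ANALYZE, ~100,000 rows.
awk 'BEGIN {
    plain_ms = 9.8; timed_ms = 16.6; rows = 100000
    diff_ns = (timed_ms - plain_ms) * 1000000      # milliseconds -> nanoseconds
    printf "per-row overhead: %.0f ns\n", diff_ns / rows
    printf "slowdown: %.0f%%\n", (timed_ms / plain_ms - 1) * 100
}'
```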
.SS "Changing Time Sources"
.PP
On some newer Linux systems, it\*(Aqs possible to change the clock source used to collect timing data at any time\&. A second example shows the slowdown possible from switching to the slower acpi_pm time source, on the same system used for the fast results above:
.sp
.if n \{\
.RS 4
.\}
.nf
# cat /sys/devices/system/clocksource/clocksource0/available_clocksource
# echo acpi_pm > /sys/devices/system/clocksource/clocksource0/current_clocksource
Per loop time including overhead: 722\&.92 ns
Histogram of timing durations:
  < us   % of total      count
.fi
.if n \{\
.RE
.\}
.PP
In this configuration, the sample
\fBEXPLAIN ANALYZE\fR
above takes 115\&.9 ms\&. That\*(Aqs 1061 ns of timing overhead, again a small multiple of what\*(Aqs measured directly by this utility\&. That much timing overhead means the actual query itself is only taking a tiny fraction of the accounted\-for time; most of it is being consumed by overhead instead\&. In this configuration, any
\fBEXPLAIN ANALYZE\fR
totals involving many timed operations would be inflated significantly by timing overhead\&.
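.PP
Again, the 1061 ns figure follows from the numbers quoted above (9\&.8 ms untimed baseline, 115\&.9 ms timed)\&. As a rough check, assuming those measurements:

```shell
# Replay the acpi_pm overhead arithmetic from the text:
# 9.8 ms untimed baseline, 115.9 ms with EXPLAIN ANALYZE, ~100,000 rows.
awk 'BEGIN {
    plain_ms = 9.8; timed_ms = 115.9; rows = 100000
    printf "per-row overhead: %.0f ns\n", (timed_ms - plain_ms) * 1000000 / rows
    printf "share of runtime spent in timing: %.1f%%\n", (1 - plain_ms / timed_ms) * 100
}'
```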
.PP
FreeBSD also allows changing the time source on the fly, and it logs information about the timer selected during boot:
.sp
.if n \{\
.RS 4
.\}
.nf
# dmesg | grep "Timecounter"
Timecounter "ACPI\-fast" frequency 3579545 Hz quality 900
Timecounter "i8254" frequency 1193182 Hz quality 0
Timecounters tick every 10\&.000 msec
Timecounter "TSC" frequency 2531787134 Hz quality 800
# sysctl kern\&.timecounter\&.hardware=TSC
kern\&.timecounter\&.hardware: ACPI\-fast \-> TSC
.fi
.if n \{\
.RE
.\}
.PP
Other systems may only allow setting the time source on boot\&. On older Linux systems the "clock" kernel setting is the only way to make this sort of change\&. And even on some more recent ones, the only option you\*(Aqll see for a clock source is "jiffies"\&. Jiffies are the older Linux software clock implementation, which can have good resolution when it\*(Aqs backed by fast enough timing hardware, as in this example:
.sp
.if n \{\
.RS 4
.\}
.nf
$ cat /sys/devices/system/clocksource/clocksource0/available_clocksource
jiffies
$ dmesg | grep time\&.c
time\&.c: Using 3\&.579545 MHz WALL PM GTOD PIT/TSC timer\&.
time\&.c: Detected 2400\&.153 MHz processor\&.
Testing timing overhead for 3 seconds\&.
Per timing duration including loop overhead: 97\&.75 ns
Histogram of timing durations:
  < us   % of total      count
      1  90\&.23734     27694571
.fi
.if n \{\
.RE
.\}
.SS "Clock Hardware and Timing Accuracy"
.PP
Collecting accurate timing information is normally done on computers using hardware clocks with various levels of accuracy\&. With some hardware, the operating system can pass the system clock time almost directly to programs\&. A system clock can also be derived from a chip that simply provides timing interrupts, periodic ticks at some known time interval\&. In either case, operating system kernels provide a clock source that hides these details\&. But the accuracy of that clock source and how quickly it can return results varies based on the underlying hardware\&.
.PP
Inaccurate time keeping can result in system instability\&. Test any change to the clock source very carefully\&. Operating system defaults are sometimes chosen to favor reliability over best accuracy\&. And if you are using a virtual machine, look into the recommended time sources compatible with it\&. Virtual hardware faces additional difficulties when emulating timers, and there are often per\-operating\-system settings suggested by vendors\&.
.PP
The Time Stamp Counter (TSC) clock source is the most accurate one available on current generation CPUs\&. It\*(Aqs the preferred way to track the system time when it\*(Aqs supported by the operating system and the TSC clock is reliable\&. There are several ways that TSC can fail to provide an accurate timing source, making it unreliable\&. Older systems can have a TSC clock that varies based on the CPU temperature, making it unusable for timing\&. Trying to use TSC on some older multicore CPUs can give a reported time that\*(Aqs inconsistent among multiple cores\&. This can result in the time going backwards, a problem this program checks for\&. And even the newest systems can fail to provide accurate TSC timing with very aggressive power saving configurations\&.
.PP
Newer operating systems may check for the known TSC problems and switch to a slower, more stable clock source when they are seen\&. If your system supports TSC time but doesn\*(Aqt default to that, it may be disabled for a good reason\&. And some operating systems may not detect all the possible problems correctly, or will allow using TSC even in situations where it\*(Aqs known to be inaccurate\&.
.PP
The High Precision Event Timer (HPET) is the preferred timer on systems where it\*(Aqs available and TSC is not accurate\&. The timer chip itself is programmable to allow up to 100 nanosecond resolution, but you may not see that much accuracy in your system clock\&.
.PP
Advanced Configuration and Power Interface (ACPI) provides a Power Management (PM) Timer, which Linux refers to as the acpi_pm\&. The clock derived from acpi_pm will at best provide 300 nanosecond resolution\&.
.PP
Timers used on older PC hardware include the 8254 Programmable Interval Timer (PIT), the real\-time clock (RTC), the Advanced Programmable Interrupt Controller (APIC) timer, and the Cyclone timer\&. These timers aim for millisecond resolution\&.