23.1. Routine Vacuuming

PostgreSQL's VACUUM command has to run on a regular basis for several reasons:

  1. To recover or reuse disk space occupied by updated or deleted rows.

  2. To update data statistics used by the PostgreSQL query planner.

  3. To protect against loss of very old data due to transaction ID wraparound.

The standard form of VACUUM can run in parallel with production database operations. Commands such as SELECT, INSERT, UPDATE, and DELETE will continue to function normally, though you will not be able to modify the definition of a table with commands such as ALTER TABLE ADD COLUMN while it is being vacuumed. Also, VACUUM generates a substantial amount of I/O traffic, which can cause poor performance for other active sessions. There are configuration parameters that can be adjusted to reduce the performance impact of background vacuuming — see Section 18.4.4.
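
For example, a lazy VACUUM of one table can be issued from any session, and the cost-based delay settings can be adjusted in that session first. A minimal sketch, where mytable is a placeholder name and the values shown are illustrative, not recommendations:

SET vacuum_cost_delay = 20;   -- sleep 20 ms whenever the cost limit is reached
SET vacuum_cost_limit = 200;  -- accumulated I/O cost that triggers the sleep
VACUUM VERBOSE mytable;       -- lazy vacuum; runs alongside normal queries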

Fortunately, the autovacuum daemon, described in Section 23.1.4, monitors table activity and performs VACUUMs when necessary. Autovacuum works dynamically, so it is often better than administrator-scheduled vacuuming.

23.1.1. Recovering Disk Space

In normal PostgreSQL operation, an UPDATE or DELETE of a row does not immediately remove the old version of the row. This approach is necessary to gain the benefits of multiversion concurrency control (see Chapter 13): a row version must not be deleted while it is still potentially visible to other transactions. But eventually, an outdated or deleted row version is no longer of interest to any transaction. The space it occupies must be reclaimed for reuse by new rows, to avoid infinite growth of disk space requirements. This is done by running VACUUM.

There are two variants of the VACUUM command. The first form, known as "lazy vacuum" or just VACUUM, marks dead data in tables and indexes for future reuse; it does not attempt to reclaim the space used by this dead data unless the space is at the end of the table and an exclusive table lock can be easily obtained. Unused space at the start or middle of the file does not result in the file being shortened and space returned to the operating system. This variant of VACUUM can be run concurrently with normal database operations.

The second form is the VACUUM FULL command. This uses a more aggressive algorithm for reclaiming the space consumed by dead row versions. Any space that is freed by VACUUM FULL is immediately returned to the operating system, and the table data is physically compacted on the disk. Unfortunately, this variant of the VACUUM command acquires an exclusive lock on each table while VACUUM FULL is processing it. Therefore, frequently using VACUUM FULL can have an extremely negative effect on the performance of concurrent database queries.

Fortunately, the autovacuum daemon monitors table activity and performs VACUUMs when necessary. This eliminates the need for administrators to worry about disk space recovery in all but the most unusual cases.

For administrators who want to control VACUUM themselves, the standard form of VACUUM is best used to maintain a steady-state usage of disk space. If you need to return disk space to the operating system, you can use VACUUM FULL, but this is unwise if the table will just grow again in the future. Moderately frequent standard VACUUM runs are a better approach than infrequent VACUUM FULL runs for maintaining heavily-updated tables. However, if some heavily-updated tables have gone too long with infrequent VACUUM, you can use VACUUM FULL or CLUSTER to get performance back (it is much slower to scan a table containing almost nothing but dead rows).
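
As a sketch of the alternatives just described (mytable and mytable_pkey are placeholder names):

VACUUM mytable;                      -- routine: marks dead space for reuse
VACUUM FULL mytable;                 -- compacts the table, but locks it exclusively
CLUSTER mytable USING mytable_pkey;  -- rewrites the table in index order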

For those not using autovacuum, one approach is to schedule a database-wide VACUUM once a day during a low-usage period, supplemented by more frequent vacuuming of heavily-updated tables if necessary. (Some installations with extremely high update rates vacuum their busiest tables as often as once every few minutes.) If you have multiple databases in a cluster, don't forget to VACUUM each one; the program vacuumdb might be helpful.
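
For instance, a nightly job might simply run vacuumdb against the whole cluster; a sketch, assuming the job runs as a database superuser:

vacuumdb --all    # vacuums every database in the cluster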

VACUUM FULL is recommended for cases where you know you have deleted the majority of rows in a table, so that the steady-state size of the table can be shrunk substantially with VACUUM FULL's more aggressive approach. Use plain VACUUM, not VACUUM FULL, for routine vacuuming for space recovery.

If you have a table whose entire contents are deleted on a periodic basis, consider doing it with TRUNCATE rather than using DELETE followed by VACUUM. TRUNCATE removes the entire content of the table immediately, without requiring a subsequent VACUUM or VACUUM FULL to reclaim the now-unused disk space.
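
For example, with a hypothetical table named log_staging:

TRUNCATE log_staging;   -- instantly empties the table and releases its space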

23.1.2. Updating Planner Statistics

The PostgreSQL query planner relies on statistical information about the contents of tables in order to generate good plans for queries. These statistics are gathered by the ANALYZE command, which can be invoked by itself or as an optional step in VACUUM. It is important to have reasonably accurate statistics, otherwise poor choices of plans might degrade database performance.

As with vacuuming for space recovery, frequent updates of statistics are more useful for heavily-updated tables than for seldom-updated ones. But even for a heavily-updated table, there might be no need for statistics updates if the statistical distribution of the data is not changing much. A simple rule of thumb is to think about how much the minimum and maximum values of the columns in the table change. For example, a timestamp column that contains the time of row update will have a constantly-increasing maximum value as rows are added and updated; such a column will probably need more frequent statistics updates than, say, a column containing URLs for pages accessed on a website. The URL column might receive changes just as often, but the statistical distribution of its values probably changes relatively slowly.

It is possible to run ANALYZE on specific tables and even just specific columns of a table, so the flexibility exists to update some statistics more frequently than others if your application requires it. In practice, however, it is usually best to just analyze the entire database, because it is a fast operation: ANALYZE takes a statistically random sample of each table's rows rather than reading every single row.
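
A minimal sketch, with placeholder table and column names:

ANALYZE;                       -- the entire database
ANALYZE mytable;               -- one table
ANALYZE mytable (col1, col2);  -- only the listed columns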

Tip: Although per-column tweaking of ANALYZE frequency might not be very productive, you might well find it worthwhile to do per-column adjustment of the level of detail of the statistics collected by ANALYZE. Columns that are heavily used in WHERE clauses and have highly irregular data distributions might require a finer-grain data histogram than other columns. See ALTER TABLE SET STATISTICS.
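
For example, to collect a finer-grained histogram for one heavily filtered column (the names and target value here are illustrative):

ALTER TABLE mytable ALTER COLUMN status SET STATISTICS 500;
ANALYZE mytable (status);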

Fortunately, the autovacuum daemon monitors table activity and performs ANALYZEs when necessary. This eliminates the need for administrators to manually schedule ANALYZE runs.

For those not using autovacuum, one approach is to schedule a database-wide ANALYZE once a day at a low-usage time of day; this can usefully be combined with a nightly VACUUM. However, sites with relatively slowly changing table statistics might find that this is overkill, and that less-frequent ANALYZE runs are sufficient.

23.1.3. Preventing Transaction ID Wraparound Failures

PostgreSQL's MVCC transaction semantics depend on being able to compare transaction ID (XID) numbers: a row version with an insertion XID greater than the current transaction's XID is "in the future" and should not be visible to the current transaction. But since transaction IDs have limited size (32 bits at this writing) a cluster that runs for a long time (more than 4 billion transactions) would suffer transaction ID wraparound: the XID counter wraps around to zero, and all of a sudden transactions that were in the past appear to be in the future — which means their outputs become invisible. In short, catastrophic data loss. (Actually the data is still there, but that's cold comfort if you cannot get at it.) To avoid this, it is necessary to vacuum every table in every database at least once every two billion transactions.

The reason that periodic vacuuming solves the problem is that PostgreSQL distinguishes a special XID FrozenXID. This XID is always considered older than every normal XID. Normal XIDs are compared using modulo-2^31 arithmetic. This means that for every normal XID, there are two billion XIDs that are "older" and two billion that are "newer"; another way to say it is that the normal XID space is circular with no endpoint. Therefore, once a row version has been created with a particular normal XID, the row version will appear to be "in the past" for the next two billion transactions, no matter which normal XID we are talking about. If the row version still exists after more than two billion transactions, it will suddenly appear to be in the future. To prevent data loss, old row versions must be reassigned the XID FrozenXID sometime before they reach the two-billion-transactions-old mark. Once they are assigned this special XID, they will appear to be "in the past" to all normal transactions regardless of wraparound issues, and so such row versions will be good until deleted, no matter how long that is. This reassignment of old XIDs is handled by VACUUM.

VACUUM's behavior is controlled by the configuration parameter vacuum_freeze_min_age: any XID older than vacuum_freeze_min_age transactions is replaced by FrozenXID. Larger values of vacuum_freeze_min_age preserve transactional information longer, while smaller values increase the number of transactions that can elapse before the table must be vacuumed again.

The maximum time that a table can go unvacuumed is two billion transactions minus the vacuum_freeze_min_age that was used when it was last vacuumed. If it were to go unvacuumed for longer than that, data loss could result. To ensure that this does not happen, autovacuum is invoked on any table that might contain XIDs older than the age specified by the configuration parameter autovacuum_freeze_max_age. (This will happen even if autovacuum is otherwise disabled.)

This implies that if a table is not otherwise vacuumed, autovacuum will be invoked on it approximately once every autovacuum_freeze_max_age minus vacuum_freeze_min_age transactions. For tables that are regularly vacuumed for space reclamation purposes, this is of little importance. However, for static tables (including tables that receive inserts, but no updates or deletes), there is no need for vacuuming for space reclamation, and so it can be useful to try to maximize the interval between forced autovacuums on very large static tables. Obviously one can do this either by increasing autovacuum_freeze_max_age or by decreasing vacuum_freeze_min_age.
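
For example, after bulk-loading a table that will never be updated again, one might freeze it immediately so that the wraparound clock starts as late as possible; a sketch, with mytable as a placeholder:

SET vacuum_freeze_min_age = 0;  -- freeze every row version right away
VACUUM mytable;                 -- sensible only for completely static tables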

The sole disadvantage of increasing autovacuum_freeze_max_age is that the pg_clog subdirectory of the database cluster will take more space, because it must store the commit status for all transactions back to the autovacuum_freeze_max_age horizon. The commit status uses two bits per transaction, so if autovacuum_freeze_max_age has its maximum allowed value of two billion, pg_clog can be expected to grow to about half a gigabyte. If this is trivial compared to your total database size, setting autovacuum_freeze_max_age to its maximum allowed value is recommended. Otherwise, set it depending on what you are willing to allow for pg_clog storage. (The default, 200 million transactions, translates to about 50MB of pg_clog storage.)
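
In postgresql.conf terms, a sketch with illustrative values:

autovacuum_freeze_max_age = 2000000000  # maximum allowed; about half a gigabyte of pg_clog
vacuum_freeze_min_age = 100000000       # see the trade-offs discussed below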

One disadvantage of decreasing vacuum_freeze_min_age is that it might cause VACUUM to do useless work: changing a table row's XID to FrozenXID is a waste of time if the row is modified soon thereafter (causing it to acquire a new XID). So the setting should be large enough that rows are not frozen until they are unlikely to change any more. Another disadvantage of decreasing this setting is that details about exactly which transaction inserted or modified a row will be lost sooner. This information sometimes comes in handy, particularly when trying to analyze what went wrong after a database failure. For these two reasons, decreasing this setting is not recommended except for completely static tables.

To track the age of the oldest XIDs in a database, VACUUM stores XID statistics in the system tables pg_class and pg_database. In particular, the relfrozenxid column of a table's pg_class row contains the freeze cutoff XID that was used by the last VACUUM for that table. All normal XIDs older than this cutoff XID are guaranteed to have been replaced by FrozenXID within the table. Similarly, the datfrozenxid column of a database's pg_database row is a lower bound on the normal XIDs appearing in that database — it is just the minimum of the per-table relfrozenxid values within the database. A convenient way to examine this information is to execute queries such as:

SELECT relname, age(relfrozenxid) FROM pg_class WHERE relkind = 'r';
SELECT datname, age(datfrozenxid) FROM pg_database;

The age column measures the number of transactions from the cutoff XID to the current transaction's XID. Immediately after a VACUUM, age(relfrozenxid) should be a little more than the vacuum_freeze_min_age setting that was used (more by the number of transactions started since the VACUUM started). If age(relfrozenxid) exceeds autovacuum_freeze_max_age, an autovacuum will soon be forced for the table.
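
For instance, a query along these lines lists the tables closest to a forced autovacuum:

SELECT relname, age(relfrozenxid)
FROM pg_class
WHERE relkind = 'r'
ORDER BY age(relfrozenxid) DESC
LIMIT 10;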

If for some reason autovacuum fails to clear old XIDs from a table, the system will begin to emit warning messages like this when the database's oldest XIDs reach ten million transactions from the wraparound point:

WARNING:  database "mydb" must be vacuumed within 177009986 transactions
HINT:  To avoid a database shutdown, execute a full-database VACUUM in "mydb".

If these warnings are ignored, the system will shut down and refuse to execute any new transactions once there are fewer than 1 million transactions left until wraparound:

ERROR:  database is not accepting commands to avoid wraparound data loss in database "mydb"
HINT:  Stop the postmaster and use a standalone backend to VACUUM in "mydb".

The 1-million-transaction safety margin exists to let the administrator recover without data loss, by manually executing the required VACUUM commands. However, since the system will not execute commands once it has gone into the safety shutdown mode, the only way to do this is to stop the server and use a single-user backend to execute VACUUM. The shutdown mode is not enforced by a single-user backend. See the postgres reference page for details about using a single-user backend.
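
A sketch of the recovery sequence; the data directory path is a placeholder, and the exact invocation is documented on the postgres reference page:

postgres --single -D /usr/local/pgsql/data mydb

backend> VACUUM;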

23.1.4. The Auto-Vacuum Daemon

Beginning in PostgreSQL 8.1, there is an optional feature called autovacuum, whose purpose is to automate the execution of VACUUM and ANALYZE commands. When enabled, autovacuum checks for tables that have had a large number of inserted, updated or deleted tuples. These checks use the statistics collection facility; therefore, autovacuum cannot be used unless track_counts is set to true. In the default configuration, autovacuuming is enabled and the related configuration parameters are appropriately set.
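
The corresponding postgresql.conf entries, shown here with their usual defaults (a sketch, not a recommendation):

autovacuum = on      # requires track_counts to be enabled as well
track_counts = on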

Beginning in PostgreSQL 8.3, autovacuum has a multiprocess architecture: there is a daemon process, called the autovacuum launcher, which is in charge of starting autovacuum worker processes for all databases. The launcher distributes the work across time, attempting to start one worker in each database every autovacuum_naptime seconds. One worker is launched per database, with a maximum of autovacuum_max_workers processes running at the same time. If there are more than autovacuum_max_workers databases to be processed, the next database will be handled as soon as the first worker finishes. Each worker process checks each table within its database and executes VACUUM and/or ANALYZE as needed.

The autovacuum_max_workers setting limits how many workers may be running at any time. If several large tables all become eligible for vacuuming in a short amount of time, all autovacuum workers may end up vacuuming those tables for a very long time. This would result in other tables and databases not being vacuumed until a worker became available. There is no limit on how many workers might be in a single database, but workers do try to avoid repeating work that has already been done by other workers. Note that the number of running workers does not count towards the max_connections or superuser_reserved_connections limits.

Tables whose relfrozenxid value is more than autovacuum_freeze_max_age transactions old are always vacuumed. Otherwise, two conditions are used to determine which operation(s) to apply. If the number of obsolete tuples since the last VACUUM exceeds the "vacuum threshold", the table is vacuumed. The vacuum threshold is defined as:

vacuum threshold = vacuum base threshold + vacuum scale factor * number of tuples

where the vacuum base threshold is autovacuum_vacuum_threshold, the vacuum scale factor is autovacuum_vacuum_scale_factor, and the number of tuples is pg_class.reltuples. The number of obsolete tuples is obtained from the statistics collector; it is a semi-accurate count updated by each UPDATE and DELETE operation. (It is only semi-accurate because some information might be lost under heavy load.) For analyze, a similar condition is used: the threshold, defined as:

analyze threshold = analyze base threshold + analyze scale factor * number of tuples

is compared to the total number of tuples inserted or updated since the last ANALYZE.
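
As a worked example using the shipped defaults for vacuuming (a base threshold of 50 and a scale factor of 0.2): a table with 100,000 tuples is vacuumed once 50 + 0.2 * 100000 = 20,050 tuples have become obsolete since the last VACUUM.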

The default thresholds and scale factors are taken from postgresql.conf, but it is possible to override them on a table-by-table basis by making entries in the system catalog pg_autovacuum. If a pg_autovacuum row exists for a particular table, the settings it specifies are applied; otherwise the global settings are used. See Section 18.9 for more details on the global settings.

Besides the base threshold values and scale factors, there are five more parameters that can be set for each table in pg_autovacuum. The first, pg_autovacuum.enabled, can be set to false to instruct the autovacuum daemon to skip that particular table entirely. In this case autovacuum will only touch the table if it must do so to prevent transaction ID wraparound. The next two parameters, the vacuum cost delay (pg_autovacuum.vac_cost_delay) and the vacuum cost limit (pg_autovacuum.vac_cost_limit), are used to set table-specific values for the Cost-Based Vacuum Delay feature. The last two parameters, pg_autovacuum.freeze_min_age and pg_autovacuum.freeze_max_age, are used to set table-specific values for vacuum_freeze_min_age and autovacuum_freeze_max_age respectively.

If any of the values in pg_autovacuum are set to a negative number, or if a row is not present at all in pg_autovacuum for any particular table, the corresponding values from postgresql.conf are used.

There is not currently any support for making pg_autovacuum entries, except by doing manual INSERTs into the catalog. This feature will be improved in future releases, and it is likely that the catalog definition will change.
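
A sketch of such a manual entry, where mytable is a placeholder and the column order follows this release's pg_autovacuum definition (verify it against your catalog before use); -1 means "use the postgresql.conf value":

INSERT INTO pg_autovacuum
    VALUES ('mytable'::regclass, false,  -- vacrelid, enabled
            -1, -1,                      -- vac_base_thresh, vac_scale_factor
            -1, -1,                      -- anl_base_thresh, anl_scale_factor
            -1, -1,                      -- vac_cost_delay, vac_cost_limit
            -1, -1);                     -- freeze_min_age, freeze_max_age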

Caution

The contents of the pg_autovacuum system catalog are currently not saved in database dumps created by the tools pg_dump and pg_dumpall. If you want to preserve them across a dump/reload cycle, make sure you dump the catalog manually.

When multiple workers are running, the cost limit is "balanced" among all the running workers, so that the total impact on the system is the same, regardless of the number of workers actually running.