Re: Unable to Vacuum Large Defragmented Table

From: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>
To: Igal Sapir <igal(at)lucee(dot)org>
Cc: "Psql_General (E-mail)" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Unable to Vacuum Large Defragmented Table
Date: 2019-04-08 03:11:31
Message-ID: CAKJS1f8rT6tqgAkNAa35QooU_aQZB5nftf_j-rpT_Unic99hYQ@mail.gmail.com
Lists: pgsql-general

On Mon, 8 Apr 2019 at 14:57, Igal Sapir <igal(at)lucee(dot)org> wrote:
> However, I have now deleted about 50,000 rows more and the table has only 119,688 rows. The pg_relation_size() still shows 31MB and pg_total_relation_size() still shows 84GB.
>
> It doesn't make sense that after deleting about 30% of the rows the values here do not change.

Deleting rows does not make the table any smaller; it just creates
dead rows in the table. VACUUM releases the space used by those dead
rows and turns it back into free space. A normal VACUUM (not FULL)
can only shrink the table when completely empty pages are found at
the end of the table.
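To make that concrete, here is a sketch of what you would see (the
table name 'mytable' and the WHERE clause are made-up placeholders):

```sql
-- Hypothetical example: 'mytable' is a placeholder name.
DELETE FROM mytable WHERE created < now() - interval '30 days';

-- The on-disk size is unchanged; the deleted rows are only marked dead.
SELECT pg_size_pretty(pg_total_relation_size('mytable'));

-- A normal VACUUM turns the dead rows into reusable free space, but it
-- only truncates the file when the trailing pages are completely empty.
VACUUM mytable;
SELECT pg_size_pretty(pg_total_relation_size('mytable'));
```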

> Attempting to copy the data to a different table results in the out of disk error as well, so that is in line with your assessment. But it actually just shows the problem. The new table to which the data was copied (though failed due to out of disk) shows 0 rows, but pg_total_relation_size() for that table shows 27GB. So now I have an "empty" table that takes 27GB of disk space.

I think the best thing to do is perform a normal VACUUM on the table,
then CREATE EXTENSION IF NOT EXISTS pgstattuple; then do SELECT *
FROM pgstattuple('<tablename>'); and the same again on the toast
table. If your table still contains many dead rows then perhaps an
open transaction is stopping those rows from being turned into free
space. Once pgstattuple reports that "tuple_len" for the table, its
toast table and all its indexes has been reduced to an acceptable
value, you should try a VACUUM FULL. Remember that VACUUM FULL must
also write WAL, so if WAL is on the same volume, you'll need to
consider the space required for that when deciding how much data to
remove from the table.
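The steps above could look roughly like this ('mytable' is a
placeholder, and the pg_toast name must be filled in from the first
query's output):

```sql
CREATE EXTENSION IF NOT EXISTS pgstattuple;

-- Dead-tuple counts, tuple_len, and free-space figures for the table.
SELECT * FROM pgstattuple('mytable');

-- Find the table's toast relation, then inspect it the same way.
SELECT reltoastrelid::regclass AS toast_table
FROM pg_class
WHERE oid = 'mytable'::regclass;

-- Substitute the toast_table name returned above.
SELECT * FROM pgstattuple('pg_toast.pg_toast_NNNNN');
```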

> This is mostly transient data, so I don't mind deleting rows, but if some day this could happen in production then I have to know how to deal with it without losing all of the data.

For the future, it would be better to delete more often than to wait
until the table grows too large. A normal VACUUM will turn the space
used by dead tuples back into free space, so if it is done often
enough there won't be a need for VACUUM FULL.
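One way to keep an eye on this is the statistics view (again,
'mytable' is a placeholder name):

```sql
-- Estimated dead tuples and the last (auto)vacuum times for the table.
SELECT relname, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'mytable';
```

If n_dead_tup stays high between deletes, more frequent manual VACUUMs
or more aggressive autovacuum settings for that table may help.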

--
David Rowley http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services
