From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Siraj G <tosiraj(dot)g(at)gmail(dot)com>, Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: vacuuming taking long time for pg_attribute
Date: 2025-01-17 08:46:22
Message-ID: 1bd0d7ac8f0ee3a38a55f960403e856ad0c9f82a.camel@cybertec.at
Lists: pgsql-admin
On Fri, 2025-01-17 at 11:45 +0530, Siraj G wrote:
> The PgSQL version is 16 and it runs in Cloud SQL managed by GCP.
>
> The problem was that we were unable to get the object details in the
> schema browser (within Cloud SQL Studio) because it was timing out again
> and again. Several application jobs started to fail. Eventually we found that
> pg_catalog.pg_attribute had a huge number of dead tuples. The vacuum on this
> table took several hours, but the problem was eventually resolved.
> Command ran: vacuum pg_attribute;
>
> I would like to understand how this issue can be prevented. We do have
> autovacuum ON, and I could see that the last vacuum on this table was only
> about 30 hours ago.
>
> Appreciate any suggestions.
I can only guess, but my guess is that you are creating temporary tables
very frequently. Creating a temporary table inserts rows for its columns into
"pg_attribute", and those rows are deleted again when the session ends.
Perhaps autovacuum was just not fast enough, or perhaps there were
long-running transactions or queries that prevented autovacuum from cleaning
up the dead rows in "pg_attribute".
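You can check for such transactions with a query along these lines (the
60-character truncation of the query text is just for readability):

-- oldest open transactions; these hold back autovacuum's cleanup horizon
SELECT pid,
       state,
       xact_start,
       now() - xact_start AS xact_age,
       left(query, 60) AS query
FROM pg_stat_activity
WHERE xact_start IS NOT NULL
ORDER BY xact_start
LIMIT 10;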
Make sure your transactions are short and that autovacuum is configured
aggressively enough.
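As a rough sketch of what "aggressively enough" could look like: the values
below are only placeholders, not a recommendation for your workload, and on
Cloud SQL you would set them as database flags rather than with ALTER SYSTEM:

-- trigger autovacuum sooner and throttle it less (defaults: 0.2, 2ms, 60s)
ALTER SYSTEM SET autovacuum_vacuum_scale_factor = 0.05;
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = 1;
ALTER SYSTEM SET autovacuum_naptime = 15;
SELECT pg_reload_conf();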
Yours,
Laurenz Albe