From: Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com>
To: "Muthusamy, Sivaraman" <sivaraman(dot)muthusamy(at)in(dot)verizon(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: How to clean/truncate / VACUUM FULL pg_largeobject without (much) downtime?
Date: 2015-05-22 21:17:44
Message-ID: 555F9CF8.8000708@BlueTreble.com
Lists: pgsql-performance
On 5/11/15 4:55 AM, Muthusamy, Sivaraman wrote:
> Hi Group,
>
> Facing a problem where pg_catalog.pg_largeobject has been growing fast
> recently, in the last two weeks. The actual data itself, in user tables,
> is about 60GB, but the pg_catalog.pg_largeobject table is 200GB plus.
> Please let me know how to clean/truncate this table without losing any
> user data in other tables.
Autovacuum should be taking care of it for you, though you could also
try a manual vacuum (VACUUM pg_largeobject;).
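For reference, a minimal sketch of the two options: a plain VACUUM makes
dead space reusable but generally won't shrink the file on disk, while
VACUUM FULL returns space to the OS but holds an exclusive lock while it
rewrites the table.

    -- Non-blocking: marks dead rows as reusable free space
    VACUUM VERBOSE pg_largeobject;

    -- Rewrites the table and returns space to the OS, but blocks all
    -- access to large objects for the duration
    VACUUM FULL pg_largeobject;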
> With regards to this pg_largeobject, I have the following questions:
>
> - What is this pg_largeobject?
It stores the data for large objects:
http://www.postgresql.org/docs/9.4/static/lo-interfaces.html
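For example (the file path and OID below are made up), everything created
through that interface ends up as chunk rows in pg_catalog.pg_largeobject:

    -- Server-side import: creates a new large object from a file and
    -- returns its loid; the contents are stored in pg_largeobject
    SELECT lo_import('/tmp/example.bin');

    -- Deleting your own row that references an loid does NOT delete the
    -- object; it has to be unlinked explicitly (contrib's vacuumlo can
    -- find and remove orphaned ones)
    SELECT lo_unlink(16405);  -- 16405 is a placeholder loid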
> - What does it contain? I tried the PostgreSQL documentation and lists,
> but could not get much from them.
>
> - Why does it grow?
>
> - Was there any configuration change that may have triggered this
> growth? For the last year or so there was no problem, but it started
> growing all of a sudden in the last two weeks. The only changes we made
> in the last two weeks were scheduling a nightly base-backup and
> enabling the autovacuum feature.
Changes to autovacuum settings could certainly cause this.
Long-running transactions would prevent cleanup, as would any prepared
transactions (which should really be disabled unless you explicitly need
them).
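A quick way to check for both, using only standard system views:

    -- Sessions with an open transaction, oldest first; long-lived ones
    -- prevent vacuum from removing dead rows
    SELECT pid, state, xact_start, query
    FROM pg_stat_activity
    WHERE xact_start IS NOT NULL
    ORDER BY xact_start;

    -- Any rows here are prepared transactions, which pin old row
    -- versions until they are committed or rolled back
    SELECT gid, prepared, owner
    FROM pg_prepared_xacts;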
> - pg_largeobject contains many duplicate rows (loid). There are only
> about 0.6 million distinct LOIDs, but the total number of rows
> including duplicates is about 59 million. What are all these?
Each row can only be ~2KB wide, so any LO that's larger than that will
be split into multiple rows.
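That roughly squares with your numbers: 59 million rows at up to 2KB per
chunk is about 120GB of chunk data before any free space. To see how the
rows map onto objects (assuming the default 2KB chunk size):

    -- Chunk count and approximate size of the ten largest objects
    SELECT loid, count(*) AS chunks,
           pg_size_pretty(count(*) * 2048) AS approx_size
    FROM pg_largeobject
    GROUP BY loid
    ORDER BY chunks DESC
    LIMIT 10;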
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com