From: | "Muthusamy, Sivaraman" <sivaraman(dot)muthusamy(at)in(dot)verizon(dot)com> |
---|---|
To: | "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org> |
Subject: | How to clean/truncate / VACUUM FULL pg_largeobject without (much) downtime? |
Date: | 2015-05-11 09:55:06 |
Message-ID: | CE4383699009064E82F0D16438F7C4660418C79CAB@MS-BAN-E7MB01.intl1.one.verizon.com |
Lists: pgsql-performance
Hi Group,
We are facing a problem where pg_catalog.pg_largeobject has been growing fast over the last two weeks. The actual data in user tables is about 60 GB, but the pg_catalog.pg_largeobject table is 200 GB plus. Please let me know how to clean/truncate this table without losing any user data in other tables.
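For reference, the sizes above were obtained with queries along these lines (a rough sketch; the exact statements we ran may have differed slightly):

    -- total size of pg_largeobject, including its index
    SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'));

    -- size of the whole database, for comparison
    SELECT pg_size_pretty(pg_database_size(current_database()));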
With regards to this pg_largeobject, I have the following questions:
- What is this pg_largeobject table?
- What does it contain? I tried the PostgreSQL documentation and mailing lists, but could not get much from them.
- Why does it grow?
- Was there any configuration change that may have triggered this growth? For the last year or so there was no problem, but it started growing all of a sudden in the last two weeks. The only changes in that period were that we scheduled a nightly base backup and enabled the autovacuum feature.
- pg_largeobject contains many rows with the same loid. There are only about 0.6 million distinct LOIDs, but the total row count, including these duplicates, is about 59 million records (counted roughly as shown below). What are all these rows?
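The duplicate figures above came from a count along these lines (sketch; loid is the large-object OID column of the pg_largeobject catalog):

    SELECT count(*) AS total_rows,
           count(DISTINCT loid) AS distinct_loids
    FROM pg_catalog.pg_largeobject;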
Kindly help us understand this and get the issue cleared; your quick help on this is much appreciated.
Thanks and Regards
M.Shiva