From: Francisco Olarte <folarte(at)peoplecall(dot)com>
To: "Muthusamy, Sivaraman" <sivaraman(dot)muthusamy(at)in(dot)verizon(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: How to clean/truncate / VACUUM FULL pg_largeobject without (much) downtime?
Date: 2015-05-11 15:17:00
Message-ID: CA+bJJbwdT14XG6okYn=2XRFXqKw5MNnZLzWDSND8e2i5AvCGGQ@mail.gmail.com
Lists: pgsql-general
Hi Muthusamy:
On Mon, May 11, 2015 at 11:48 AM, Muthusamy, Sivaraman
<sivaraman(dot)muthusamy(at)in(dot)verizon(dot)com> wrote:
> With regards to this pg_largeobject, I have the following questions:
> - What is this pg_largeobject ?
Just looking it up in the alphabetical index leads you here:
http://www.postgresql.org/docs/9.4/static/catalog-pg-largeobject.html
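If you just want to poke at it from psql, something like this shows what is
in there (you may need to be a superuser to read it directly):

    -- The catalog has three columns: loid, pageno and data.
    \d pg_largeobject

    -- Each row is one chunk of one large object.
    SELECT loid, pageno, octet_length(data) AS chunk_bytes
      FROM pg_largeobject
     LIMIT 10;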
> - what does it contain ? tried PostgreSQL documentation and lists, but
> could not get much from it.
The large object data. Read the docs again; there is a chapter devoted
to it: http://www.postgresql.org/docs/9.4/static/largeobjects.html
where it tells you (under implementation features,
http://www.postgresql.org/docs/9.4/static/lo-implementation.html) why
you have several 'duplicate' rows per loid (it is because large
objects are split into chunks).
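If you want to see the split on your side, a couple of quick queries
like these should do it (just a sketch, untested here):

    -- Distinct large objects vs. total chunk rows.
    SELECT count(DISTINCT loid) AS objects,
           count(*)             AS chunk_rows
      FROM pg_largeobject;

    -- Chunks per object, biggest first.
    SELECT loid, count(*) AS chunks
      FROM pg_largeobject
     GROUP BY loid
     ORDER BY chunks DESC
     LIMIT 10;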
> - why does it grow ?
Because you modify or write to it.
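Anything that writes a large object ends up as inserts/updates of those
chunk rows, and the old row versions stay around as dead rows until
vacuum gets to them. A made-up 9.4 example (the 16409 oid is invented):

    -- Create a new large object from a bytea; returns its loid.
    SELECT lo_from_bytea(0, 'first version'::bytea);

    -- Overwrite part of it: the touched chunk rows get new versions,
    -- and the old ones become dead rows that only vacuum reclaims.
    SELECT lo_put(16409, 0, 'second version'::bytea);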
> - Was there any configuration change that may have triggered this to
> grow? For last one year or so, there was no problem, but it started growing
> all of sudden in last two weeks. The only change we had in last two weeks
> was that we have scheduled night base-backup for it and auto-vacuum feature
> enabled.
You should have a look, especially if you modify it, as maybe it is
not getting vacuumed correctly. I do not recall the relevant commands
offhand, but they are all in the manual.
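From memory, the usual candidates look something like the lines below;
treat them as a sketch and check the manual for the exact options (the
database name is made up):

    -- Remove orphaned large objects (client program, run from the shell):
    --   vacuumlo -v your_database

    -- Reclaim dead chunk rows for reuse; no exclusive lock needed.
    VACUUM pg_largeobject;

    -- Actually shrink the files, but this takes an exclusive lock
    -- on the catalog while it rewrites it.
    VACUUM FULL pg_largeobject;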
> - pg_largeobject contains so many duplicate rows (loid). Though there
> are only about 0.6 million rows (LOIDs), but the total number of rows
> including duplicates are about 59million records. What are all these ?
As I commented above, LOs are split into chunks of 2K, according to
the docs in one of the above links.
Your numbers hint at 0.6M objects of about 200K each, 100 chunks
apiece, for about 60M records. At 1 chunk per page, with very bad
luck, this could easily go to 480G, or 120G if you've got them 4 to a
page, so 200G does not seem that big (I do not know if TOAST
compression is in play here, but the numbers look OK to me if the
data is not append-only).
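If you want to check that arithmetic against reality, something like
this should do (again a sketch, not tested here):

    -- Actual on-disk size of pg_largeobject, heap plus index.
    SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));

    -- Average chunk rows per 8K heap page right now.
    SELECT count(*)::numeric / (pg_relation_size('pg_largeobject') / 8192)
      FROM pg_largeobject;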
Francisco Olarte.