From: Sergey Konoplev <gray(dot)ru(at)gmail(dot)com>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>
Cc: postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Bloated tables and why is vacuum full the only option
Date: 2014-02-09 23:29:41
Message-ID: CAL_0b1tsvhM77KkSzhY2BWrRZxyy8OxS1NRMK4H7Ae=DgaLurA@mail.gmail.com
Lists: pgsql-performance
On Sun, Feb 9, 2014 at 2:58 PM, Claudio Freire <klaussfreire(at)gmail(dot)com> wrote:
> On Sun, Feb 9, 2014 at 7:32 PM, Sergey Konoplev <gray(dot)ru(at)gmail(dot)com> wrote:
>> Try pgcompact, it was designed particularly for cases like yours:
>> https://github.com/grayhemp/pgtoolkit.
>
> It's a pity that that requires several sequential scans of the tables.
> For my case, that's probably as intrusive as the exclusive locks.
You should probably run it with --no-pgstattuple if those seq scans
are what concern you. If your tables are not TOASTed, the
approximation method of gathering statistics should work pretty well
for you.
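
To illustrate the general idea, here is a rough sketch of one way such
an approximation can work (this is only an illustration, not
pgcompact's actual implementation; the DSN and table name are
placeholders): estimate the table's expected size from planner
statistics and compare it with the on-disk size, instead of doing
pgstattuple's full scan.

import psycopg2

# Placeholders: adjust the DSN and table name for your setup.
# Assumes the table has been ANALYZEd recently.
TABLE = "public.mytable"
conn = psycopg2.connect("dbname=mydb")
cur = conn.cursor()

# On-disk size of the main relation fork (TOAST not included).
cur.execute("SELECT pg_relation_size(%s::regclass)", (TABLE,))
actual_bytes, = cur.fetchone()

# Row count estimate from the last ANALYZE.
cur.execute("SELECT reltuples FROM pg_class WHERE oid = %s::regclass",
            (TABLE,))
reltuples, = cur.fetchone()

# Average row width: sum of per-column widths from pg_stats, plus a
# ballpark ~28 bytes of per-tuple header/item overhead.
cur.execute(
    "SELECT coalesce(sum(avg_width), 0) FROM pg_stats "
    "WHERE schemaname || '.' || tablename = %s", (TABLE,))
avg_width, = cur.fetchone()

expected_bytes = reltuples * (avg_width + 28)
bloat_bytes = max(actual_bytes - expected_bytes, 0)
print("approx bloat: %.1f%% (%d of %d bytes)" % (
    100.0 * bloat_bytes / max(actual_bytes, 1), bloat_bytes, actual_bytes))

cur.close()
conn.close()

The trade-off is the usual one: pgstattuple gives exact numbers but
reads the whole table, while a statistics-based estimate like the one
above is cheap but only as fresh as the last ANALYZE.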
> I noticed I didn't mention, but the tables involved are around 20-50GB in size.
That is not something I would worry about. I regularly use it with
even bigger tables.
--
Kind regards,
Sergey Konoplev
PostgreSQL Consultant and DBA
http://www.linkedin.com/in/grayhemp
+1 (415) 867-9984, +7 (901) 903-0499, +7 (988) 888-1979
gray(dot)ru(at)gmail(dot)com