Re: VACUUM FULL pg_largeobject without (much) downtime?

From: Adam Hooper <adam(at)adamhooper(dot)com>
To: Bill Moran <wmoran(at)potentialtech(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: VACUUM FULL pg_largeobject without (much) downtime?
Date: 2015-02-03 19:17:03
Message-ID: CAMWjz6F_y5Sv2M6m837njJavqXBT-7HL+yqqZ66Q5V4sMwwYvQ@mail.gmail.com
Lists: pgsql-general

On Tue, Feb 3, 2015 at 12:58 PM, Bill Moran <wmoran(at)potentialtech(dot)com> wrote:
> On Tue, 3 Feb 2015 10:53:11 -0500
> Adam Hooper <adam(at)adamhooper(dot)com> wrote:
>
>> This plan won't work: Step 2 will be too slow because pg_largeobject
>> still takes 266GB. We tested `VACUUM FULL pg_largeobject` on our
>> staging database: it took two hours, during which pg_largeobject was
>> locked. When pg_largeobject is locked, lots of our website doesn't
>> work.
>
> Sometimes CLUSTER is faster than VACUUM FULL ... have you tested CLUSTERing
> of pg_largeobject on your test system to see if it's fast enough?

On the 30GB that's left on staging, it takes 50 minutes. Unfortunately,
our staging database is only down to 30GB because we already completed a
VACUUM FULL on it, and that operation is hard to revert. In any case, I
need an orders-of-magnitude improvement, and this clearly isn't it.
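
For the record, the test was something like this (a sketch; "mydb" is a
placeholder, and note that CLUSTER takes the same ACCESS EXCLUSIVE lock
as VACUUM FULL, so large objects stay inaccessible while it runs):

    # Time the rewrite. Requires superuser, since pg_largeobject is a
    # system catalog; the USING index is its built-in (loid, pageno) index.
    time psql -d mydb -c \
      'CLUSTER pg_largeobject USING pg_largeobject_loid_pn_index;'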

> How big is the non-lo data?

It's 65GB, but I've used pg_repack to move it to a separate tablespace
online, so it won't add to the downtime.
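
That looked something like this (a sketch with a made-up tablespace
name; note pg_repack skips system catalogs, so it can't do the same for
pg_largeobject itself):

    # Rewrite all regular tables, and their indexes via --moveidx, into
    # the new tablespace online, holding only brief locks.
    pg_repack -d mydb --tablespace fast_ts --moveidx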

Enjoy life,
Adam

--
Adam Hooper
+1-613-986-3339
http://adamhooper.com
