From: Bill Moran <wmoran(at)potentialtech(dot)com>
To: Adam Hooper <adam(at)adamhooper(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: VACUUM FULL pg_largeobject without (much) downtime?
Date: 2015-02-03 17:58:11
Message-ID: 20150203125811.3f2e582080ed0ae51dc65a6e@potentialtech.com
Lists: pgsql-general
On Tue, 3 Feb 2015 10:53:11 -0500
Adam Hooper <adam(at)adamhooper(dot)com> wrote:
> Hi list,
>
> We run a website. We once stored all sorts of files in pg_largeobject,
> which grew to 266GB. This is on an m1.large on Amazon EC2 on a single,
> magnetic, non-provisioned-IO volume. In that context, 266GB is a lot.
>
> We've since moved all but 60GB of that data to S3. We plan to reduce
> that to 1GB by deleting old, unused data. Of course, pg_largeobject
> will still take up 266GB because autovacuum doesn't reduce disk space.
>
> We want to move our entire database to an SSD volume, with as little
> downtime as possible. My tentative plan:
>
> 1. Use CREATE TABLESPACE and pg_repack to move user tables to a temporary volume
> 2. Take down Postgres, copy system-table files to the new volume, and
> start up Postgres from the new volume
> 3. Use pg_repack to move everything to the new volume
>
> This plan won't work: Step 2 will be too slow because pg_largeobject
> still takes 266GB. We tested `VACUUM FULL pg_largeobject` on our
> staging database: it took two hours, during which pg_largeobject was
> locked. When pg_largeobject is locked, lots of our website doesn't
> work.
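[Editor's note: step 1 of the plan above can be sketched with pg_repack's tablespace options. The database, table, and tablespace names below are placeholders, not from the original message; verify the option names against your installed pg_repack version.]

```shell
# Create the temporary tablespace on the new volume (path is illustrative)
psql -d mydb -c "CREATE TABLESPACE tmp_space LOCATION '/mnt/tmp-volume/pgdata'"

# Rebuild one user table online, placing it (and, with --moveidx, its
# indexes) in the new tablespace; pg_repack only takes brief locks
pg_repack -d mydb --table=my_user_table --tablespace=tmp_space --moveidx
```

pg_repack cannot operate on system catalogs such as pg_largeobject, which is why the plan needs a separate step for those.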
Sometimes CLUSTER is faster than VACUUM FULL ... have you tested CLUSTERing
of pg_largeobject on your test system to see if it's fast enough?
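[Editor's note: a CLUSTER pass might look like the following, run as a superuser. pg_largeobject_loid_pn_index is pg_largeobject's built-in index; confirm it on your version before running.]

```sql
-- CLUSTER rewrites the table in index order and returns freed space to
-- the OS. Like VACUUM FULL it holds an ACCESS EXCLUSIVE lock for the
-- duration, but it can finish faster on a heavily bloated table.
CLUSTER pg_largeobject USING pg_largeobject_loid_pn_index;

-- Afterwards, confirm the on-disk size actually shrank
SELECT pg_size_pretty(pg_total_relation_size('pg_largeobject'));
```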
How big is the non-lo data?
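[Editor's note: one way to answer that, as a rough sketch — total database size minus the pg_largeobject relation approximates the non-large-object data.]

```sql
SELECT pg_size_pretty(
         pg_database_size(current_database())
         - pg_total_relation_size('pg_largeobject')) AS non_lo_size;
```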
--
Bill Moran