Re: vacuumdb question/problem

From: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
To: "David Ondrejik" <David(dot)Ondrejik(at)noaa(dot)gov>, "pgsql-admin" <pgsql-admin(at)postgresql(dot)org>
Subject: Re: vacuumdb question/problem
Date: 2011-07-21 20:03:09
Message-ID: 4E283FAD020000250003F66D@gw.wicourts.gov
Lists: pgsql-admin

David Ondrejik <David(dot)Ondrejik(at)noaa(dot)gov> wrote:

> I think I see a (my) fatal flaw that will cause the cluster to
> fail.

> Kevin Grittner stated:
>> If you have room for a second copy of your data, that is almost
>> always much faster, and less prone to problems.
>
> I looked at the sizes of the tables in the database, and the
> table I am trying to run CLUSTER on is 275G, while I only have
> 57G free. I don't know how much of that 275G is live data and
> how much is empty space, so I am guessing the cluster would
> fail for lack of room to make a second copy of the data.

Seems quite likely, although not certain, depending (as you said)
on the level of bloat.
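
If it would help to see how much of that 275G is live data before
deciding, one rough sketch (this assumes the pgstattuple contrib
module is installed -- on pre-9.1 releases that means running the
contrib SQL script -- and uses placeholder database and table names)
would be something like:

  # report live vs. dead tuple bytes and free space for the table
  psql -d yourdb -c "SELECT * FROM pgstattuple('public.yourtable');"

The tuple_len, dead_tuple_len, and free_space columns give a feel
for how much of the file is live rows versus reclaimable space.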

> Are there any other options??
>
> If I unload the table to a flat file, then drop the table from
> the database, recreate it, and finally reload the data, will
> that reclaim the space?

Yeah, but you don't necessarily need to go quite that far. You can
pg_dump an individual table with the -t option (to some medium where
you have room), and then drop and restore the table. It would be
prudent to verify that dump very carefully before dropping the
table, of course. I'm just paranoid enough that I would probably
also make sure I have an up-to-date PITR-style backup before issuing
the DROP TABLE command.
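
As a rough sketch of that sequence (table, database, and path names
here are just placeholders; the custom-format dump assumes you have
pg_restore available on that machine):

  # dump just the one table, custom format, to a filesystem with room
  pg_dump -t big_table -Fc -f /mnt/backup/big_table.dump yourdb

  # list the dump's contents to confirm it is readable before
  # touching the original
  pg_restore -l /mnt/backup/big_table.dump

  # drop the bloated table, then recreate and reload it from the dump
  psql -d yourdb -c "DROP TABLE big_table;"
  pg_restore -d yourdb /mnt/backup/big_table.dump

The restore rebuilds the table and its indexes from scratch, so the
table comes back without the accumulated dead space.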

-Kevin
