Re: vacuumdb question/problem

From: David Ondrejik <David(dot)Ondrejik(at)noaa(dot)gov>
To: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: vacuumdb question/problem
Date: 2011-07-21 19:12:12
Message-ID: 4E287A0C.5090501@noaa.gov
Lists: pgsql-admin

I think I see a (my) fatal flaw that will cause the cluster to fail.

>> From the info I received from previous posts, I am going to change
>> my game plan. If anyone has thoughts as to different process or
>> can confirm that I am on the right track, I would appreciate your
>> input.
>>
>> 1. I am going to run a CLUSTER on the table instead of a VACUUM
>> FULL.
Kevin Grittner stated:
> If you have room for a second copy of your data, that is almost
> always much faster, and less prone to problems.
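
(For reference, what I had planned to run is something along these lines; the table and index names are placeholders. As I understand it, CLUSTER rewrites the whole table in index order, so it needs room for a full second copy while it runs.)

    -- rewrite the table in the order of the given index, compacting it
    -- (on releases before 8.4 the syntax is: CLUSTER my_index ON my_big_table)
    CLUSTER my_big_table USING my_index;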

I looked at the table sizes in the database, and the table I am trying to
cluster is 275G, while I only have 57G free. I don't know how much of that
275G is live data and how much is empty space, so I can't tell whether there
is room for a second copy of the data. My guess is the CLUSTER would fail
due to lack of disk space.
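
I suppose I could get a rough idea of how much is live data versus dead rows
from the statistics views, with something along these lines (the table name
is just a placeholder here):

    -- approximate on-disk size of the table plus its indexes
    SELECT pg_size_pretty(pg_total_relation_size('my_big_table'));

    -- live vs. dead row estimates from the statistics collector
    SELECT n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
      FROM pg_stat_user_tables
     WHERE relname = 'my_big_table';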

Are there any other options??

If I unload the table to a flat file, then drop the table from the database,
recreate it, and finally reload the data, will that reclaim the space?
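
For clarity, the sequence I have in mind is roughly the following (the table
name, file path, and column list are only placeholders, not the real
definition):

    -- 1. unload the table to a flat file (server-side path; \copy in psql
    --    writes to a client-side file instead)
    COPY my_big_table TO '/data/tmp/my_big_table.dat';

    -- 2. drop the table and recreate it from its original definition
    DROP TABLE my_big_table;
    CREATE TABLE my_big_table (
        lid     varchar(8),
        obstime timestamp,
        value   double precision
    );  -- placeholder column list; the real DDL would be used

    -- 3. reload the data into the freshly created (empty) table
    COPY my_big_table FROM '/data/tmp/my_big_table.dat';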

Kevin - thanks for the book recommendation. Will order it tomorrow.

Thanks again for all the technical help!

Dave

Attachment: david_ondrejik.vcf (text/x-vcard, 309 bytes)
