From: Ron Mayer <rm_pg(at)cheapcomplexdevices(dot)com>
To: Greg Stark <gsstark(at)mit(dot)edu>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Simon Riggs <simon(at)2ndquadrant(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: remove flatfiles.c
Date: 2009-09-01 23:56:52
Message-ID: 4A9DB4C4.2060503@cheapcomplexdevices.com
Lists: pgsql-hackers

Greg Stark wrote:
>
> That's what I want to believe. But picture if you have, say, a
> 1-terabyte table which is 50% dead tuples and you don't have a spare
> terabyte to rewrite the whole table.
Could one hypothetically do

    update bigtable set pk = pk
     where ctid in (select ctid from bigtable
                    order by ctid desc limit 100);
    vacuum;

and repeat until max(ctid) is small enough?
Sure, it'll take longer than vacuum full; but at first glance
it seems lightweight enough to do even on a live, heavily accessed
table.
IIRC I tried something like this once, and it worked to some extent,
but after a few loops it didn't shrink the table as much as I had expected.
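
A minimal sketch of one pass, reusing the names from the example above
(a hypothetical table "bigtable" with primary-key column "pk"); since
VACUUM can't run inside a function or transaction block, the repetition
has to happen client-side, e.g. a shell loop around psql -f:

    -- Rewrite the 100 tuples nearest the end of the file; their new
    -- versions should land in free space earlier in the table.
    update bigtable set pk = pk
     where ctid in (select ctid from bigtable
                    order by ctid desc limit 100);

    -- Let vacuum reclaim the dead tail and truncate the file.
    vacuum bigtable;

    -- Progress check: highest remaining tuple location and file size
    -- (there's no max() aggregate for tid, hence the order-by trick).
    select ctid from bigtable order by ctid desc limit 1;
    select pg_relation_size('bigtable');

If the file stops shrinking even though the high-ctid tuples keep
getting updated, the new tuple versions are probably being placed back
on the same pages rather than migrating toward the front of the table.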