From: Stephan Szabo <sszabo(at)megazone23(dot)bigpanda(dot)com>
To: Dmitry Tkach <dmitry(at)openratings(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Lincoln Yeoh <lyeoh(at)pop(dot)jaring(dot)my>, pgsql-general(at)postgresql(dot)org
Subject: Re: Large table update/vacuum PLEASE HELP!
Date: 2002-04-17 16:51:01
Message-ID: 20020417094539.I62182-100000@megazone23.bigpanda.com
Lists: pgsql-general
On Wed, 17 Apr 2002, Dmitry Tkach wrote:
> >In the 10% case, you should be within the realm where the table's steady
> >state size is around that much more, given reasonably frequent normal
> >VACUUMs and an appropriately sized free space map.
> >
> Are you saying that, if I, say, update 1000 tuples today, and another
> 1000 tomorrow, it will reuse today's dead tuples, and not create new
> ones, so that I end up with just 1000 of them, not 2000?
>
> Just making sure...
The expectation is that if you update 1000 tuples today, do a normal
vacuum once no transaction is left that can see the old state of those
tuples, and then update 1000 tuples tomorrow, it will attempt to reuse
as much of that "dead" space as possible. That may very well mean you
end up with, say, 1200 of them, but no fewer than 1000 and almost
certainly not 2000.
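
For concreteness, here is a minimal sketch of that cycle (the table and
column names are made up for illustration, not from the original thread):

    -- hypothetical table; walks through the update / vacuum / update cycle
    UPDATE big_table SET flag = true  WHERE id <= 1000;   -- leaves ~1000 dead row versions behind
    VACUUM big_table;          -- run once no transaction can still see the old versions
    UPDATE big_table SET flag = false WHERE id <= 1000;   -- new versions largely reuse the reclaimed space
    VACUUM VERBOSE big_table;  -- reports how many dead row versions were found and removed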
For 1000 that should work; for much larger numbers you may need to play
with the settings to get an appropriate effect (as the number of updated
rows grows by orders of magnitude, you may see the wasted space approach
2x, because the free space map isn't large enough unless you raise those
settings).
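
The settings being referred to are the free space map parameters in
postgresql.conf; a rough sketch (the values below are only placeholders,
and the defaults differ between versions):

    # postgresql.conf excerpt -- free space map sizing (illustrative values only)
    max_fsm_relations = 1000    # how many tables and indexes the free space map tracks
    max_fsm_pages = 100000      # total pages with free space it can remember across all relations

Roughly, max_fsm_pages needs to cover the number of pages that accumulate
dead space between vacuums; if it is too small, some reclaimable space is
simply forgotten and the table keeps growing instead of being reused.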