From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
Cc: c k <shreeseva(dot)learning(at)gmail(dot)com>, Richard Huxton <dev(at)archonet(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: UPDATE
Date: 2009-02-19 16:06:56
Message-ID: 8607.1235059616@sss.pgh.pa.us
Lists: pgsql-general
Craig Ringer <craig(at)postnewspapers(dot)com(dot)au> writes:
> Tom Lane wrote:
>> This is not correct; PG *never* overwrites an existing record (at least
>> not in any user-accessible code paths).
> That's what I always thought, but I encountered some odd behaviour while
> trying to generate table bloat that made me think otherwise. I generated
> a large table full of dummy data then repeatedly UPDATEd it. To my
> surprise, though, it never grew beyond the size it had at creation time
> ... if the transaction running the UPDATE was the only one active.
> If there were other transactions active too, the table grew as I'd expect.
> Is there another explanation for this that I've missed?
In 8.3 that's not unexpected: once you have two entries in a HOT chain
then a later update can reclaim the dead one and re-use its space.
(HOT can do that without any intervening VACUUM because only within-page
changes are needed.) However, that only works when the older one is in
fact dead to all observers; otherwise it has to be kept around, so the
update chain grows.
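
A minimal sketch of the effect described above (the table and column names are illustrative, not from the thread; the reduced fillfactor simply leaves free space on each page so updates can stay on-page and qualify for HOT):

    -- Create a dummy table with in-page slack for HOT updates.
    CREATE TABLE hot_demo (id int, filler text) WITH (fillfactor = 50);
    INSERT INTO hot_demo
        SELECT g, repeat('x', 100) FROM generate_series(1, 10000) g;

    SELECT pg_relation_size('hot_demo');   -- size after the initial load

    UPDATE hot_demo SET filler = repeat('y', 100);
    UPDATE hot_demo SET filler = repeat('z', 100);

    SELECT pg_relation_size('hot_demo');   -- stays roughly flat as long as
                                           -- no other transaction still
                                           -- needs the dead row versions

    SELECT n_tup_upd, n_tup_hot_upd        -- how many updates were HOT
    FROM pg_stat_user_tables
    WHERE relname = 'hot_demo';

If a second session holds an open transaction with an older snapshot while the UPDATEs run, the dead versions cannot be reclaimed and pg_relation_size should grow, matching Craig's observation.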
regards, tom lane