From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: "Pavel Stehule" <pavel(dot)stehule(at)gmail(dot)com>, "PostgreSQL Hackers" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: hot update doesn't work?
Date: 2010-05-12 15:47:46
Message-ID: 6205.1273679266@sss.pgh.pa.us
Lists: pgsql-hackers
"Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov> writes:
> You're updating the row 100000 times within a single transaction. I
> don't *think* HOT will reclaim a version of a row until the
> transaction which completed it is done and no other transactions can
> see that version any longer. It does raise the question, though --
> couldn't a HOT update of a tuple *which was written by the same
> transaction* do an "update in place"?
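[For concreteness, here is a minimal sketch of the kind of test under discussion. The table definition, column names, and loop count are assumptions for illustration, not taken from the original report:

-- Hypothetical repro: many updates of one row inside a single
-- transaction.  Only the non-indexed column n changes, so the
-- updates are HOT-eligible (when there is room on the page).
CREATE TABLE t (id int PRIMARY KEY, n int);
INSERT INTO t VALUES (1, 0);

BEGIN;
DO $$
BEGIN
  FOR i IN 1..100000 LOOP
    UPDATE t SET n = i WHERE id = 1;  -- adds another HOT chain member
  END LOOP;
END;
$$;
-- At this point none of the dead row versions can be pruned: the
-- deleting transaction (this one) is still in progress.
COMMIT;
]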
Well ... in the first place there is not, ever, any such thing as
"update in place". The correct question to ask is whether we could
vacuum away the older elements of the HOT chain on the grounds that they
are no longer of interest. What we would see is tuples with xmin equal
to xmax and cmin different from cmax. The problem then is to determine
whether there are any live snapshots with curcid between cmin and cmax.
There is 0 hope of doing that from outside the originating backend.
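[For illustration, the contrib pageinspect module can make this state visible from within the originating session. A sketch, reusing the hypothetical table t from above:

BEGIN;
UPDATE t SET n = n + 1 WHERE id = 1;
UPDATE t SET n = n + 1 WHERE id = 1;

-- heap_page_items shows raw tuple headers; t_field3 carries the
-- command ID (cmin and cmax share this header field, stored as a
-- combo CID when both are set by the same transaction).
SELECT lp, t_xmin, t_xmax, t_field3 AS cid, t_ctid
  FROM heap_page_items(get_raw_page('t', 0));
-- The superseded chain members show t_xmin = t_xmax: inserted and
-- deleted by this same, still-open transaction, distinguished only
-- by command ID.
COMMIT;
]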
Now, if heap_page_prune() is being run by the same backend that
generated the in-doubt tuples, which I will agree is likely in a case
like this, then in principle we could make that determination. I'm not
sure it's really worth the trouble, or the nonorthogonal behavior it
would introduce.
regards, tom lane