From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Matt Burke <mattblists(at)icritical(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slowing UPDATEs inside a transaction
Date: 2011-03-04 14:20:46
Message-ID: AANLkTikCadoAC64-R8NH+RMGtJCvDPtk59ejFLm=_d+W@mail.gmail.com
Lists: pgsql-performance
On Fri, Mar 4, 2011 at 4:21 AM, Matt Burke <mattblists(at)icritical(dot)com> wrote:
> Robert Haas wrote:
>> Old row versions have to be kept around until they're no longer of
>> interest to any still-running transaction.
>
> Thanks for the explanation.
>
> Regarding the snippet above, why would the intermediate history of
> multiply-modified uncommitted rows be of interest to anything, or is the
> current behaviour simply "cheaper" overall in terms of cpu/developer time?
Because in theory you could have a cursor open. You could open a
cursor, start to read from it, then make an update. Now the cursor
needs to see things as they were before the update.
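
The scenario described above can be sketched as a single transaction (table name `t` is hypothetical, not from the original message):

```sql
BEGIN;

DECLARE c CURSOR FOR SELECT id, val FROM t;
FETCH 10 FROM c;             -- the cursor's snapshot is fixed here

UPDATE t SET val = val + 1;  -- creates new versions of every row

FETCH 10 FROM c;             -- must still return the pre-UPDATE versions,
                             -- so the intermediate versions cannot be
                             -- discarded even though nothing is committed
COMMIT;
```

Because the second FETCH has to see the rows as they were when the cursor's snapshot was taken, the backend cannot assume its own uncommitted intermediate versions are dead.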
We might be able to do some optimization here if we had some
infrastructure to detect when a backend has no registered snapshots
with a command-ID less than the command-ID of the currently active
snapshot, but nobody's put in the effort to figure out exactly what's
involved and whether it makes sense. It's a problem I'm interested
in, but #(needed round-tuits) > #(actual round-tuits).
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company