From: Robert Treat <xzilla(at)users(dot)sourceforge(dot)net>
To: Richard Huxton <dev(at)archonet(dot)com>
Cc: Tambet Matiisen <t(dot)matiisen(at)aprote(dot)ee>, pgsql-performance(at)postgresql(dot)org
Subject: Re: One tuple per transaction
Date: 2005-03-15 21:52:30
Message-ID: 200503151652.30266.xzilla@users.sourceforge.net
Lists: pgsql-performance
On Tuesday 15 March 2005 04:37, Richard Huxton wrote:
> Tambet Matiisen wrote:
> > Now, if typical inserts into your most active table occur in batches of
> > 3 rows, in one transaction, then row count for this table is updated 3
> > times during transaction. 3 updates generate 3 tuples, while 2 of them
> > are dead from the very start. You effectively commit 2 useless tuples.
> > After millions of inserts you end up with rowcounts table having 2/3 of
> > dead tuples and queries start to slow down.
> >
> > Current solution is to vacuum often. My proposal was to create new tuple
> > only with first update. The next updates in the same transaction would
> > update the existing tuple, not create a new.
>
> How do you roll back to a savepoint with this model?
>
You can't, but you could add the caveat that this auto-reuse only happens within
any given nested transaction. Then, as long as you aren't using savepoints, you
get to reclaim all the space.
On a similar note I was just wondering if it would be possible to mark any of
these dead tuples as ready to be reused at transaction commit time, since we
know that they are dead to any and all other transactions currently going on.
This would save you from having to vacuum to get the tuples marked ready for
reuse. In the above scenario this could be a win; whether it would be a win
overall is hard to say.
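
For reference, a minimal sketch of the kind of setup Tambet describes, assuming
a hypothetical rowcounts summary table kept current by an AFTER INSERT trigger
(the table, function, and trigger names here are illustrative only, not taken
from his post):

    -- hypothetical summary table maintained by a trigger
    CREATE TABLE items (id serial PRIMARY KEY, payload text);
    CREATE TABLE rowcounts (tablename text PRIMARY KEY, n bigint);
    INSERT INTO rowcounts VALUES ('items', 0);

    CREATE FUNCTION bump_rowcount() RETURNS trigger AS $$
    BEGIN
        UPDATE rowcounts SET n = n + 1 WHERE tablename = 'items';
        RETURN NEW;
    END;
    $$ LANGUAGE plpgsql;

    CREATE TRIGGER items_rowcount AFTER INSERT ON items
        FOR EACH ROW EXECUTE PROCEDURE bump_rowcount();

    -- a batch of three inserts in one transaction fires the trigger three
    -- times, so the single rowcounts row is rewritten three times; only
    -- the last version is ever visible, the other two are dead on arrival
    BEGIN;
    INSERT INTO items (payload) VALUES ('a');
    INSERT INTO items (payload) VALUES ('b');
    INSERT INTO items (payload) VALUES ('c');
    COMMIT;

Until vacuum runs, those two dead versions of the rowcounts row just sit there,
which is exactly the bloat being discussed above.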
--
Robert Treat
Build A Brighter Lamp :: Linux Apache {middleware} PostgreSQL