Re: One tuple per transaction

From: Richard Huxton <dev(at)archonet(dot)com>
To: Tambet Matiisen <t(dot)matiisen(at)aprote(dot)ee>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: One tuple per transaction
Date: 2005-03-15 09:37:41
Message-ID: 4236ACE5.1040004@archonet.com
Lists: pgsql-performance

Tambet Matiisen wrote:
>>
>>Not exactly. The dead tuple in the index will be scanned the
>>first time (and the heap tuple it points to as well), then we will
>>mark it dead, so the next time we come here we will know that
>>the index tuple actually points to a useless tuple, and we
>>will not scan the heap tuple it points to.
>>
>
>
> But the dead index tuple will still be read from disk next time? The
> performance loss may well be negligible, but if most of the tuples in
> your table/index are dead, then it might be significant.

When a block is read from disk, any dead tuples in that block will be
read in. Vacuum recovers these.
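
For example, a plain vacuum marks that space reusable, and a full
vacuum plus reindex actually shrinks the files (the table name below
is hypothetical):

  VACUUM ANALYZE mytable;   -- reclaims dead tuples for reuse, refreshes stats
  -- or, more aggressively, compact the table and rebuild its indexes:
  VACUUM FULL mytable;
  REINDEX TABLE mytable;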

> Consider the often-suggested solution for speeding up the "select
> count(*) from table" query: create another table, rowcounts, and for
> each of the original tables add insert and delete triggers to update
> the row count in the rowcounts table. This is actually a standard
> denormalization technique, which I use often, for example to ensure
> that order.total = sum(order_line.total).

This does of course completely destroy concurrency. Since you need to
lock the summary table, other clients have to wait until you are done.
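
For the record, something like this (an untested sketch; the "orders"
table and all names are purely illustrative):

  CREATE TABLE rowcounts (
      table_name text PRIMARY KEY,
      n_rows     bigint NOT NULL
  );
  INSERT INTO rowcounts VALUES ('orders', 0);

  CREATE FUNCTION orders_rowcount() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'INSERT' THEN
          UPDATE rowcounts SET n_rows = n_rows + 1
           WHERE table_name = 'orders';
      ELSE  -- DELETE
          UPDATE rowcounts SET n_rows = n_rows - 1
           WHERE table_name = 'orders';
      END IF;
      RETURN NULL;  -- return value is ignored for AFTER triggers
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER orders_rowcount_trg
      AFTER INSERT OR DELETE ON orders
      FOR EACH ROW EXECUTE PROCEDURE orders_rowcount();

Each of those UPDATEs takes a row-level lock on the 'orders' row of
rowcounts that is held until the transaction commits, so concurrent
writers to orders queue up behind it.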

> Now, if typical inserts into your most active table occur in batches
> of 3 rows in one transaction, then the row count for this table is
> updated 3 times during the transaction. Those 3 updates generate 3
> tuples, 2 of which are dead from the very start, so you effectively
> commit 2 useless tuples. After millions of inserts you end up with a
> rowcounts table where 2/3 of the tuples are dead, and queries start
> to slow down.
>
> The current solution is to vacuum often. My proposal was to create a
> new tuple only on the first update; subsequent updates in the same
> transaction would update the existing tuple instead of creating a
> new one.

How do you roll back to a savepoint with this model?
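
For example (using the same hypothetical rowcounts table as above),
the intermediate tuple version is exactly what a savepoint rollback
needs:

  BEGIN;
  UPDATE rowcounts SET n_rows = n_rows + 1 WHERE table_name = 'orders';
  SAVEPOINT s1;
  UPDATE rowcounts SET n_rows = n_rows + 1 WHERE table_name = 'orders';
  ROLLBACK TO SAVEPOINT s1;  -- must see the row as it was after the first update
  COMMIT;                    -- final count reflects only the first update

If the second UPDATE had overwritten the first new version in place,
there would be no intermediate version left to roll back to.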

--
Richard Huxton
Archonet Ltd
