From: "Tambet Matiisen" <t(dot)matiisen(at)aprote(dot)ee>
To: "Richard Huxton" <dev(at)archonet(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: One tuple per transaction
Date: 2005-03-15 10:24:49
Message-ID: A66A11DBF5525341AEF6B8DE39CDE770088055@black.aprote.com
Lists: pgsql-performance
> -----Original Message-----
> From: Richard Huxton [mailto:dev(at)archonet(dot)com]
> Sent: Tuesday, March 15, 2005 11:38 AM
> To: Tambet Matiisen
> Cc: pgsql-performance(at)postgresql(dot)org
> Subject: Re: [PERFORM] One tuple per transaction
>
...
>
> > Consider the often suggested solution for speeding up
> > "select count(*) from table" query: make another table rowcounts and
> > for each of the original tables add insert and delete triggers to
> > update row count in rowcounts table. Actually this is standard
> > denormalization technique, which I use often. For example to ensure
> > that order.total = sum(order_line.total).
>
> This does of course completely destroy concurrency. Since you need to
> lock the summary table, other clients have to wait until you are done.
>
Yes, it does for the rowcounts table. But consider the orders example -
it only locks the one order to which I am adding lines. As there is
mostly one client dealing with a given order, but possibly thousands
dealing with different orders, it should not pose any concurrency
restrictions.
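To make the orders example concrete, here is a minimal sketch of such a
denormalization trigger. The schema is an assumption for illustration
only - tables orders(id, total) and order_line(order_id, amount) do not
appear in the thread - but it shows why only the single parent order row
gets locked:

```sql
-- Illustrative sketch: table and column names are assumed, not from the thread.
-- Keeps orders.total equal to sum(order_line.amount) for each order.
CREATE OR REPLACE FUNCTION order_line_total_trg() RETURNS trigger AS $$
BEGIN
    IF TG_OP IN ('INSERT', 'UPDATE') THEN
        UPDATE orders SET total = total + NEW.amount WHERE id = NEW.order_id;
    END IF;
    IF TG_OP IN ('UPDATE', 'DELETE') THEN
        UPDATE orders SET total = total - OLD.amount WHERE id = OLD.order_id;
    END IF;
    RETURN NULL;  -- result ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER order_line_total
    AFTER INSERT OR UPDATE OR DELETE ON order_line
    FOR EACH ROW EXECUTE PROCEDURE order_line_total_trg();
```

Each UPDATE of orders takes a row-level lock on that one order only, so
transactions working on different orders never block each other - unlike
a single shared rowcounts row, which every transaction must lock.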
> > Now, if typical inserts into your most active table occur in batches
> > of 3 rows, in one transaction, then row count for this table is
> > updated 3 times during transaction. 3 updates generate 3 tuples,
> > while 2 of them are dead from the very start. You effectively commit
> > 2 useless tuples. After millions of inserts you end up with rowcounts
> > table having 2/3 of dead tuples and queries start to slow down.
> >
> > Current solution is to vacuum often. My proposal was to create new
> > tuple only with first update. The next updates in the same
> > transaction would update the existing tuple, not create a new one.
>
> How do you roll back to a savepoint with this model?
>
Every savepoint initiates a new (sub)transaction, so the first update
after a savepoint would still create a new tuple; rolling back to the
savepoint simply discards that subtransaction's tuple.
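A hedged sketch of how subtransaction boundaries would delimit the
in-place updates under the proposal (the rowcounts schema is assumed,
and the comments describe the *proposed* behaviour, not what PostgreSQL
actually does today):

```sql
BEGIN;
UPDATE rowcounts SET n = n + 1 WHERE tbl = 'orders';  -- first update: new tuple version
UPDATE rowcounts SET n = n + 1 WHERE tbl = 'orders';  -- same transaction: would reuse that tuple
SAVEPOINT s1;                                         -- starts a subtransaction
UPDATE rowcounts SET n = n + 1 WHERE tbl = 'orders';  -- first update in subtransaction: new tuple again
ROLLBACK TO SAVEPOINT s1;                             -- discards only the subtransaction's tuple
COMMIT;                                               -- net effect: n increased by 2
```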
Tambet