From: Dennis Bjorklund <db(at)zigo(dot)dhs(dot)org>
To: Adam Pritchard <vesper76(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: many updates to single row in single transaction
Date: 2005-07-09 11:18:11
Message-ID: Pine.LNX.4.44.0507091311040.4501-100000@zigo.dhs.org
Lists: pgsql-general
On 8 Jul 2005, Adam Pritchard wrote:
> so I have a singleton table that keeps track of where we are in the
> pseudo-sequence. The table is very simple:
>
> CREATE TABLE t ( next BIGINT );
> INSERT INTO t ( next ) VALUES ( 0 );
>
> UPDATE t SET next = $1; -- $1 = next+1
>
> The problem is that the performance of those operations is pretty slow
>
> 1000: 891ms
> 2000: 1296ms
> 3000: 1735ms
> 4000: 2312ms
> 5000: 2844ms
> 6000: 3328ms
> 7000: 3875ms
> 8000: 4531ms
> 9000: 4875ms
Looks like you need to vacuum that table very, very often. Under
PostgreSQL's MVCC, each update to the counter leaves behind a dead row
version, so after N updates the single live row is buried under N dead
ones -- which is why each batch of 1000 updates takes longer than the
last. You can't run VACUUM inside a transaction, but maybe that is not a
problem in your case.
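As a rough sketch, the maintenance could be as simple as running this
from a session outside the transaction, between batches (using the
table name t from your example):

    VACUUM t;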
For bulk loading inside a transaction you might instead lock the table
to keep other sessions out, insert your 1000 rows, and then do a single:
UPDATE t SET next = next + 1000;
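Put together, the whole batch might look something along these lines
(EXCLUSIVE MODE is one reasonable lock strength here; pick whatever
matches your concurrency needs):

    BEGIN;
    LOCK TABLE t IN EXCLUSIVE MODE;  -- keep other writers out for the batch
    -- ... insert the 1000 rows here ...
    UPDATE t SET next = next + 1000; -- one update, so only one dead row
    COMMIT;

That way the counter row is updated once per batch instead of once per
inserted row.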
--
/Dennis Björklund