From: Andrew Sullivan <ajs(at)crankycanuck(dot)ca>
To: pgsql-sql(at)postgresql(dot)org
Subject: Re: A long-running transaction
Date: 2007-04-11 13:34:07
Message-ID: 20070411133407.GB22510@phlogiston.dyndns.org
Lists: pgsql-sql
On Wed, Apr 11, 2007 at 05:54:45AM +0800, John Summerfield wrote:
> Linux caches writes, I don't think it should be hitting disk at all. The
I _sure hope_ you don't care about this data, then. That's not a
real safe way to work. But. . .
> table being updated contains 7482 records (658K raw data), of which
> probably fewer than 2000 are being updated, and typically the same ones
> all the time: we're updating the date of the latest trade.
. . . this is likely your problem. The updates probably get slower
and slower. What's happening is that you're expiring a row _for each
update_, which means it _isn't_ the same row every time. This is
approximately the worst use model for PostgreSQL's MVCC approach.
Worse, though it's not the same row, you have to grovel through all
the dead rows to find the actually live one. So that's probably
what's killing you.
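
To illustrate (a sketch only -- the table and column names here are made
up, not taken from your schema): under MVCC every UPDATE writes a new row
version and leaves the old one behind as a dead tuple, and it's VACUUM
that clears those out:

    -- hypothetical table standing in for the trades table described above;
    -- each UPDATE of an existing row leaves its old version behind as a dead tuple
    UPDATE trades SET last_trade_date = now() WHERE symbol = 'XYZ';

    -- VACUUM VERBOSE reports how many dead row versions had piled up and
    -- makes that space reusable, so scans stop wading through them
    VACUUM VERBOSE trades;

Note, though, that VACUUM can't remove dead rows that a still-open
long-running transaction might be able to see, so the transaction has to
finish before the cleanup does any good.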
> Laptop (1.25 Gbytes)
> shared_buffers = 1000 # min 16 or max_connections*2, 8KB each
So you have 8000 kB (about 8 MB) configured as your shared buffers there.
That's as much shared memory as Postgres will ever use. You can
probably bump it up a little in this case. Your other config seems ok to
me. But I don't think this is your problem -- the update pattern is.
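
For example (the number is only a guess for a 1.25 GB laptop, not a
recommendation derived from your workload), bumping it in postgresql.conf
would look something like:

    # shared_buffers is counted in 8 kB buffers here; 1000 = ~8 MB.
    # Something larger, say 20000 (~160 MB), is plausible on a 1.25 GB machine,
    # though you may also need to raise the kernel's SHMMAX to allow it.
    shared_buffers = 20000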
A
--
Andrew Sullivan | ajs(at)crankycanuck(dot)ca
If they don't do anything, we don't need their acronym.
--Josh Hamilton, on the US FEMA