From: Robert Klemme <shortcutter(at)googlemail(dot)com>
To: "ktm(at)rice(dot)edu" <ktm(at)rice(dot)edu>
Cc: lars <lhofhansl(at)yahoo(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: UPDATEDs slowing SELECTs in a fully cached database
Date: 2011-07-11 15:26:49
Message-ID: CAM9pMnMG42aUFbcn6LzSRrNrGX7_k5o8dS-DgExwXXKa-HZGEA@mail.gmail.com
Lists: pgsql-performance
On Mon, Jul 11, 2011 at 3:13 PM, ktm(at)rice(dot)edu <ktm(at)rice(dot)edu> wrote:
> I do not know if this makes sense in PostgreSQL, given that readers
> do not block writers and writers do not block readers. Are your
> UPDATEs to individual rows, each in a separate transaction, or
> do you UPDATE multiple rows in the same transaction? If you
> perform multiple updates in a single transaction, you are
> synchronizing the changes to that set of rows and that constraint
> is causing other readers that need to get the correct values post-
> transaction to wait until the COMMIT completes. This means that
> the WAL write must be completed.
Which readers would those be? The docs explicitly state that readers
are never blocked by writers:
http://www.postgresql.org/docs/9.0/interactive/mvcc-intro.html
http://www.postgresql.org/docs/9.0/interactive/mvcc.html
From what I understand about this issue, the observed effect must be
caused by the implementation and not by a conceptual issue with
transactions.
> Have you tried disabling synchronous_commit? If this scenario
> holds, you should be able to reduce the slowdown by un-batching
> your UPDATEs, as counter-intuitive as that is. This seems to
> be similar to a problem that I have been looking at with using
> PostgreSQL as the backend to a Bayesian engine. I am following
> this thread with interest.
I don't think this will help (see above). Also, I would be very
cautious about doing this: although the client may get a faster
acknowledgement, the DB still has to do the same work as with
synchronous_commit enabled (i.e. WAL, checkpointing etc.), and the
un-batched version forces it to process significantly more
transactions than the batched one.
Typically there is an optimum batch size: if the batch size is too
small (say, one row), the ratio of transaction overhead to useful work
is too high. If the batch size is too large (say, millions of rows),
you hit resource limitations (memory) which inevitably force the RDBMS
to do additional disk IO.
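The trade-off can be sketched with a toy cost model. All constants here
are illustrative assumptions (not measurements of PostgreSQL), chosen
only to show that per-transaction overhead dominates at tiny batch
sizes while memory spill dominates at huge ones:

```python
# Toy cost model for the batch-size trade-off described above.
# The constants are made-up assumptions for illustration only.

TX_OVERHEAD = 5.0      # fixed cost per transaction (commit, WAL flush, ...)
ROW_COST = 1.0         # cost to update one row
MEMORY_LIMIT = 10_000  # rows that fit comfortably in memory (assumed)
SPILL_PENALTY = 0.5    # extra per-row cost once a batch spills to disk

def total_cost(n_rows: int, batch_size: int) -> float:
    """Estimated cost of updating n_rows in batches of batch_size."""
    n_tx = -(-n_rows // batch_size)  # ceiling division: number of transactions
    cost = n_tx * TX_OVERHEAD + n_rows * ROW_COST
    if batch_size > MEMORY_LIMIT:
        cost += n_rows * SPILL_PENALTY  # additional disk IO for huge batches
    return cost

n = 1_000_000
costs = {b: total_cost(n, b) for b in (1, 100, 10_000, 1_000_000)}
best = min(costs, key=costs.get)
# batch size 1 pays TX_OVERHEAD once per row; batch size 1_000_000 pays
# the spill penalty; the optimum lies somewhere in between.
```

Under these assumptions the minimum falls at a moderate batch size,
which matches the intuition in the paragraph above; the real optimum
for a given workload has to be measured.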
Kind regards
robert
--
remember.guy do |as, often| as.you_can - without end
http://blog.rubybestpractices.com/