From: | "Pierre C" <lists(at)peufeu(dot)com> |
---|---|
To: | "Jesper Krogh" <jesper(at)krogh(dot)cc>, "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | "Robert Haas" <robertmhaas(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: pessimal trivial-update performance |
Date: | 2010-07-05 10:11:38 |
Message-ID: | op.vfc7xobzeorkce@apollo13 |
Lists: | pgsql-hackers |
> The problem can generally be written as "tuples seeing multiple
> updates in the same transaction"?
>
> I think that every time PostgreSQL is used with an ORM, there is
> a certain amount of multiple updates taking place. I have actually
> been reworking the client side to get around multiple updates, since they
> popped up in one of my profiling runs. Although the time I optimized
> away ended up being both "roundtrip time" + "update time", having
> the database do half of it transparently might have been enough
> to move the bottleneck elsewhere.
>
> To sum up. Yes I think indeed it is a real-world case.
>
> Jesper
On the Python side, elixir and sqlalchemy handle this nicely: when you start a
transaction, all changes are accumulated in a "session" object and only flushed
to the database at session commit (which usually coincides with the transaction
commit). This has several advantages: the ORM can batch statements together,
each row is only updated once, you save a lot of round trips, etc.
Of course this is usually not compatible with database triggers, so
if there are triggers the ORM needs to be told about them.
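As an illustration, here is a minimal sketch of that session/unit-of-work
behaviour. It is written against a modern SQLAlchemy (1.4+) and an in-memory
SQLite database for brevity; the table and column names are made up for the
example, not anything from Jesper's schema. Many attribute assignments on the
same object collapse into a single UPDATE at flush time.

from sqlalchemy import create_engine, Column, Integer, String
from sqlalchemy.orm import declarative_base, Session

Base = declarative_base()

class Counter(Base):
    __tablename__ = "counter"
    id = Column(Integer, primary_key=True)
    value = Column(Integer, nullable=False)
    label = Column(String)

# echo=True prints the SQL actually sent, so the single UPDATE is visible.
engine = create_engine("sqlite://", echo=True)
Base.metadata.create_all(engine)

with Session(engine) as session:
    session.add(Counter(id=1, value=0))
    session.commit()

with Session(engine) as session:
    c = session.get(Counter, 1)
    # A hundred "updates" in application code...
    for i in range(100):
        c.value = i
    c.label = "done"
    # ...but only one UPDATE statement is emitted here, when the session
    # flushes at commit time.
    session.commit()

With autoflush at its default the flush can also happen earlier (for instance
before a query in the same session), which is exactly the point where triggers
and other in-database side effects can surprise you.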