| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Michal Szymanski <szymanskim(at)datera(dot)pl> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: Big problem with sql update operation |
| Date: | 2007-05-25 14:28:38 |
| Message-ID: | 8751.1180103318@sss.pgh.pa.us |
| Lists: | pgsql-performance |
Michal Szymanski <szymanskim(at)datera(dot)pl> writes:
> CREATE OR REPLACE FUNCTION test()
> RETURNS void AS
> $BODY$
> DECLARE
> BEGIN
> FOR v_i IN 1..4000 LOOP
> UPDATE group_fin_account_tst SET
> credit = v_i
> WHERE group_fin_account_tst_id = 1; -- for real procedure I update different rows
> END LOOP;
> END;
> $BODY$
> LANGUAGE plpgsql;
Does updating the *same* record 4000 times per transaction reflect the
real behavior of your application? If not, this is not a good
benchmark. If so, consider redesigning your app to avoid so many
redundant updates.
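A minimal sketch of that kind of rewrite, assuming the test table from the quoted function (the function name here is hypothetical): since only the last assignment to credit is observable after the loop, a single UPDATE carrying the final value does the same work and leaves one dead row version instead of 4000.

```sql
-- Hypothetical rewrite of the quoted test(): skip the loop and write
-- only the value the 4000th iteration would have left behind.
CREATE OR REPLACE FUNCTION test_once()
RETURNS void AS
$BODY$
BEGIN
    UPDATE group_fin_account_tst SET
        credit = 4000
    WHERE group_fin_account_tst_id = 1;
END;
$BODY$
LANGUAGE plpgsql;
```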
(For the record, the reason you see nonlinear degradation is the
accumulation of tentatively-dead versions of the row, each of which has
to be rechecked by each later update.)
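One way to watch those dead versions pile up (a sketch, not from the original thread; the exact wording of the VACUUM VERBOSE report varies across PostgreSQL versions):

```sql
-- Run the 4000-update loop, then ask VACUUM to report what it finds.
SELECT test();
VACUUM VERBOSE group_fin_account_tst;
-- The report should show on the order of 4000 dead row versions for
-- this table: one per superseded UPDATE from the loop.
```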
regards, tom lane