Re: Performance slowing down when doing same UPDATE many times

From: Chris Mair <chris(at)1006(dot)org>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Cc: Jan Strube <js(at)deriva(dot)de>
Subject: Re: Performance slowing down when doing same UPDATE many times
Date: 2015-03-09 08:14:41
Message-ID: 7edca6cca738292abb8d6562c52900fa@smtp.hushmail.com
Lists: pgsql-general

> Hi,
>
>
> does no one have an idea?
>
> It may be a rare case, doing the same UPDATE a thousand times. But I'm really interested why this is not happening when doing DIFFERENT updates. And, of course, whether something could be done on the database side to prevent this behavior in case some application developer makes the same “mistake” again.
>
>
> Thanks
>
> Jan

Hi,

in an UPDATE operation PostgreSQL has to create a new tuple and mark the old one as dead.
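
A quick way to watch those dead row versions pile up (a minimal sketch; the
table "t" and the values are made up, not part of your test):

  CREATE TABLE t (id int PRIMARY KEY, flag boolean DEFAULT false);
  INSERT INTO t VALUES (1);

  UPDATE t SET flag = true  WHERE id = 1;
  UPDATE t SET flag = false WHERE id = 1;

  -- each UPDATE left one dead row version behind; note that the stats
  -- collector reports these numbers with a slight delay
  SELECT n_tup_upd, n_tup_hot_upd, n_dead_tup
    FROM pg_stat_user_tables
   WHERE relname = 't';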

A long time ago, what you see here was very common: for a heavy update load, frequent vacuum
was recommended. Then, in 8.3, a feature called HOT (heap-only tuples) was introduced that
did away with this problem.

I'm not 100% sure what happens in your case, but I think the problem is that the updates
are all in the *same* transaction: the old row versions are still visible to the open
transaction, so they cannot be pruned, and each new UPDATE has to walk an ever longer
chain of them. That is indeed a rare situation.
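
As a cross-check, you could drive the same statement as separate transactions,
e.g. with pgbench (assuming a regular, non-temp copy of your test table exists;
the script file name and database name here are made up):

  $ echo "UPDATE test SET flag = true WHERE id = 12345;" > same-update.sql
  $ pgbench -n -f same-update.sql -t 100000 yourdb

Here every UPDATE commits on its own, so HOT pruning can clean up the dead
versions in between and the latency should stay flat instead of growing.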

Bye,
Chris.

>
>
>
> From: Jan Strube
> Sent: Tuesday, February 10, 2015 12:03 PM
> To: 'pgsql-general(at)postgresql(dot)org'
> Subject: Performance slowing down when doing same UPDATE many times
>
>
> Hi,
>
>
> we recently found a bug in one of our applications which was doing exactly the same UPDATE operation a few thousand times inside a transaction. This caused the UPDATEs to become slower and slower, going from a few milliseconds to several seconds each. We already fixed the application, but I am wondering if this might be a PostgreSQL bug, too.
>
>
> Here is a simple test case that performs and benchmarks 100,000 UPDATEs (benchmarking only every 10,000th to reduce output):
>
>
> BEGIN;
>
> CREATE TEMP TABLE test (id integer PRIMARY KEY, flag boolean DEFAULT false);
> INSERT INTO test (id) SELECT generate_series(1, 100000);
>
> DO $$
> DECLARE
>     s timestamp;
>     e timestamp;
> BEGIN
>     FOR i IN 1..100000 LOOP
>         SELECT clock_timestamp() INTO s;
>         UPDATE test SET flag = true WHERE id = 12345;
>         SELECT clock_timestamp() INTO e;
>
>         IF i % 10000 = 0 THEN
>             RAISE NOTICE '%', e - s;
>         END IF;
>     END LOOP;
> END $$;
>
> ROLLBACK;
>
>
> The output looks like this:
>
>
> NOTICE: 00:00:00.000525
> NOTICE: 00:00:00.000992
> NOTICE: 00:00:00.001404
> NOTICE: 00:00:00.001936
> NOTICE: 00:00:00.002374
> NOTICE: 00:00:00.002925
> NOTICE: 00:00:00.003525
> NOTICE: 00:00:00.004015
> NOTICE: 00:00:00.00453
> NOTICE: 00:00:00.004976
>
>
> The problem only occurs inside a transaction and only if the same row is updated. I'm using PostgreSQL 9.1.15.
>
>
> Jan
>
>
>
