Re: Performance slowing down when doing same UPDATE many times

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Jan Strube <js(at)deriva(dot)de>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Performance slowing down when doing same UPDATE many times
Date: 2015-03-09 08:02:58
Message-ID: CAFj8pRAyXKXzik5Sb91o--qKkg2twcnmTAAbVXp7qvTaZPOObQ@mail.gmail.com
Lists: pgsql-general

Hi

It is a side effect of the MVCC implementation in Postgres: every UPDATE
creates a new row version, and the superseded versions cannot be vacuumed
away inside an open transaction.
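
To see what is happening, here is a minimal sketch along the lines of Jan's
test case (the pg_relation_size and ctid checks are only my illustration, not
part of his script):

BEGIN;
CREATE TEMP TABLE test (id integer PRIMARY KEY, flag boolean DEFAULT false);
INSERT INTO test (id) SELECT generate_series(1, 100000);

SELECT pg_relation_size('test');          -- heap size before the repeated UPDATEs

-- Each execution of the same UPDATE writes a new row version; none of the
-- superseded versions can be pruned or vacuumed while this transaction is open.
UPDATE test SET flag = true WHERE id = 12345;
UPDATE test SET flag = true WHERE id = 12345;
-- ... repeat a few thousand times ...

SELECT pg_relation_size('test');          -- noticeably larger after many repetitions
SELECT ctid FROM test WHERE id = 12345;   -- the visible tuple has moved with every UPDATE
ROLLBACK;

Every further UPDATE of id = 12345 has to look at more and more of these
accumulated row versions, which is why it gets slower and slower.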

If you need this pattern, then you should use a different database - Postgres
doesn't work well when one record is updated very often inside a single
transaction. The usual solution with Postgres is some proxy that works like a
write cache and coalesces repeated writes to the same record before they
reach the database.
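
For example (only a sketch of that idea, with a made-up staging table
pending_flags and assuming the test table already exists; a real write cache
would usually live in the application or a proxy layer): keep just the latest
intended value during the transaction and apply it to the real table once, so
only one new row version is created instead of thousands.

BEGIN;
CREATE TEMP TABLE pending_flags (id integer PRIMARY KEY, flag boolean) ON COMMIT DROP;

-- during the transaction: remember only the latest intended value per row
DELETE FROM pending_flags WHERE id = 12345;
INSERT INTO pending_flags VALUES (12345, true);

-- shortly before commit: one real UPDATE per affected row
UPDATE test t
SET    flag = p.flag
FROM   pending_flags p
WHERE  t.id = p.id;
COMMIT;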

Regards

Pavel

2015-03-09 8:47 GMT+01:00 Jan Strube <js(at)deriva(dot)de>:

> Hi,
>
>
>
> does no one have an idea?
>
> It may be a rare case to do the same UPDATE a thousand times. But I'm
> really interested in why this does not happen when doing DIFFERENT updates,
> and, of course, in whether something could be done on the database side to
> prevent this behavior in case some application developer makes the same
> “mistake” again.
>
>
>
> Thanks
>
> Jan
>
>
>
>
>
> *From:* Jan Strube
> *Sent:* Tuesday, February 10, 2015 12:03 PM
> *To:* 'pgsql-general(at)postgresql(dot)org'
> *Subject:* Performance slowing down when doing same UPDATE many times
>
>
>
> Hi,
>
>
>
> we recently found a bug in one of our applications that was executing exactly
> the same UPDATE statement a few thousand times inside a transaction. This
> caused the UPDATEs to become slower and slower, from a few milliseconds to
> several seconds. We have already fixed the application, but I am wondering
> whether this might be a PostgreSQL bug, too.
>
>
>
> Here is a simple test case that performs and benchmarks 100,000 UPDATEs
> (benchmarking only every 10,000th to reduce output):
>
>
>
> BEGIN;
>
> CREATE TEMP TABLE test (id integer PRIMARY KEY, flag boolean DEFAULT false);
> INSERT INTO test (id) SELECT generate_series(1, 100000);
>
> DO $$
> DECLARE
>     s timestamp;
>     e timestamp;
> BEGIN
>     FOR i IN 1..100000 LOOP
>         SELECT clock_timestamp() INTO s;
>         UPDATE test SET flag = true WHERE id = 12345;
>         SELECT clock_timestamp() INTO e;
>
>         IF i % 10000 = 0 THEN
>             RAISE NOTICE '%', e - s;
>         END IF;
>     END LOOP;
> END $$;
>
> ROLLBACK;
>
>
>
> The output looks like this:
>
> NOTICE: 00:00:00.000525
> NOTICE: 00:00:00.000992
> NOTICE: 00:00:00.001404
> NOTICE: 00:00:00.001936
> NOTICE: 00:00:00.002374
> NOTICE: 00:00:00.002925
> NOTICE: 00:00:00.003525
> NOTICE: 00:00:00.004015
> NOTICE: 00:00:00.00453
> NOTICE: 00:00:00.004976
>
>
>
> The problem only occurs inside a transaction and only if the same row is
> updated repeatedly. I'm using PostgreSQL 9.1.15.
>
>
>
> Jan
>
>
>
