Re: postgres getting slow under heavy load though autovacuum is enabled

From: Dimitri Fontaine <dfontaine(at)hi-media(dot)com>
To: Bill Moran <wmoran(at)potentialtech(dot)com>
Cc: "tamanna madaan" <tamanna(dot)madan(at)globallogic(dot)com>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: postgres getting slow under heavy load though autovacuum is enabled
Date: 2009-06-09 15:45:29
Message-ID: 87my8hjtra.fsf@hi-media-techno.com
Lists: pgsql-general

Bill Moran <wmoran(at)potentialtech(dot)com> writes:

> In response to "tamanna madaan" <tamanna(dot)madan(at)globallogic(dot)com>:
>>
>> I am using postgres 8.1.2 with slony 1.1.5 for replication between
>> two nodes. A very high number of db operations (2.8 million inserts,
>> 1.4 million updates and 450,000 deletes) is done on the db in a
>> single transaction, and this is repeated 5-6 times a day at an
>> interval of roughly 2 hours. This process has been running for 5
>> consecutive days. It is observed that the db is getting very slow
>> over time. The number of dead tuples in pg_listener, sl_log_1 and
>> sl_seqlog keeps growing, even though I have autovacuum enabled and
>> the slony related tables (sl_log_1, sl_seqlog etc.) are configured
>> not to be processed by autovacuum. Please let me know what could be
>> the reason for the growing dead tuples in these tables and for
>> postgres getting slow.
>
> The slony docs state, and I quote:
> "Long running transactions are Evil"
> http://slony.info/documentation/slonyadmin.html#BESTPRACTICES
>
> I'm going to guess that a single transaction with several million
> tuple changes is about as evil as it gets.
>
> This is a known shortcoming of Slony. You're going to need a
> carefully tuned vacuum, a well designed schema, and adequate hardware
> to keep a system like that healthy. That is, if you can't figure out
> a way to avoid the huge transactions.
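
On the "carefully tuned vacuum" point: since the Slony tables are kept
out of autovacuum here, the usual workaround is to vacuum them
explicitly between the big batches, something along these lines (only a
sketch -- "_mycluster" stands for whatever your actual Slony cluster
schema is called):

  VACUUM ANALYZE pg_listener;
  VACUUM ANALYZE _mycluster.sl_log_1;
  VACUUM ANALYZE _mycluster.sl_seqlog;

Run from cron between the batch runs, that keeps the dead tuples from
piling up for days; note that VACUUM cannot reclaim anything while one
of those huge transactions is still open.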

Ok, if we're going to look at the environment leading to the problem, I
have to point to this:
http://wiki.postgresql.org/wiki/Londiste_Tutorial#Londiste_is_eating_all_my_CPU_and_lag_is_raising

With Skytools (Londiste), you can set pgq_lazy_fetch so that the
replicas use cursors to consume big batches of events, and it keeps
running smoothly.
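
In the consumer's ini file that is just one extra line; the names below
are made up, only there to show where the setting goes:

  [londiste]
  job_name        = subscriber_node
  provider_db     = dbname=provider
  subscriber_db   = dbname=subscriber
  pgq_queue_name  = replication_queue
  logfile         = /var/log/londiste/%(job_name)s.log
  pidfile         = /var/run/londiste/%(job_name)s.pid

  # fetch events from the queue through a cursor, 500 at a time,
  # rather than pulling a whole huge batch into memory at once
  pgq_lazy_fetch  = 500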

HTH, Regards,
--
dim
