From: "Kevin Grittner" <kgrittn(at)mail(dot)com>
To: "Greg Williamson" <gwilliamson39(at)yahoo(dot)com>
Cc: "pgsql" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Enabling Autovacuum Postgres 9.1 (was Unexpectedly high disk space usage)
Date: 2012-11-13 14:28:26
Message-ID: 20121113142826.90170@gmx.com
Lists: pgsql-general
Greg Williamson wrote:
>>>> running transactions can cause autovacuum processes to stall
>>>> out or be autocancelled. "Long running transactions" - how
>>>> long? In our system it's rare to have a transaction (even a
>>>> prepared transaction) last much longer than a few minutes. Is
>>>> that enough time to cause problems with AutoVacuum?
>>
>> The only situation where I would expect that to be a problem is in
>> a very small table which is updated hundreds of times per second.
>
> Could you elaborate on this, or point me to a previous thread ?

I had a situation where there was a need for consecutive numbers
(i.e., no gaps, to satisfy financial auditors), so we needed
assignment of these numbers to be transactional rather than using
SEQUENCE or SERIAL objects. There was a very small table for
assigning these numbers, which was very frequently updated. In a
quiescent state all rows in the table would fit in one page. Before
tuning autovacuum to be more aggressive, the table bloated, causing
performance to fall off. Then autovacuum would kick in and it would
get even worse. So naturally, my first response was to make
autovacuum less aggressive, which caused the table to bloat even more
under normal load, and caused autovacuum to have an even bigger
impact when it did kick in. The table bloated to thousands of pages.
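
For illustration, the setup was roughly like this (table and column
names here are invented, not the actual schema):

```sql
-- One row per number series; the whole table fits in one page
-- when it is not bloated.
CREATE TABLE receipt_no (
    series   text PRIMARY KEY,
    last_val bigint NOT NULL
);

-- Assign the next number inside the caller's transaction. The row
-- lock serializes concurrent assignments, and a rollback releases
-- the number, so the sequence stays gap-free -- unlike a SEQUENCE,
-- which never gives back a value once it has handed it out.
UPDATE receipt_no
   SET last_val = last_val + 1
 WHERE series = 'invoices'
RETURNING last_val;
```

Every such UPDATE leaves behind a dead row version, which is why the
table bloats so quickly under hundreds of updates per second.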
Then I tried the opposite approach: I cranked up autovacuum to be
very aggressive. Under normal load the table settled in at five to
ten pages and performance was great. However, any long-running
transaction could cause some bloat, so a big report could still cause
this one table to become a performance problem. I found that a
CLUSTER ran sub-second, because autovacuum did pare the index down to
just the needed entries, so I set up a crontab job to CLUSTER this
one table a few times per day. That worked out great for me.
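
As a sketch of what "very aggressive" means here (the exact numbers
are illustrative, not the values I used), per-table autovacuum
storage parameters let you target just the hot table:

```sql
-- Vacuum after a handful of updates rather than waiting for the
-- default 20% of the table to be dead, and don't throttle the
-- vacuum with cost-based delays.
ALTER TABLE receipt_no SET (
    autovacuum_vacuum_scale_factor = 0,
    autovacuum_vacuum_threshold    = 10,
    autovacuum_vacuum_cost_delay   = 0
);
```

The crontab entry was along these lines (database name and schedule
invented):

```sql
-- In crontab, e.g. every four hours:
--   0 */4 * * * psql -c 'CLUSTER receipt_no' mydb
-- CLUSTER rewrites the table in index order and takes an exclusive
-- lock, but on a table this small it completes in well under a second.
CLUSTER receipt_no;
```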
I think the trick is to try to make autovacuum keep up as much as
possible, identify any issues it is not handling, and narrowly target
those particular areas with extraordinary maintenance.
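
One way to spot the tables autovacuum is not keeping up with is to
check the statistics views, for example:

```sql
-- Tables with the most dead row versions, and when autovacuum
-- last got to them. Persistent large n_dead_tup relative to
-- n_live_tup suggests a candidate for targeted maintenance.
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum
  FROM pg_stat_user_tables
 ORDER BY n_dead_tup DESC
 LIMIT 10;
```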
-Kevin