Re: autovacuum big table taking hours and sometimes seconds

From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Mariel Cherkassky <mariel(dot)cherkassky(at)gmail(dot)com>, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: autovacuum big table taking hours and sometimes seconds
Date: 2019-02-15 08:06:13
Message-ID: 53d097b669514cd2628678149a87f85b7ef3db5f.camel@cybertec.at
Lists: pgsql-performance

Mariel Cherkassky wrote:
> Let's focus, for example, on one of the outputs:
> postgresql-Fri.log:2019-02-08 05:05:53 EST 24776 LOG: automatic vacuum of table "myDB.pg_toast.pg_toast_1958391": index scans: 8
> postgresql-Fri.log- pages: 2253 removed, 13737828 remain
> postgresql-Fri.log- tuples: 21759258 removed, 27324090 remain
> postgresql-Fri.log- buffer usage: 15031267 hits, 21081633 misses, 19274530 dirtied
> postgresql-Fri.log- avg read rate: 2.700 MiB/s, avg write rate: 2.469 MiB/s
>
> The cost_limit is set to 200 (default) and the cost_delay is set to 20ms.
> The calculation I did: (1*15031267 + 10*21081633 + 20*19274530) / 200 * 20 / 1000 = 61133.8197 seconds ~ 17 h
> So autovacuum was sleeping for 17 hours? I think I should raise the cost_limit to the maximum specifically for the TOAST table. What do you think? Am I wrong here?
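
The estimate follows directly from PostgreSQL's cost-based vacuum delay: each buffer hit, buffer miss and dirtied page adds vacuum_cost_page_hit, vacuum_cost_page_miss and vacuum_cost_page_dirty points respectively, and whenever the accumulated cost reaches cost_limit the worker sleeps for cost_delay. A quick sanity check of the figure above, as a sketch using the same cost factors as the quoted calculation (1, 10 and 20, the defaults at the time):

  SELECT (  1 * 15031267    -- buffer hits    * vacuum_cost_page_hit
         + 10 * 21081633    -- buffer misses  * vacuum_cost_page_miss
         + 20 * 19274530    -- pages dirtied  * vacuum_cost_page_dirty
         ) / 200            -- number of sleep cycles: total cost / cost_limit
           * 20 / 1000.0    -- 20 ms sleep per cycle, converted to seconds
         AS sleep_seconds;  -- about 61134 s, i.e. roughly 17 hours spent sleeping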

Increasing cost_limit or reducing cost_delay will improve the situation.

Setting cost_delay to 0 removes the throttling entirely and makes autovacuum run as fast as possible.
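
If you don't want to touch the global settings, you can relax the throttling for just this one TOAST table by setting the storage parameters on the table that owns it. A sketch, with myschema.mytable standing in for the (unknown) owner of pg_toast_1958391:

  -- Find the table that owns the TOAST table mentioned in the log
  SELECT oid::regclass FROM pg_class
   WHERE reltoastrelid = 'pg_toast.pg_toast_1958391'::regclass;

  -- Disable the cost-based delay for autovacuum on this table's TOAST table only
  ALTER TABLE myschema.mytable SET (toast.autovacuum_vacuum_cost_delay = 0);

  -- Or keep a delay but give autovacuum a bigger cost budget per sleep cycle
  ALTER TABLE myschema.mytable SET (toast.autovacuum_vacuum_cost_limit = 2000);

The new values apply from the next autovacuum run on that TOAST table.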

Yours,
Laurenz Albe
