Re: Efficiently Triggering Autovacuum Analyze?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Cory Tucker <cory(dot)tucker(at)gmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Efficiently Triggering Autovacuum Analyze?
Date: 2015-12-30 19:20:55
Message-ID: 22295.1451503255@sss.pgh.pa.us

Cory Tucker <cory(dot)tucker(at)gmail(dot)com> writes:
> This table is almost always queried using a combination of (account_id,
> record_id) and is generally pretty fast. However, under certain loads, the
> query becomes slower and slower as time goes on. The workload that causes
> this to happen is when data for a new account_id is being inserted into the
> table. This will happen in rapid succession and may insert millions of
> rows over the course of several hours.

Are those insertions happening in one enormous transaction, or even just
a few very large ones?

> The pattern that I notice when this happens is that the CPU on DB will be
> pegged much higher than usual, and the query to lookup records for the
> (account_id, record_id) combo will steadily rise from <1ms to more than 2
> or 3 seconds over time.

I'm suspicious that this is not autovacuum's fault but reflects the cost
of checking uncommitted tuples to see if they've become committed yet.
If so, there may be little you can do about it except break the insertion
into smaller transactions ... which might or might not be all right from
a data consistency standpoint.
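
For illustration, here is a minimal sketch of that batching approach in Python
with psycopg2 (the "records" table, its columns, and the batch size are
hypothetical placeholders for the real schema): committing every few thousand
rows keeps each transaction small, so concurrent (account_id, record_id)
lookups see committed tuples promptly instead of re-checking an ever-growing
pile of in-progress ones.

    import psycopg2

    BATCH_SIZE = 10000  # rows per transaction; tune for the workload

    def load_rows(conn, rows):
        """Insert rows, committing every BATCH_SIZE rows so each
        transaction stays small."""
        with conn.cursor() as cur:
            for i, (account_id, record_id, payload) in enumerate(rows, start=1):
                cur.execute(
                    "INSERT INTO records (account_id, record_id, payload) "
                    "VALUES (%s, %s, %s)",
                    (account_id, record_id, payload),
                )
                if i % BATCH_SIZE == 0:
                    conn.commit()  # close this transaction; a new one starts implicitly
        conn.commit()  # commit any trailing partial batch

Whether those intermediate commits are acceptable depends on whether a
partially loaded account is tolerable, per the consistency caveat above.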

regards, tom lane
