From: Cory Tucker <cory(dot)tucker(at)gmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Efficiently Triggering Autovacuum Analyze?
Date: 2015-12-30 19:35:44
Message-ID: CAG_=8kDJosq=MVLRvoPW_CLc3noRNNLPqv9OTGrAmD-T63aM2Q@mail.gmail.com
Lists: pgsql-general
On Wed, Dec 30, 2015 at 11:20 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Cory Tucker <cory(dot)tucker(at)gmail(dot)com> writes:
> > This table is almost always queried using a combination of (account_id,
> > record_id) and is generally pretty fast. However, under certain loads, the
> > query becomes slower and slower as time goes on. The workload that causes
> > this to happen is when data for a new account_id is being inserted into the
> > table. This will happen in rapid succession and may insert millions of
> > rows over the course of several hours.
>
> Are those insertions happening in one enormous transaction, or even just
> a few very large ones?
>
No, one transaction per row insert.
>
> > The pattern that I notice when this happens is that the CPU on the DB will
> > be pegged much higher than usual, and the query to look up records for the
> > (account_id, record_id) combo will steadily rise from <1ms to more than 2
> > or 3 seconds over time.
>
> I'm suspicious that this is not autovacuum's fault but reflects the cost
> of checking uncommitted tuples to see if they've become committed yet.
> If so, there may be little you can do about it except break the insertion
> into smaller transactions ... which might or might not be all right from
> a data consistency standpoint.
>
> regards, tom lane
>
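Tom's suggestion of grouping the load into moderate-sized transactions (rather than one commit per row, or one enormous transaction) could be sketched roughly as below. This is a minimal illustration, not code from the thread: the psycopg2-style connection, the `records` table, and its columns are all assumptions for the sake of the example.

```python
from itertools import islice

def chunked(iterable, size):
    """Yield successive lists of up to `size` items from `iterable`."""
    it = iter(iterable)
    while True:
        batch = list(islice(it, size))
        if not batch:
            return
        yield batch

def insert_in_batches(conn, rows, batch_size=10_000):
    """Hypothetical loader: commit every `batch_size` rows instead of
    per row, so committed tuples become visible in reasonable chunks
    and per-row transaction overhead is amortized."""
    with conn.cursor() as cur:
        for batch in chunked(rows, batch_size):
            cur.executemany(
                "INSERT INTO records (account_id, record_id) VALUES (%s, %s)",
                batch,
            )
            conn.commit()  # commit per batch, not per row
```

The batch size is a tunable trade-off: larger batches reduce commit overhead but keep more uncommitted tuples pending at once, which is the visibility-check cost Tom describes.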