From: Natalie Wenz <nataliewenz(at)ebureau(dot)com>
To: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Cc: Kevin Grittner <kgrittn(at)gmail(dot)com>
Subject: Re: Database 'template1' vacuum
Date: 2016-08-08 14:43:24
Message-ID: 0FF1E841-EC1C-473A-8A43-E696D8F247A2@ebureau.com
Lists: pgsql-admin
Ok, I have one more question (for now!) about vacuum behavior:
How are static tables handled? Once every row in a table is marked as frozen, and there are no further inserts, updates, or deletes to the table, does vacuum have to do anything to that table when advancing relfrozenxid? Is there a way for vacuum to see "Oh, this whole table is frozen, my work here is done!" or does it still have to scan every page (or worse, every row)?
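As an aside, one way to see how much of a table is already frozen is the pg_visibility contrib extension (9.6+). This is only a sketch; the table name `static_table` is hypothetical:

```sql
-- Assumes the pg_visibility contrib extension is available (9.6+).
CREATE EXTENSION IF NOT EXISTS pg_visibility;

-- Counts of pages marked all-visible and all-frozen in the
-- visibility map; an aggressive vacuum can skip all-frozen pages.
SELECT all_visible, all_frozen
FROM pg_visibility_map_summary('static_table');

-- The table's current frozen-xid horizon:
SELECT relfrozenxid FROM pg_class WHERE relname = 'static_table';
```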
> On Jul 26, 2016, at 2:50 PM, Kevin Grittner <kgrittn(at)gmail(dot)com> wrote:
>
> On Tue, Jul 26, 2016 at 2:36 PM, Natalie Wenz <nataliewenz(at)ebureau(dot)com> wrote:
>
>> Now that the first enormous vacuum freeze is complete, we are
>> "unspooling" the data that was buffered while the database
>> vacuumed. Between the hardware and postgres software, we easily see
>> this hit 65 thousand inserts per second. (Wooo, modern hardware!
>> Excellent software!) Of course, that means that we've run out of
>> xids again in the span of about 10 hours; no tuning of the auto
>> vacuum that we've tried is able to keep pace with that. So. I'm
>> currently suffering through a cycle of unspool buffered
>> transactions, vacuum freeze while transactions buffer, unspool . . .
>
> There are many reasons to batch inserts, this being one of them.
> You might see a big further boost in the insert rate, even with
> moderate batch sizes. If you think about it, a batch size of 100
> should get you to somewhere around 99% of the benefit of larger
> batch sizes. Of course, you need some way to handle an error in a
> row or two out of the batch....
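For illustration, batching can take either of these forms (a sketch only; the table and column names are hypothetical). Both consume a single xid for many rows, which is what slows the burn rate:

```sql
-- One multi-row INSERT: one statement, one transaction, one xid.
INSERT INTO events (id, payload)
VALUES (1, 'a'),
       (2, 'b'),
       (3, 'c');

-- Or wrap many single-row INSERTs in one explicit transaction,
-- which likewise uses just one xid for the whole batch:
BEGIN;
INSERT INTO events (id, payload) VALUES (4, 'd');
INSERT INTO events (id, payload) VALUES (5, 'e');
COMMIT;
```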
>
>> At this point, I have just left the auto vacuum defaults in place
>> for this database, and it generally keeps up with our typical 6k
>> insert/second workload.
>
> I would increase autovacuum_max_workers (to maybe 10 or 12) and
> autovacuum_vacuum_cost_limit (to several thousand).
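In postgresql.conf terms, the suggestion above would look something like this (the exact values are workload-dependent guesses, not recommendations):

```
# Sketch of the autovacuum settings suggested above.
autovacuum_max_workers = 10          # default is 3
autovacuum_vacuum_cost_limit = 4000  # "several thousand"; default -1
                                     # (falls back to vacuum_cost_limit = 200)
```

Both settings require a server restart or reload to take effect, and the cost limit is shared across all active workers.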
>
>> Anyway, template1: this is expected behavior?
>
> Yeah. I would just connect to that database as the database
> superuser (in single-user mode only if necessary), and run the
> command `VACUUM`. No options, just the bare command.
>
> >> I'm surprised that the auto vacuum wouldn't already have
> >> vacuumed that very small database, or that we haven't been prompted to
>> vacuum template1 prior to this, if the only transactions in this
>> database would have occurred during the initdb for the postgres
>> database.
>
> The logic for picking what to vacuum isn't, shall we say, ideal.
> It may keep running into the same tables to vacuum whenever it
> scans, and never gets to others. A higher worker count should help
> it get to more (hopefully all) of the tables. The cost limit is
> shared among all the active workers, so raising the maximum doesn't
> tend to have much direct impact on performance.
>
>> Does it get used for other things during normal database
>> operations?
>
> No -- it is normally used just for CREATE DATABASE.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company