From: Kevin Grittner <kgrittn(at)gmail(dot)com>
To: Natalie Wenz <nataliewenz(at)ebureau(dot)com>
Cc: pgsql-admin <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Database 'template1' vacuum
Date: 2016-07-26 19:50:46
Message-ID: CACjxUsMUetfqxGvdbQdXL_k_RMtYOAJ0fWo2KRqUSJyDA2-psw@mail.gmail.com
Lists: pgsql-admin
On Tue, Jul 26, 2016 at 2:36 PM, Natalie Wenz <nataliewenz(at)ebureau(dot)com> wrote:
> Now that the first enormous vacuum freeze is complete, we are
> "unspooling" the data that was buffered while the database
> vacuumed. Between the hardware and postgres software, we easily see
> this hit 65 thousand inserts per second. (Wooo, modern hardware!
> Excellent software!) Of course, that means that we've run out of
> xids again in the span of about 10 hours; no tuning of the auto
> vacuum that we've tried is able to keep pace with that. So. I'm
> currently suffering through a cycle of unspool buffered
> transactions, vacuum freeze while transactions buffer, unspool . . .
There are many reasons to batch inserts, this being one of them.
You might see a big further boost in the insert rate, even with
moderate batch sizes. If you think about it, a batch size of 100
should get you to somewhere around 99% of the benefit of larger
batch sizes. Of course, you need some way to handle an error in a
row or two out of the batch....
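To make that concrete, here is a minimal sketch of a batched insert, using a hypothetical events(id, payload) table (not from the original thread); the whole multi-row INSERT commits as one transaction, so it consumes a single xid for the batch instead of one per row:

    BEGIN;
    INSERT INTO events (id, payload) VALUES
        (1, 'first row'),
        (2, 'second row'),
        (3, 'third row');   -- ...and so on, up to the chosen batch size
    COMMIT;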
> At this point, I have just left the auto vacuum defaults in place
> for this database, and it generally keeps up with our typical 6k
> insert/second workload.
I would increase autovacuum_max_workers (to maybe 10 or 12) and
autovacuum_vacuum_cost_limit (to several thousand).
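As a sketch (the exact numbers depend on your hardware and workload), that could look like this from a superuser session; note that autovacuum_max_workers only takes effect after a server restart, while the cost limit is picked up on reload:

    ALTER SYSTEM SET autovacuum_max_workers = 10;          -- needs a restart
    ALTER SYSTEM SET autovacuum_vacuum_cost_limit = 2000;  -- picked up on reload
    SELECT pg_reload_conf();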
> Anyway, template1: this is expected behavior?
Yeah. I would just connect to that database as the database
superuser (in single-user mode only if necessary), and run the
command `VACUUM`. No options, just the bare command.
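For instance, from a psql session connected to template1 as the superuser; the pg_database query afterward is just a sanity check (not part of the original advice) that template1's datfrozenxid age has come back down:

    VACUUM;   -- plain VACUUM: no FREEZE, no FULL, no table name

    SELECT datname, age(datfrozenxid) AS xid_age
      FROM pg_database
     ORDER BY xid_age DESC;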
> I'm surprised that the auto vacuum wouldn't already have vacuumed
> that very small database, or that I haven't been prompted to
> vacuum template1 prior to this, if the only transactions in this
> database would have occurred during the initdb for the postgres
> database.
The logic for picking what to vacuum isn't, shall we say, ideal.
It may keep running into the same tables to vacuum whenever it
scans, and never get to others. A higher worker count should help
it get to more (hopefully all) of the tables. The cost limit is
shared among all the active workers, so raising the maximum number
of workers doesn't tend to have much direct impact on performance.
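If you want to see whether the workers are actually reaching every table (a rough check, not something from the original exchange), the per-table stats show when each table was last autovacuumed:

    SELECT relname, last_autovacuum, n_dead_tup
      FROM pg_stat_user_tables
     ORDER BY last_autovacuum NULLS FIRST;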
> Does it get used for other things during normal database
> operations?
No -- it is normally used just for CREATE DATABASE.
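In other words, any plain CREATE DATABASE clones template1 unless you name a different template (the database names below are made up):

    CREATE DATABASE reporting;                    -- implicitly TEMPLATE template1
    CREATE DATABASE scratch TEMPLATE template0;   -- bypasses template1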
--
Kevin Grittner
EDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company