| From: | Michael Lewis <mlewis(at)entrata(dot)com> |
|---|---|
| To: | Kevin Brannen <KBrannen(at)efji(dot)com> |
| Cc: | "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: how to slow down parts of Pg |
| Date: | 2020-04-21 22:17:19 |
| Message-ID: | CAHOFxGp3r9XEat8Q+_PqVU0bniuKOQz9-876yTTqDbaeGfhknA@mail.gmail.com |
| Lists: | pgsql-general |
Reviewing pg_stat_user_tables will give you an idea of how often autovacuum
is cleaning up the tables that "need" that vacuum full on a quarterly
basis. You can tune individual tables to a lower dead-tuple ratio threshold
so the system isn't waiting until 20% of the rows are dead before
vacuuming a table with millions of rows that occupies a GB or more on disk.
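For example, something along these lines (the table name is a placeholder, and the exact thresholds depend on your workload):

```sql
-- See how busy autovacuum has been per table, and how many dead tuples remain
SELECT relname, n_live_tup, n_dead_tup, last_autovacuum, autovacuum_count
FROM pg_stat_user_tables
ORDER BY n_dead_tup DESC;

-- Lower the threshold for one large table so it vacuums well before
-- the global default of 20% dead rows ("big_table" is hypothetical)
ALTER TABLE big_table SET (
    autovacuum_vacuum_scale_factor = 0.02,  -- 2% instead of the 0.2 default
    autovacuum_vacuum_threshold = 1000
);
```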
You might consider changing your nightly analyze to a nightly vacuum
analyze, at least for the tables you know are problematic. The more
densely a table is packed, the better your cache hit ratio and other such
metrics. Like making dinner, clean up as you go.
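That nightly job is a one-line change; something like the following, where the table name is just a placeholder:

```sql
-- Instead of a bare ANALYZE, reclaim dead space and refresh stats together
VACUUM (ANALYZE, VERBOSE) problem_table;
```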
One thing that I think is interesting is that the default
autovacuum_vacuum_cost_delay was lowered in PG12 from 20ms to 2ms, such
that, all things being equal, much much more work is done by autovacuum in
a given second. It may be worth taking a look at.
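If you're still on PG11 or earlier, you can get the same effect yourself without waiting for the upgrade; a sketch (requires superuser, and assumes the new PG12 default suits your I/O capacity):

```sql
-- Match the PG12 default on an older server
ALTER SYSTEM SET autovacuum_vacuum_cost_delay = '2ms';
SELECT pg_reload_conf();  -- picked up without a restart
```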
Another great thing coming to you in PG12 is the option to REINDEX
CONCURRENTLY. Then there's no need for pg_repack on indexes.
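On PG12 that looks like this (index name is a placeholder); it rebuilds the index without taking the blocking locks a plain REINDEX does:

```sql
-- Rebuild a bloated index while reads and writes continue
REINDEX INDEX CONCURRENTLY my_bloated_index;
```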
Good luck sir.