From: | Jim Nasby <Jim(dot)Nasby(at)BlueTreble(dot)com> |
---|---|
To: | David Gould <daveg(at)sonic(dot)net>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> |
Cc: | Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pg Bugs <pgsql-bugs(at)postgresql(dot)org>, Josh Berkus <josh(at)agliodbs(dot)com> |
Subject: | Re: BUG #13750: Autovacuum slows down with large numbers of tables. More workers makes it slower. |
Date: | 2016-03-18 07:15:50 |
Message-ID: | 56EBAB26.4090905@BlueTreble.com |
Lists: | pgsql-bugs |
On 3/15/16 4:28 PM, David Gould wrote:
> The more I learned about autovacuum
> scheduling the less it made sense. Really, there should be some sort of
> priority order for vacuuming based on some metric of need and tables should be
> processed in that order.
+1. What's there now is incredibly braindead.
I actually wonder whether, instead of doing all of that the hard way in C, we
should just use SPI for each worker to build its list of tables. The
big advantage that would provide is the ability for users to customize
the scheduling, but I suspect it'd make the code simpler too.
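To illustrate, here's the sort of ordering query a worker could issue through
SPI. Just a rough sketch: it uses the standard threshold formula
(autovacuum_vacuum_threshold + autovacuum_vacuum_scale_factor * reltuples) and
ignores per-table reloptions and the analyze side entirely:

  -- Rank tables by how far past their vacuum threshold they are.
  SELECT c.oid::regclass AS table_name,
         s.n_dead_tup,
         s.n_dead_tup::float8
           / (current_setting('autovacuum_vacuum_threshold')::float8
              + current_setting('autovacuum_vacuum_scale_factor')::float8
                * GREATEST(c.reltuples, 0)) AS vacuum_pressure
  FROM pg_class c
  JOIN pg_stat_all_tables s ON s.relid = c.oid
  WHERE c.relkind = 'r'
  ORDER BY vacuum_pressure DESC;

If that query (or the metric it computes) lived somewhere users could get at
it, sites with unusual workloads could reorder the queue themselves instead of
patching the worker.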
The same is also true for deciding what database needs to be vacuumed next.
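Something as simple as this would capture the wraparound part of that decision
(the current launcher logic also weighs stats availability and time since the
last visit, which this ignores):

  -- Pick the database closest to transaction-ID wraparound.
  SELECT datname, age(datfrozenxid) AS xid_age
  FROM pg_database
  WHERE datallowconn
  ORDER BY xid_age DESC
  LIMIT 1;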
--
Jim Nasby, Data Architect, Blue Treble Consulting, Austin TX
Experts in Analytics, Data Architecture and PostgreSQL
Data in Trouble? Get it in Treble! http://BlueTreble.com