From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Andrew Dunstan <andrew(dot)dunstan(at)2ndquadrant(dot)com>, Jeremy Schneider <schnjere(at)amazon(dot)com>, David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, Joe Conway <mail(at)joeconway(dot)com>, Peter Geoghegan <pg(at)bowt(dot)ie>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Should we increase the default vacuum_cost_limit?
Date: 2019-03-08 18:10:30
Message-ID: 21034.1552068630@sss.pgh.pa.us
Lists: pgsql-hackers
Jeff Janes <jeff(dot)janes(at)gmail(dot)com> writes:
> Now that this is done, the default value is only 5x below the hard-coded
> maximum of 10,000.
> This seems a bit odd, and not very future-proof. Especially since the
> hard-coded maximum appears to have no logic to it anyway, at least none
> that is documented. Is it just mindless nannyism?
Hm. I think the idea was that rather than setting it to "something very
large", you'd want to just disable the feature via vacuum_cost_delay.
But I agree that the threshold for what is ridiculously large probably
ought to be well more than 5x the default, and maybe it is just mindless
nannyism to have a limit less than what the implementation can handle.
			regards, tom lane
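[The "disable the feature" alternative Tom mentions can be sketched as follows. This is a hedged illustration, not part of the original message: per the PostgreSQL documentation, setting vacuum_cost_delay to 0 turns off cost-based vacuum delay for manual VACUUM, which makes vacuum_cost_limit moot, rather than trying to set the limit to "something very large".]

```sql
-- Cost-based delay settings relevant to this thread (PG defaults shown
-- as comments are assumptions based on the discussion, not output):
SHOW vacuum_cost_limit;        -- hard-coded maximum is 10000
SHOW vacuum_cost_delay;        -- 0 disables cost-based delay entirely

-- Disabling the feature instead of raising the limit:
ALTER SYSTEM SET vacuum_cost_delay = 0;
SELECT pg_reload_conf();
```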