From: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
To: Ayub M <hiayub(at)gmail(dot)com>
Cc: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: postgres vacuum memory limits
Date: 2021-08-01 06:12:39
Message-ID: CAKFQuwYeDYxvrm4JrdpH=EUQw=+u10gvN0h7Z0MBFAUHK_qxqA@mail.gmail.com
Lists: pgsql-general
On Saturday, July 31, 2021, Ayub M <hiayub(at)gmail(dot)com> wrote:
> But when default_statistics_target is increased to 3000, the session usage
> is 463mb
>
IIUC, the analyze process doesn't consult maintenance_work_mem. It simply
creates an array, in memory, to hold the random sample of rows needed for
computing the requested statistics.

I skimmed the docs but didn't get a firm answer beyond the fact that VACUUM
is listed as an example of a command that consults maintenance_work_mem,
while ANALYZE is not mentioned in the same list. I did find:
"The largest statistics target among the columns being analyzed determines
the number of table rows sampled to prepare the statistics. Increasing the
target causes a proportional increase in the time and space needed to do
ANALYZE."
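A rough way to see the proportionality: ANALYZE samples about 300 rows per
unit of statistics target (the 300 multiplier comes from the sampling bound
used in PostgreSQL's analyze.c). The sketch below estimates the in-memory
sample size; the per-row byte figure is an illustrative assumption, not a
measured value.

```python
# Sketch of ANALYZE's sample-size arithmetic.
# ROWS_PER_TARGET = 300 is the multiplier PostgreSQL uses when sizing the
# random sample; avg_row_bytes below is an assumed figure for illustration.

ROWS_PER_TARGET = 300


def sample_rows(statistics_target: int) -> int:
    """Approximate number of table rows ANALYZE will sample."""
    return ROWS_PER_TARGET * statistics_target


def approx_sample_bytes(statistics_target: int, avg_row_bytes: int = 500) -> int:
    """Rough memory for the in-memory sample array, given an assumed row width."""
    return sample_rows(statistics_target) * avg_row_bytes


# Default target of 100 -> 30,000 sampled rows; target 3000 -> 900,000 rows:
# a 30x increase, i.e. proportional to the target as the docs say.
print(sample_rows(100))    # 30000
print(sample_rows(3000))   # 900000
```

With the assumed ~500 bytes per sampled row, a target of 3000 works out to
roughly 450 MB, in the same ballpark as the 463 MB session usage reported
above; the actual footprint depends on the real row widths.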
David J.