From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
---|---|
To: | Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> |
Cc: | Jorge Daniel <elgaita(at)hotmail(dot)com>, "pgsql-admin(at)lists(dot)postgresql(dot)org" <pgsql-admin(at)lists(dot)postgresql(dot)org>, "pgsql-admin(at)postgresql(dot)org" <pgsql-admin(at)postgresql(dot)org>, "dyuryeva(at)medallia(dot)com" <dyuryeva(at)medallia(dot)com> |
Subject: | Re: OOM Killing on Docker while ANALYZE running |
Date: | 2018-01-25 20:48:26 |
Message-ID: | 29102.1516913306@sss.pgh.pa.us |
Lists: | pgsql-admin |
Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org> writes:
> Jorge Daniel wrote:
>> Hi guys, I'm dealing with OOM killing on PostgreSQL 9.4.8 running on Docker,
> What is your statistics target? ANALYZE is supposed to acquire samples
> of the data, not the whole table ...
>> pgbench=# analyze verbose pgbench_accounts;
>> INFO: analyzing "public.pgbench_accounts"
>> INFO: "pgbench_accounts": scanned 1639345 of 1639345 pages, containing 100000000 live rows and 0 dead rows; 3000000 rows in sample, 100000000 estimated total rows
The standard setting of default_statistics_target would only result in
trying to acquire a 30000-row sample, so the problem here is having
cranked that up beyond what the machine can sustain. AFAIR we do not
consider maintenance_work_mem as imposing a limit on the sample size.
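(ANALYZE samples roughly 300 * statistics_target rows, so the 3,000,000-row
sample above corresponds to a target of 10000, which is the maximum. A rough
sketch of how one might check for and undo that, using the pgbench_accounts
table from the report; the per-column reset on "aid" is only an illustration,
any column with a raised target would do:)

-- Sample size is ~300 * statistics target, so 3,000,000 rows in the
-- sample implies a target of 10000 somewhere.
SHOW default_statistics_target;

-- Look for per-column overrides (-1 means "use the default").
SELECT attname, attstattarget
FROM pg_attribute
WHERE attrelid = 'public.pgbench_accounts'::regclass
  AND attnum > 0 AND NOT attisdropped;

-- Bring the target back down, then re-analyze.
SET default_statistics_target = 100;
ALTER TABLE pgbench_accounts ALTER COLUMN aid SET STATISTICS -1;
ANALYZE VERBOSE pgbench_accounts;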
regards, tom lane