Re: Bad planning data resulting in OOM killing of postgres

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Hinkle <hinkle(at)cipafilter(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Bad planning data resulting in OOM killing of postgres
Date: 2017-02-14 21:38:50
Message-ID: 28429.1487108330@sss.pgh.pa.us
Lists: pgsql-general

David Hinkle <hinkle(at)cipafilter(dot)com> writes:
> Thanks guys, here's the information you requested:
> psql:postgres(at)cipafilter = show work_mem;
> work_mem
> ──────────
> 10MB
> (1 row)

[ squint... ] It should absolutely not have tried to hash a 500M-row
table if it thought work_mem was only 10MB. I wonder if there's an
integer-overflow problem or something like that.
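
[Editorial illustration, not PostgreSQL source: a minimal C sketch of the
kind of 32-bit overflow being hypothesized here. The row count (500M) comes
from the thread; the 40-byte row width and the planner-style size check are
assumptions made up for the example.]

    /*
     * Illustrative sketch only -- not PostgreSQL code.  Shows how a
     * 32-bit intermediate product can wrap, so a ~20 GB hash table
     * (hypothetical 500M rows x 40 bytes) appears to fit in work_mem.
     */
    #include <stdio.h>
    #include <stdint.h>

    int main(void)
    {
        int64_t ntuples = 500000000;               /* assumed row count */
        int64_t width   = 40;                      /* assumed bytes per row */
        int64_t work_mem_bytes = 10 * 1024 * 1024; /* work_mem = 10MB */

        /* Buggy path: the multiply wraps modulo 2^32 and, reinterpreted
         * as a signed 32-bit value, comes out negative. */
        int32_t wrapped = (int32_t) ((uint32_t) ntuples * (uint32_t) width);

        /* Correct path: keep the product in 64 bits. */
        int64_t actual = ntuples * width;

        printf("wrapped estimate: %d bytes\n", wrapped);
        printf("actual size:      %lld bytes\n", (long long) actual);

        if (wrapped <= work_mem_bytes)
            printf("a planner-style check would wrongly conclude it fits\n");
        return 0;
    }

A wrapped estimate like this would sail past any "does it fit in work_mem?"
comparison, which would be consistent with the behavior reported upthread.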

regards, tom lane
