Re: Prevent out of memory errors by reducing work_mem?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Jan Strube <js(at)deriva(dot)de>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Prevent out of memory errors by reducing work_mem?
Date: 2013-01-25 14:42:27
Message-ID: 10644.1359124947@sss.pgh.pa.us
Lists: pgsql-general

Jan Strube <js(at)deriva(dot)de> writes:
> I'm getting an out of memory error running the following query over 6
> tables (the *BASE* tables have over 1 million rows each) on Postgresql
> 9.1. The machine has 4GB RAM:

It looks to me like you're suffering an executor memory leak that's
probably unrelated to the hash joins as such. The leak is in the
ExecutorState context:

> ExecutorState: 3442985408 total in 412394 blocks; 5173848 free (16
> chunks); 3437811560 used

while the subsidiary HashXYZ contexts don't look like they're going
beyond what they've been told to.
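
(As a point of reference, work_mem bounds each individual sort or hash
node, not the query as a whole, which is why those hash contexts stay
within their limit.  If you want to check or adjust the per-node limit,
something like

    SHOW work_mem;           -- current per-node limit
    SET work_mem = '16MB';   -- example value only, applies per session

is the usual way.)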

So the first question is 9.1.what? We've fixed execution-time memory
leaks as recently as 9.1.7.
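
You can check the exact minor version on the server with

    SELECT version();
    -- or just the version number:
    SHOW server_version;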

If you're on 9.1.7, or if after updating you can still reproduce the
problem, please see if you can create a self-contained test case.
My guess is it would have to do with the specific data types and
operators being used in the query, but not so much with the specific
data, so you probably could create a test case that just uses tables
filled with generated random data.
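
Something along these lines might serve as a starting point (table and
column names here are just placeholders; what matters is matching the
data types, operators, and join structure of your real query):

    CREATE TABLE base_a AS
        SELECT g AS id, md5(random()::text) AS payload
        FROM generate_series(1, 1000000) AS g;

    CREATE TABLE base_b AS
        SELECT (random() * 1000000)::int AS a_id,
               md5(random()::text) AS payload
        FROM generate_series(1, 1000000) AS g;

    ANALYZE base_a;
    ANALYZE base_b;

    -- same join structure as the failing query
    SELECT count(*)
    FROM base_a a JOIN base_b b ON b.a_id = a.id;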

regards, tom lane
