From: Jan Strube <js(at)deriva(dot)de>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Prevent out of memory errors by reducing work_mem?
Date: 2013-01-28 07:47:53
Message-ID: 51062D29.2040207@deriva.de
Lists: pgsql-general
Hi,
you are right.
We were running 9.1.4 and after upgrading to 9.1.7 the error disappeared.
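For anyone hitting the same error, the running server version can be confirmed from SQL before and after a minor upgrade:

    -- Reports the full server version string, e.g. "PostgreSQL 9.1.7 on ..."
    SELECT version();
    -- Or just the version number:
    SHOW server_version;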
Thanks a lot,
Jan Strube
>> I'm getting an out of memory error running the following query over 6
>> tables (the *BASE* tables have over 1 million rows each) on Postgresql
>> 9.1. The machine has 4GB RAM:
> It looks to me like you're suffering an executor memory leak that's
> probably unrelated to the hash joins as such. The leak is in the
> ExecutorState context:
>
>> ExecutorState: 3442985408 total in 412394 blocks; 5173848 free (16
>> chunks); 3437811560 used
> while the subsidiary HashXYZ contexts don't look like they're going
> beyond what they've been told to.
>
> So the first question is 9.1.what? We've fixed execution-time memory
> leaks as recently as 9.1.7.
>
> If you're on 9.1.7, or if after updating you can still reproduce the
> problem, please see if you can create a self-contained test case.
> My guess is it would have to do with the specific data types and
> operators being used in the query, but not so much with the specific
> data, so you probably could create a test case that just uses tables
> filled with generated random data.
>
> regards, tom lane
>
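A sketch of what the suggested self-contained test case could look like, built purely from generated random data; the table names, row counts, and join column below are placeholders, not the original schema:

    -- Two throwaway tables filled with generated random data
    -- (placeholder names; adjust types to match the real query).
    CREATE TABLE base_a AS
      SELECT g AS id,
             (random() * 1000000)::int AS join_key,
             md5(g::text) AS payload
        FROM generate_series(1, 1000000) AS g;

    CREATE TABLE base_b AS
      SELECT g AS id,
             (random() * 1000000)::int AS join_key,
             md5(g::text) AS payload
        FROM generate_series(1, 1000000) AS g;

    ANALYZE base_a;
    ANALYZE base_b;

    -- A hash join over the generated data; watch whether the backend's
    -- memory use grows steadily while this runs.
    EXPLAIN ANALYZE
    SELECT count(*)
      FROM base_a a
      JOIN base_b b ON a.join_key = b.join_key;

If the leak reproduces on such generated data, the schema and query can be posted as-is without any of the original tables.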