Re: ERROR: out of memory DETAIL: Failed on request of size ???

From: bricklen <bricklen(at)gmail(dot)com>
To: Brian Wong <bwong64(at)hotmail(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: ERROR: out of memory DETAIL: Failed on request of size ???
Date: 2013-11-19 03:25:08
Message-ID: CAGrpgQ9L3knTrgBXHqpLmh+LB8jgkLoTt6Ub87_nGEGH6-07xA@mail.gmail.com
Lists: pgsql-general

On Mon, Nov 18, 2013 at 12:40 PM, Brian Wong <bwong64(at)hotmail(dot)com> wrote:

> We'd like to seek out your expertise on postgresql regarding this error
> that we're getting in an analytical database.
>
> Some specs:
> proc: Intel Xeon X5650 @ 2.67GHz, dual 6-core processors, hyperthreading on.
> memory: 48GB
> OS: Oracle Enterprise Linux 6.3
> postgresql version: 9.1.9
> shared_buffers: 18GB
>
> After doing a lot of googling, I've tried setting FETCH_COUNT in psql
> and/or setting work_mem. I'm just not able to work around this issue
> unless I remove all but one of the MAX() functions.
>
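
For concreteness, the two workarounds mentioned above are usually applied like this (a minimal sketch; the values, table, and column names are illustrative, not taken from the thread):

    -- in psql: fetch rows in batches through a cursor instead of
    -- buffering the entire result set on the client
    \set FETCH_COUNT 1000

    -- raise the per-operation sort/hash memory for this session only
    SET work_mem = '256MB';

    -- hypothetical query shape with several aggregates
    SELECT max(col_a), max(col_b), max(col_c)
    FROM analytics_table;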

What is your work_mem set to?
Did testing show that shared_buffers set to 18GB was effective? That is
about two to three times larger than what you probably want.
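
To see what the server is actually running with, SHOW works from any psql session:

    SHOW work_mem;
    SHOW shared_buffers;

As a rough illustration only (assuming the common ~25%-of-RAM guideline for a dedicated 48GB box, not a tested recommendation), postgresql.conf might look more like:

    shared_buffers = 12GB    # ~25% of RAM; changing this requires a restart
    work_mem = 64MB          # per sort/hash operation, per backend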

