From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Jeremy Palmer <JPalmer(at)linz(dot)govt(dot)nz>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Out of memory
Date: 2011-03-25 04:04:30
Message-ID: AANLkTim9xQSDeygYJiUwt8mDxSR_8E7gzFdEFZfjJ6Lr@mail.gmail.com
Lists: pgsql-general
On Thu, Mar 24, 2011 at 9:23 PM, Jeremy Palmer <JPalmer(at)linz(dot)govt(dot)nz> wrote:
> I’ve been getting database out of memory failures with some queries which
> deal with a reasonable amount of data.
>
> I was wondering what I should be looking at to stop this from happening.
>
> The typical messages I've been getting are like this:
> http://pastebin.com/Jxfu3nYm
> The OS is:
>
> Linux TSTLHAPP01 2.6.32-29-server #58-Ubuntu SMP Fri Feb 11 21:06:51 UTC
> 2011 x86_64 GNU/Linux.
>
> It’s running on VMware and has 2 CPUs and 8GB of RAM. This VM is
> dedicated to PostgreSQL. The main PostgreSQL parameters I have tuned are:
>
> work_mem = 200MB
That's a really big work_mem. I have mainline db servers with 128GB of
RAM that have work_mem set to 16MB, and even that is considered a
little high in my book. If you drop work_mem down to 1MB, does the out
of memory error go away? work_mem is how much memory EACH sort can use
on its own; if you have a plpgsql procedure that keeps running query
after query, it could use a LOT of memory really fast.
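
For what it's worth, a low-risk way to test that is to override work_mem
for a single session instead of editing postgresql.conf. A minimal sketch
using only the standard SET/SHOW/RESET commands (the failing query itself
is not reproduced here):

    SET work_mem = '1MB';   -- cap each sort/hash step at 1MB for this session only
    SHOW work_mem;          -- confirm the override took effect
    -- re-run the failing query here, then:
    RESET work_mem;         -- return to the value from postgresql.conf

As a rough back-of-the-envelope check: a plan with a few sorts or hashes
at 200MB each, multiplied across a handful of concurrent connections, can
already account for several GB, which fits an 8GB VM running out of memory.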