Re: configurability of OOM killer

From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Dawid Kuroczko <qnex42(at)gmail(dot)com>
Cc: Ron Mayer <rm_pg(at)cheapcomplexdevices(dot)com>, Decibel! <decibel(at)decibel(dot)org>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jeff Davis <pgsql(at)j-davis(dot)com>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: configurability of OOM killer
Date: 2008-02-08 06:40:40
Message-ID: 1202452840.4247.41.camel@ebony.site
Lists: pgsql-hackers

On Thu, 2008-02-07 at 20:22 +0100, Dawid Kuroczko wrote:
> On Feb 5, 2008 10:54 PM, Ron Mayer <rm_pg(at)cheapcomplexdevices(dot)com> wrote:
> > Decibel! wrote:
> > >
> > > Yes, this problem goes way beyond OOM. Just try and configure
> > > work_mem aggressively on a server that might see 50 database
> > > connections, and do it in such a way that you won't swap. Good luck.
> >
> > That sounds like an even broader and more difficult problem
> > than managing memory.
> >
> > If you have 50 connections that all want to perform large sorts,
> > what do you want to have happen?
> >
> > a) they each do their sorts in parallel with small amounts
> > of memory for each; probably all spilling to disk?
> > b) they each get a big chunk of memory but some have to
> > wait for each other?
> > c) something else?
>
> Something else. :-)
>
> I think there could be some additional parameter which would
> control how much memory there is in total, say:
> process_work_mem = 128MB # Some other name needed...
> process_work_mem_percent = 20% # Yeah, definitely some other name...
> total_work_mem = 1024MB # how much there is for you in total.
>
>
> Your Postgres spawns 50 processes which initially don't
> use much work_mem. They would all register their current
> work_mem usage in shared memory.
>
> Each process, when it expects a largish sort, tries to determine
> how much memory there is for the taking, to calculate its own
> work_mem. work_mem should not exceed process_work_mem,
> and should not exceed 20% of the total available free mem.
>
> So, one backend needs to make a huge sort. It determines that
> its limit is 128MB and allocates it.
>
> Another backend starts sorting. It determines the current free
> mem is about (1024-128)*20% =~ 179MB, and takes 128MB
> (capped by process_work_mem).
>
> Some time passes, 700MB of total_work_mem is used, and
> another backend decides it needs much memory.
> It determines its current free mem to be not more than
> (1024-700) * 20% =~ 64MB, so it sets its work_mem to 64MB
> and sorts away.
>
> Noooow, I know work_mem is not a "total per-process limit", but
> rather a limit per sort/hash/etc. operation. I know the scheme is a
> bit sketchy, but I think it would allow more memory-greedy
> operations to use memory while taking into consideration that
> they are not the only ones out there, and that these settings
> would be more like hints than actual limits.

I like the sketch and I think we need to look for a solution along those
lines.
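
To check my understanding, here is a rough standalone sketch of the
arithmetic being proposed (all names besides the example settings are
invented for illustration; in a real implementation the usage counter
would live in shared memory, be updated under a lock, and be decremented
again when each sort finishes):

/*
 * Hypothetical standalone sketch of the scheme above; not backend code.
 * total_work_mem, process_work_mem and the 20% figure come from the
 * example settings; everything else is invented for illustration.
 */
#include <stdio.h>

static const long total_work_mem = 1024;     /* MB, shared budget */
static const long process_work_mem = 128;    /* MB, per-process cap */
static const double work_mem_percent = 0.20; /* share of free budget */

/*
 * In a real implementation this counter would live in shared memory
 * and be updated under a lock as backends claim and release memory.
 */
static long used_work_mem = 0;

/*
 * Pick a work_mem for an upcoming sort: at most the per-process cap,
 * and at most 20% of whatever is currently unclaimed.
 */
static long claim_work_mem(void)
{
    long free_mem = total_work_mem - used_work_mem;
    long grant = (long) (free_mem * work_mem_percent);

    if (grant > process_work_mem)
        grant = process_work_mem;
    used_work_mem += grant;      /* register our usage */
    return grant;
}

int main(void)
{
    printf("first:  %ld MB\n", claim_work_mem()); /* min(128, 204) = 128 */
    printf("second: %ld MB\n", claim_work_mem()); /* min(128, 179) = 128 */
    used_work_mem = 700;       /* some time passes, 700MB is in use */
    printf("third:  %ld MB\n", claim_work_mem()); /* (1024-700)*20% ~ 64 */
    return 0;
}

Run as-is it replays the example above: the first two backends are capped
at 128MB by process_work_mem, and with 700MB already claimed the third
gets about 64MB.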

> ....while we are at it -- one feature would be great for 8.4: the
> ability to change shared_buffers size "on the fly". I expect
> it is not trivial, but it would help with fine-tuning a running
> database. I think the DBA would need to set a maximum shared
> buffers size alongside the normal setting.

Perhaps we might go for a mechanism that allows us to increase but not
decrease memory; that might be easier.
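
For what it's worth, here is one OS-level shape an increase-only scheme
could take (purely a hypothetical sketch using POSIX mmap/mprotect; this
is not how PostgreSQL allocates its shared memory today): reserve address
space for the configured maximum up front, then commit more of it on
demand.

/*
 * Hypothetical sketch only: a grow-only shared area done with POSIX
 * mmap/mprotect (Linux/BSD; MAP_ANONYMOUS is spelled MAP_ANON on some
 * systems). Not how PostgreSQL allocates shared memory today.
 */
#include <stdio.h>
#include <sys/mman.h>

#define MAX_BUFFERS_BYTES (1024L * 1024 * 1024)  /* hard upper bound */

int main(void)
{
    /*
     * Reserve address space for the maximum with no access rights;
     * on most systems this consumes no real memory yet.
     */
    char *area = mmap(NULL, MAX_BUFFERS_BYTES, PROT_NONE,
                      MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    if (area == MAP_FAILED)
    {
        perror("mmap");
        return 1;
    }

    /*
     * "Grow" shared buffers by making the first 256MB usable.
     * Growing again is just another mprotect over a larger range;
     * shrinking safely is the hard part, which is why an
     * increase-only mechanism might be easier.
     */
    size_t current = 256L * 1024 * 1024;
    if (mprotect(area, current, PROT_READ | PROT_WRITE) != 0)
    {
        perror("mprotect");
        return 1;
    }

    area[0] = 1;   /* the committed part is now really usable */
    printf("reserved %ld MB, enabled %zu MB\n",
           MAX_BUFFERS_BYTES / (1024L * 1024), current / (1024 * 1024));

    munmap(area, MAX_BUFFERS_BYTES);
    return 0;
}

Shrinking is the hard part, since buffers in the range being given back
would first have to be evicted and all references to them invalidated,
which is why increase-only looks easier.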

--
Simon Riggs
2ndQuadrant http://www.2ndQuadrant.com
