From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Claudio Freire <klaussfreire(at)gmail(dot)com>
Cc: Peter van Hardenberg <pvh(at)pvh(dot)ca>, postgres performance list <pgsql-performance(at)postgresql(dot)org>
Subject: Re: PG as in-memory db? How to warm up and re-populate buffers? How to read in all tuples into memory?
Date: 2012-03-02 01:17:14
Message-ID: 4F501F9A.4080905@dunslane.net
Lists: pgsql-performance
On 03/01/2012 07:58 PM, Claudio Freire wrote:
> On Thu, Mar 1, 2012 at 9:28 PM, Peter van Hardenberg<pvh(at)pvh(dot)ca> wrote:
>>> Setting work_mem to hundreds of MB in a 4G system is suicide. Tens
>>> even is dangerous.
>>>
>> Why do you say that? We've had work_mem happily at 100MB for years. Is
>> there a particular degenerate case you're concerned about?
> Me too.
>
> But I've analyzed the queries I'll be sending to the database, and I've
> carefully bounded the effective amount of memory used given the load
> I'll be experiencing.
>
> Saying that it should be set to 100M without consideration for those
> matters is the suicide part. work_mem applies to each sort operation.
> Suppose, just for the sake of argument, that each connection is
> performing 5 such sorts (i.e., 5 joins of big tables - not unthinkable),
> and that max_connections is at its default of 100; the system could
> then request as much as 100MB x 5 x 100 = 50G of RAM.
>
> I set work_mem higher in my database system since I *know* most of the
> connections will not perform any merge or hash joins, nor will they
> sort the output, so they won't use work_mem even once. The ones that
> will, I have limited on the application side to a handful, hence I
> *know* that the 50G theoretical maximum will not be reached.
>
> Can the OP say that? I have no reason to think so. Hence I don't
> suggest 100M is OK on a 4G system.
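[For readers following the arithmetic quoted above, here is a minimal back-of-the-envelope sketch; it is not part of the original thread, and the per-query sort count and connection count are just the hypothetical figures Claudio uses above.]

    # Worst-case work_mem sketch: work_mem is a per-sort/per-hash limit,
    # not a per-server cap, so the theoretical peak scales with
    # sorts-per-query times concurrent connections.

    WORK_MEM_MB = 100        # work_mem = 100MB, as discussed in the thread
    SORTS_PER_QUERY = 5      # hypothetical: 5 big sorts/hashes per connection
    MAX_CONNECTIONS = 100    # PostgreSQL default max_connections

    peak_mb = WORK_MEM_MB * SORTS_PER_QUERY * MAX_CONNECTIONS
    print(f"theoretical peak: {peak_mb} MB (~{peak_mb / 1000:.0f} GB)")
    # -> theoretical peak: 50000 MB (~50 GB), on a 4G box
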
Well, obviously you need to know your workload. Nobody said otherwise.
cheers
andrew