From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: Sandeep Srinivasa <sss(at)clearsenses(dot)com>, Ma Sivakumar <masivakumar(at)gmail(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: MySQL versus Postgres
Date: 2010-08-12 07:07:35
Message-ID: AANLkTinuuC_bWSmjp1RPLsZna5xcogrUKbWb3ackWTy1@mail.gmail.com
Lists: pgsql-general
On Wed, Aug 11, 2010 at 11:41 PM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> Sandeep Srinivasa wrote:
>>
>> Maybe a tabular form would be nice - "work_mem" under...
>
> The problem with work_mem in particular is that the useful range depends
> quite a bit on how complicated you expect the average query running to be.
And it's very dependent on max_connections, since each backend can allocate up
to work_mem for every sort or hash operation in its query. A machine with 512GB
of RAM that runs batch jobs for one or two import processes, with another two
or three connections querying it, can run a much higher work_mem than a 32GB
machine set up to handle hundreds of concurrent connections. Don't forget that
setting work_mem too high causes a very sharp dropoff in performance once the
machine starts swapping. If work_mem is a little low, queries run 2 or 3 times
slower; if it's too high, the machine can grind to a halt.
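One way to get the best of both cases is to keep a conservative global default
and let trusted batch sessions raise their own limit. A minimal sketch -- the
values here are illustrative assumptions, not recommendations:

```sql
-- postgresql.conf keeps a conservative global default, e.g.:
--   work_mem = 4MB
-- A batch/import session can then raise its own limit before a big sort:
SET work_mem = '1GB';        -- applies only to this session

-- Or scope it to a single transaction, so it resets on COMMIT/ROLLBACK:
BEGIN;
SET LOCAL work_mem = '1GB';
-- ... sort- or hash-heavy import query here ...
COMMIT;
```

This way the hundreds of small OLTP connections stay at the safe default while
the one or two import processes get the memory they need.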