From:       david(at)lang(dot)hm
To:         Craig James <craig_james(at)emolecules(dot)com>
Cc:         PostgreSQL Performance <pgsql-performance(at)postgresql(dot)org>, Matthew Wakeling <matthew(at)flymine(dot)org>
Subject:    Re: select on 22 GB table causes "An I/O error occured while sending to the backend." exception
Date:       2008-08-28 18:27:27
Message-ID: alpine.DEB.1.10.0808281118420.2713@asgard.lang.hm
Lists:      pgsql-performance
On Thu, 28 Aug 2008, Craig James wrote:
> Matthew Wakeling wrote:
>> On Thu, 28 Aug 2008, Steve Atkins wrote:
>>>> Probably the best solution is to just tell the kernel somehow to never
>>>> kill the postmaster.
>>>
>>> Or configure adequate swap space?
>>
>> Oh yes, that's very important. However, that gives the machine the
>> opportunity to thrash.
>
> No, that's where the whole argument for allowing overcommitted memory falls
> flat.
>
> The entire argument for allowing overcommitted memory hinges on the fact that
> processes *won't use the memory*. If they use it, then overcommitting causes
> problems everywhere, such as a Postmaster getting arbitrarily killed.
>
> If a process *doesn't* use the memory, then there's no problem with
> thrashing, right?
>
> So it never makes sense to enable overcommitted memory when Postgres, or any
> server, is running.
>
> Allocating a big, fat terabyte swap disk is ALWAYS better than allowing
> overcommitted memory. If your usage is such that overcommitted memory would
> never be used, then the swap disk will never be used either. If your
> processes do use the memory, then your performance goes into the toilet, and
> you know it's time to buy more memory or a second server, but in the mean
> time your server processes at least keep running while you kill the rogue
> processes.
There was a misunderstanding (on my part, if nobody else's) that without
overcommit it was actual RAM that was getting allocated, which could push
things out to swap even if the memory ended up not being needed later.
With the clarification that this is not the case, and that the allocation
just reduces the virtual memory available, it's now clear that it is just
as efficient to run with overcommit off.
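To illustrate what "allocation" means here, a minimal C sketch (mine, not
from the original mail; assumes Linux): malloc() only reserves virtual
address space, and physical pages are faulted in on first write, so an
untouched allocation pushes nothing out to swap in either mode.

#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t size = (size_t)1 << 30;   /* 1 GB of address space */
    char *p = malloc(size);

    if (p == NULL) {
        /* With vm.overcommit_memory=2 this can fail up front if the
         * commit limit (swap + RAM * overcommit_ratio) is exceeded. */
        perror("malloc");
        return 1;
    }
    /* At this point VmSize in /proc/self/status has grown by ~1 GB,
     * but VmRSS has not: no RAM has actually been consumed yet. */
    puts("allocated 1 GB, pages still untouched");

    memset(p, 0, size);              /* now the pages really get backed */
    puts("touched all pages: VmRSS is now ~1 GB");

    free(p);
    return 0;
}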
So the conclusion is:

There is no performance/caching/buffer difference between the two modes.

The differences between the two are:

With overcommit:
   when all RAM+swap is actually used, the OOM killer is activated.
   for the same amount of RAM+swap, more allocations can be done before
   it is all used up (how much more is unpredictable).

Without overcommit:
   when all RAM+swap is allocated, programs (not necessarily the memory
   hog) start getting memory allocation errors.
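To make the difference concrete, here is a small C sketch (mine, not from
the original mail; the 64 MB chunk size is arbitrary). Run it once with
the Linux default vm.overcommit_memory=0 and once with
vm.overcommit_memory=2: in the first case malloc() keeps succeeding until
the touched pages exhaust RAM+swap and the OOM killer SIGKILLs a process;
in the second, malloc() returns NULL at the commit limit and the program
can fail gracefully.

/*
 * Hypothetical demo, not from the thread. WARNING: this deliberately
 * exhausts memory; run it in a throwaway VM, never on a production box.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

int main(void)
{
    size_t chunk = 64UL * 1024 * 1024;   /* 64 MB per allocation */
    size_t total = 0;

    for (;;) {
        char *p = malloc(chunk);
        if (p == NULL) {
            /* In practice only reachable with overcommit disabled
             * (vm.overcommit_memory=2): the kernel refuses further
             * commitments and the program gets a clean error. */
            printf("malloc failed after %zu MB committed\n",
                   total / (1024 * 1024));
            return 0;
        }
        memset(p, 1, chunk);             /* force real pages behind it */
        total += chunk;
        printf("%zu MB touched\n", total / (1024 * 1024));
        /* With overcommit on (mode 0), this loop never sees a NULL;
         * it dies with SIGKILL from the OOM killer instead (or some
         * other process does, e.g. the postmaster). */
    }
}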
David Lang