From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Bruce Momjian <bruce(at)momjian(dot)us>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Josh Berkus <josh(at)agliodbs(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Magnus Hagander <magnus(at)hagander(dot)net>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Claudio Freire <klaussfreire(at)gmail(dot)com>, Merlin Moncure <mmoncure(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: proposal: Set effective_cache_size to greater of .conf value, shared_buffers
Date: 2014-05-15 18:58:10
Message-ID: CA+TgmoZJjcij5BA_dWVWNiv2ZXvd88ObkEb220XmH4HBCVx-kw@mail.gmail.com
Lists: pgsql-hackers
On Thu, May 15, 2014 at 8:06 AM, Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> On Tue, May 6, 2014 at 11:15:17PM +0100, Simon Riggs wrote:
>> > Well, for what it's worth, I've encountered systems where setting
>> > effective_cache_size too low resulted in bad query plans, but I've
>> > never encountered the reverse situation.
>>
>> I agree with that.
>>
>> Though that misses my point, which is that you can't know that all of
>> that memory is truly available on a server with many concurrent users.
>> Choosing settings that undercost memory-intensive plans is not the
>> best choice for a default strategy in a mixed workload when cache may
>> be better used elsewhere, even if such settings make sense for some
>> individual users.
>
> This is the same problem we had with auto-tuning work_mem, in that we
> didn't know what other concurrent activity was happening. Seems we need
> concurrent activity detection before auto-tuning work_mem and
> effective_cache_size.
I think it's worse than that: we don't even know what else is
happening *in the same query*. For example, look at this:
http://www.postgresql.org/message-id/16161.1324414006@sss.pgh.pa.us
That's pretty awful, and it's just one example of a broader class of
problems that we haven't even tried to solve. We really need a way to
limit memory usage on a per-query basis rather than a per-node basis.
For example, consider a query plan that needs to do four sorts. If
work_mem = 64MB, we'll happily use 256MB total, 64MB for each sort.
Now, that might cause the system to swap: since there are four sorts,
maybe we ought to have used only 16MB per sort, and switched to a heap
sort if that wasn't enough. But it's even subtler than that: if we
had known when building the query plan that we only had 16MB per sort
rather than 64MB per sort, we would potentially have estimated higher
costs for those sorts in the first place, which might have led to a
different plan that needed fewer sorts to begin with.
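To make the per-node accounting concrete, here's a small sketch (the
tables, sizes, and query are invented purely for illustration, and the
exact plan shape is of course up to the planner):

-- Hypothetical tables, just to coax the planner into a plan with
-- several memory-hungry nodes.
CREATE TABLE t1 (a int);
CREATE TABLE t2 (b int);
INSERT INTO t1 SELECT generate_series(1, 1000000);
INSERT INTO t2 SELECT generate_series(1, 1000000);
ANALYZE t1;
ANALYZE t2;

SET work_mem = '64MB';  -- a limit per sort/hash node, not per query

EXPLAIN (ANALYZE)
SELECT s1.a
FROM (SELECT a FROM t1 ORDER BY a) s1
JOIN (SELECT b FROM t2 ORDER BY b) s2 ON s1.a = s2.b
ORDER BY s1.a DESC;

-- If the chosen plan ends up with, say, four Sort/Hash nodes, each one
-- is separately entitled to 64MB, so peak memory for the query can
-- approach 4 * 64MB = 256MB rather than 64MB.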
When you start to try to balance memory usage across multiple
backends, things get even more complicated. If the first query that
starts up is allowed to use all the available memory, and we respond
to that by lowering the effective value of work_mem to something very
small, a second query that shows up a bit later might choose a very
inefficient plan as a result. That in turn might cause heavy I/O load
on the system for a long time, making the first query run very slowly.
We might have been better off just letting the first query finish,
and then running the second one (with a much better plan) after it was
done. Or, maybe we should have only let the first query take a
certain fraction (half? 10%?) of the available memory, so that there
was more left for the second guy. But that could be wrong too - it
might cause the first plan to be unnecessarily inefficient when nobody
was planning to run any other queries anyway. Plus, DBAs hate it when
plans change on them unexpectedly, so anything that involves a
feedback loop between current utilization and query plans will be
unpopular with some people for that reason.
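As a back-of-the-envelope version of the fixed-fraction idea (all
numbers here are invented for illustration, not a recommendation):

-- Suppose we treat roughly 8GB of RAM as available for query memory
-- and cap each backend at 10% of it:
--     8GB * 0.10 = 800MB per backend
-- If a typical plan contains up to 4 sort/hash nodes, that suggests
-- something like 800MB / 4 = 200MB per node:
SET work_mem = '200MB';
-- With one active backend this leaves most of the 8GB idle; with
-- twenty active backends it overcommits badly, which is exactly the
-- dilemma described above.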
These are hard problems.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company