From: Gregory Stark <stark(at)enterprisedb(dot)com>
To: Grzegorz Jaśkiewicz <gryzman(at)gmail(dot)com>
Cc: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>, Scott Carey <scott(at)richrelevance(dot)com>, "Jignesh K(dot) Shah" <J(dot)K(dot)Shah(at)sun(dot)com>
Subject: Re: Proposal of tunable fix for scalability of 8.4
Date: 2009-03-12 17:09:49
Message-ID: 87k56u652a.fsf@oxford.xeocode.com
Lists: pgsql-performance
Grzegorz Jaśkiewicz <gryzman(at)gmail(dot)com> writes:
> So please, don't say that this doesn't make sense because he tested it
> against ram disc. That was precisely the point of exercise.
What people are tip-toeing around saying, which I'll just say right out in the
most provocative way, is that Jignesh has simply *misconfigured* the system.
He's contrived to artificially create a lot of unnecessary contention.
Optimizing the system to reduce the cost of that artificial contention at the
expense of a properly configured system would be a bad idea.
It's misconfigured because there are more runnable threads than there are
cpus. A lot more. 15 times as many as necessary. If users couldn't run
connection poolers on their own, the right approach for us to address this
contention would be to build one into Postgres, not to re-engineer the
internals around the misuse.
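To make that concrete, here is a rough sketch (illustrative only, with made-up names and numbers, not PostgreSQL code and not the pooler anyone is proposing) of what a pooler effectively buys you: a cap on how many workers are runnable at once, sized near the CPU count rather than the client count.

/* Illustrative sketch: many clients, but only about as many runnable
 * workers as CPUs at any moment. Compile with -lpthread. */
#include <pthread.h>
#include <semaphore.h>
#include <stdio.h>
#include <unistd.h>

#define CLIENTS 64              /* far more clients than CPUs */

static sem_t active_slots;      /* limits concurrently runnable work */

static void *
client(void *arg)
{
    (void) arg;
    sem_wait(&active_slots);    /* block until an "active slot" frees up */
    /* ... the actual per-transaction work would run here ... */
    sem_post(&active_slots);
    return NULL;
}

int
main(void)
{
    long        ncpu = sysconf(_SC_NPROCESSORS_ONLN);
    pthread_t   threads[CLIENTS];

    if (ncpu < 1)
        ncpu = 1;
    sem_init(&active_slots, 0, (unsigned int) ncpu);

    for (int i = 0; i < CLIENTS; i++)
        pthread_create(&threads[i], NULL, client, NULL);
    for (int i = 0; i < CLIENTS; i++)
        pthread_join(threads[i], NULL);

    printf("ran %d clients with at most %ld runnable at once\n",
           CLIENTS, ncpu);
    return 0;
}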
Ram-resident use cases are entirely valid and worth testing, but in those use
cases you would want to have about as many processes as you have processors.
The use case where having a larger number of connections than processors makes
sense is when they're blocked on disk i/o (or network i/o or whatever else
other than cpu).
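A common rule of thumb (not something spelled out in this message, but it captures the same intuition): size the pool near the processor count when backends are cpu-bound, and scale it up by how much of their time they spend blocked. A small illustrative sketch:

/* Sketch of the sizing intuition; the formula is a general rule of thumb,
 * not a prescription from this thread. */
#include <stdio.h>
#include <unistd.h>

/* If each backend spends wait_fraction of its time blocked on disk or
 * network i/o, roughly ncpu / (1 - wait_fraction) backends keep the CPUs
 * busy without piling up runnable processes. */
static long
suggested_backends(long ncpu, double wait_fraction)
{
    if (wait_fraction < 0.0)
        wait_fraction = 0.0;
    if (wait_fraction > 0.9)
        wait_fraction = 0.9;            /* keep the estimate bounded */
    return (long) (ncpu / (1.0 - wait_fraction) + 0.5);
}

int
main(void)
{
    long        ncpu = sysconf(_SC_NPROCESSORS_ONLN);

    if (ncpu < 1)
        ncpu = 1;
    printf("ram-resident (no i/o wait):  %ld backends\n",
           suggested_backends(ncpu, 0.0));
    printf("50%% of time blocked on i/o:  %ld backends\n",
           suggested_backends(ncpu, 0.5));
    return 0;
}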
And having it be configurable doesn't mean that it has no cost. Having a test
of a user-settable dynamic variable in the middle of a low-level routine could
very well have some cost. Just the extra code would have some cost in reduced
cache efficiency. It could be that branch prediction and so on save us, but that
remains to be proven.
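For concreteness, here is the shape of the thing being worried about, with entirely hypothetical names (this is not actual PostgreSQL source): a user-settable knob tested inside a low-level lock routine, adding a branch and extra instructions to a hot path even when the knob is off.

/* Sketch only -- all names are hypothetical. A user-settable variable is
 * tested inside a low-level lock routine: an extra branch on every
 * acquisition, plus extra code taking up instruction-cache space even
 * when the setting is disabled. */
static int          lock_tunable = 0;   /* imagined GUC-style setting */
static volatile int hot_lock = 0;

static void
acquire_hot_lock(void)
{
    if (lock_tunable > 0)               /* the extra test in the hot path */
    {
        /* alternate back-off / queueing behaviour would live here */
    }
    while (__sync_lock_test_and_set(&hot_lock, 1))
        ;                               /* spin until the lock is free */
}

static void
release_hot_lock(void)
{
    __sync_lock_release(&hot_lock);
}

int
main(void)
{
    acquire_hot_lock();
    release_hot_lock();
    return 0;
}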
And as always the question would be whether the code designed for this
misconfigured setup is worth the maintenance effort if it's not helping
properly configured setups. Consider, for example, that any work with dtrace to
optimize locks under properly configured setups would lead us to make changes
that would have to be tested twice, once with and once without this option.
What do we do if dtrace says some unrelated change helps systems with this
option disabled but hurts systems with it enabled?
--
Gregory Stark
EnterpriseDB http://www.enterprisedb.com
Ask me about EnterpriseDB's RemoteDBA services!