From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Admission Control
Date: 2010-06-25 20:42:09
Message-ID: AANLkTin0t4lsvYt12Q9hJeETTyfVbYldYySLdm2K9V-G@mail.gmail.com
Lists: pgsql-hackers

On Fri, Jun 25, 2010 at 4:10 PM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> On 6/25/10 12:15 PM, Robert Haas wrote:
>> I think a good admission control system for memory would be huge for
>> us. There are innumerable threads on pgsql-performance where we tell
>> people to set work_mem to a tiny value (like 4MB or 16MB) because any
>> higher value risks driving the machine into swap in the event that
>> they get an unusually large number of connections or those connections
>> issue queries that require an unusual number of hashes or sorts.
>
> Greenplum did this several years ago with the Bizgres project; it had a
> resource control manager which was made available to PostgreSQL core.
> However, it would have required a large and unpredictable amount of work
> to make it compatible with OLTP workloads.
>
> The problem with centralized resource control is the need for
> centralized locking on requests for resources. That forces transactions
> to be serialized in order to make sure resources are not
> double-allocated. This isn't much of a problem in a DW application, but
> in a web app with thousands of queries per second it's deadly.
> Performance engineering for PostgreSQL over the last 7 years has been
> partly about eliminating centralized locking; we don't want to add new
> locking.
I haven't seen the Greenplum code - how did it actually work? The
mechanism I just proposed would (except in the case of an overloaded
system) only require holding a lock for long enough to test and update
a single integer in shared memory, which doesn't seem like it would
cause a serious serialization problem. I might be missing something,
or it might suck for lots of other reasons, but if we already know
that then let's try to be more specific about what the problems are.
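
To illustrate what I have in mind, here's a standalone sketch (ordinary
pthreads rather than PostgreSQL's locking primitives, and every name in
it is made up). The point is just how small the critical section is: on
a non-overloaded system a backend tests and bumps one integer and
immediately releases the lock.

    #include <pthread.h>
    #include <stdbool.h>

    typedef struct
    {
        pthread_mutex_t lock;       /* protects the counters below */
        pthread_cond_t  slot_freed; /* signaled when a backend releases */
        int             active;     /* backends currently admitted */
        int             limit;      /* admission threshold */
    } AdmissionGate;

    /*
     * Fast path: test and update a single integer under the lock.
     * Returns false instead of blocking if the gate is full.
     */
    static bool
    admission_try_acquire(AdmissionGate *gate)
    {
        bool admitted = false;

        pthread_mutex_lock(&gate->lock);
        if (gate->active < gate->limit)
        {
            gate->active++;
            admitted = true;
        }
        pthread_mutex_unlock(&gate->lock);

        return admitted;
    }

    /*
     * Slow path: block until a slot frees up.  Only reached when the
     * system is already overloaded.
     */
    static void
    admission_acquire(AdmissionGate *gate)
    {
        pthread_mutex_lock(&gate->lock);
        while (gate->active >= gate->limit)
            pthread_cond_wait(&gate->slot_freed, &gate->lock);
        gate->active++;
        pthread_mutex_unlock(&gate->lock);
    }

    static void
    admission_release(AdmissionGate *gate)
    {
        pthread_mutex_lock(&gate->lock);
        gate->active--;
        pthread_cond_signal(&gate->slot_freed);
        pthread_mutex_unlock(&gate->lock);
    }

    int
    main(void)
    {
        AdmissionGate gate = {
            PTHREAD_MUTEX_INITIALIZER, PTHREAD_COND_INITIALIZER, 0, 8
        };

        /* Try the fast path first; fall back to waiting for a slot. */
        if (!admission_try_acquire(&gate))
            admission_acquire(&gate);

        /* ... run the query ... */

        admission_release(&gate);
        return 0;
    }

The condition-variable wait in the slow path only comes into play once
the system is already saturated, at which point serializing admission
is the whole point.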
> That means that a realistic admissions control mechanism would need to
> operate based on projections and estimates and "best effort" policies.
> Not only is this mathematically more complex, it's an open question
> whether it puts us ahead of where we are now vis-a-vis underallocation
> of memory. Realistically, a lot of tuning and testing would be required
> before such a tool was actually an improvement.
Before today, that's the only approach I'd ever considered, but this
article made me rethink that. If you have a stream of queries that
can be run quickly with 1GB of memory and much more slowly with any
lesser amount, the only sensible thing to do is wait until there's a
GB of memory available for you to grab. What projection, estimate, or
"best effort" policy would arrive at even approximately the same result?
> Or, to put it another way: the "poor man's admission control" is a waste
> of time because it doesn't actually help performance. We're basically
> facing doing the hard version, or not bothering.
I think it's an oversimplification to group all approaches as "easy"
and "hard", and even more of an oversimplification to say that all of
the easy ones suck.
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company