From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Greg Smith <greg(at)2ndQuadrant(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Hope for a new PostgreSQL era?
Date: 2011-12-11 02:28:00
Message-ID: 4EE41530.4000908@ringerc.id.au
Lists: pgsql-general
On 12/10/2011 09:54 AM, Greg Smith wrote:
> I'm planning to put that instrumentation into the database directly,
> which is what people with Oracle background are asking for.
FWIW, even for folks like me who've come from a general OSS DB
background with a smattering of old Sybase and other primitive stuff,
it's still a concern. I know enough about Linux's guts, I/O and memory
behaviour to track down many issues, but it takes *time*, time I can't
spare from the numerous other things I also have to be doing. Right now
Pg performance fine-tuning is more of an expert skill (as you know!),
and for those with a weaker background in Linux/UNIX systems and tuning
it's a real barrier.
Thankfully my workload is so light I don't need to care; manual EXPLAIN
ANALYSE and auto_explain, along with vmstat/iotop, are enough for me.
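(For anyone wanting to reproduce the "manual EXPLAIN ANALYSE" part,
here's a minimal throwaway sketch. The driver, dbname and query are
just placeholders, not anything from my setup:

    # Minimal sketch: run EXPLAIN ANALYZE on one suspect query and
    # print the plan. psycopg2, the dbname and the query itself are
    # illustrative assumptions.
    import psycopg2

    conn = psycopg2.connect("dbname=mydb")
    cur = conn.cursor()
    # BUFFERS adds block I/O counts to each plan node (Pg 9.0+).
    cur.execute("EXPLAIN (ANALYZE, BUFFERS) "
                "SELECT * FROM orders WHERE customer_id = 42")
    for (line,) in cur.fetchall():
        print(line)
    # EXPLAIN ANALYZE really executes the statement, so roll back
    # in case it was DML rather than a plain SELECT.
    conn.rollback()

auto_explain does much the same thing automatically for anything
slower than auto_explain.log_min_duration.)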
> -How can the overhead of collecting the timing data be kept down?
> It's really high in some places. This is being worked out right now
> on pgsql-hackers, see "Timing overhead and Linux clock sources"
One thing I think would be interesting here would be to identify
slow queries (without doing detailed plan timing) and flag them for more
detailed timing if they're run again within <x> time. I suspect this
would only be practical with parameterised prepared statements, where
the query string stays the same, but it'd still be useful: essentially
automatically upgrading the log level for problem queries from plain
slow-query logging to auto_explain with EXPLAIN ANALYSE.
The main issue would be exempting queries that are *expected* to take
longer than the slow-query threshold, like reporting queries, where you
wouldn't want to pay that overhead. That should be handled by forgetting
about slow queries that aren't run again soon enough: a reporting query
gets flagged for EXPLAIN ANALYZE on its next run, but the flag expires
before that next run ever happens.
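To make the bookkeeping concrete, here's a toy sketch of the idea.
Nothing like this exists in Pg today; "fingerprint" stands in for
whatever stable query identity (say, a prepared statement name) a real
implementation would key on, and the thresholds are made up:

    # Toy sketch of the flag-and-forget idea above. Entirely
    # hypothetical; names and thresholds are invented.
    import time

    SLOW_THRESHOLD = 0.5   # seconds; the slow-query threshold
    FORGET_AFTER = 300.0   # seconds; the "<x> time" window above

    flagged = {}           # fingerprint -> time the slow run was seen

    def wants_detailed_timing(fingerprint, duration):
        """Return True if this execution should be logged with full
        EXPLAIN ANALYZE detail instead of plain slow-query logging."""
        now = time.time()
        seen = flagged.pop(fingerprint, None)
        if seen is not None and now - seen <= FORGET_AFTER:
            # Run again soon after a slow run: pay the timing
            # overhead this once.
            return True
        if duration >= SLOW_THRESHOLD:
            # First slow run: flag it. The flag silently expires, so
            # a nightly reporting query never gets the expensive
            # treatment.
            flagged[fingerprint] = now
        return False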
I don't actually need this myself; it's just something I've been
thinking about as a way to reduce the admin load of identifying and
tuning problem queries.
> I feel this will increasingly be the top blocker for performance
> sensitive deployments in the coming year, people used to having these
> tools in Oracle cannot imagine how they would operate without them.
Yep, especially since there's nothing in Pg to manage user/query
priorities for I/O or CPU, so the ability to actively manage performance
problems from the outside is particularly crucial. You'll always want to
do that anyway, and it's probably _better_ than relying on workload
priorities, especially since recent comments suggest that even on
Oracle those priority features aren't what you'd call friendly.
Personally I'd choose good performance monitoring over user/query
priorities any day. With good perf monitoring I can script from the
outside, I have a lot more control, I can send alerts, and so on; a
trivial example of what I mean is sketched below.
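As one sketch of that sort of outside scripting, assuming the contrib
pg_stat_statements module is loaded (it must be in
shared_preload_libraries, and column names and units vary a bit between
versions) and psycopg2 as the driver:

    # Trivial "script it from the outside" example: poll
    # pg_stat_statements and report the queries with the highest
    # cumulative execution time. psycopg2, the dbname and the
    # print-as-alerting stub are illustrative assumptions.
    import psycopg2

    def check_top_queries(conn, limit=5):
        cur = conn.cursor()
        cur.execute("""
            SELECT query, calls, total_time
            FROM pg_stat_statements
            ORDER BY total_time DESC
            LIMIT %s
        """, (limit,))
        for query, calls, total_time in cur.fetchall():
            # Wire this into nagios/email/whatever; printing
            # stands in for real alerting here.
            print("%10.1f %8d  %s" % (total_time, calls, query[:60]))

    conn = psycopg2.connect("dbname=mydb")
    check_top_queries(conn)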
--
Craig Ringer