| From: | Greg Stark <gsstark(at)mit(dot)edu> |
|---|---|
| To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
| Cc: | Greg Stark <gsstark(at)mit(dot)edu>, Robert Lor <Robert(dot)Lor(at)Sun(dot)COM>, Theo Schlossnagle <jesus(at)omniti(dot)com>, pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: Generic Monitoring Framework Proposal |
| Date: | 2006-06-20 15:31:19 |
| Message-ID: | 87d5d34zeg.fsf@stark.xeocode.com |
| Lists: | pgsql-hackers |
Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
> Greg Stark <gsstark(at)mit(dot)edu> writes:
> > What would be useful is instrumenting high level calls that can't be traced
> > without application guidance. For example, inserting a dtrace probe for each
> > SQL and each plan node. That way someone could get the same info as EXPLAIN
> > ANALYZE from a production server without having to make application
> > modifications (or suffer the gettimeofday overhead).
>
> My bogometer just went off again. How is something like dtrace going to
> magically get realtime information without reading the clock?
Sorry, I meant get the same info as EXPLAIN ANALYZE minus the timing.
I'm not familiar with DTrace first-hand, but I did have the impression it was
possible to get timing information. I don't know how much overhead that has,
but I wouldn't be surprised if a kernel-based profiling elapsed-time counter
on Sun hardware were cheaper than a general-purpose gettimeofday call on
commodity PC hardware.
For example, it could use a CPU cycle counter and have hooks in the scheduler
for saving and restoring the counter, avoiding the familiar gotchas with
processes being rescheduled across processors.
--
greg