From: "Luke Lonergan" <llonergan(at)greenplum(dot)com>
To: "Martijn van Oosterhout" <kleptog(at)svana(dot)org>, "Rocco Altier" <RoccoA(at)Routescape(dot)com>
Cc: pgsql-patches(at)postgresql(dot)org, "Simon Riggs" <simon(at)2ndquadrant(dot)com>
Subject: Re: [PATCH] Improve EXPLAIN ANALYZE overhead by
Date: 2006-05-11 04:16:43
Message-ID: C08808BB.232A6%llonergan@greenplum.com
Lists: pgsql-patches
Nice one Martijn - we have immediate need for this, as one of our sizeable
queries under experimentation took 3 hours without EXPLAIN ANALYZE, then
over 20 hours with it...
- Luke
On 5/9/06 2:38 PM, "Martijn van Oosterhout" <kleptog(at)svana(dot)org> wrote:
> On Tue, May 09, 2006 at 05:16:57PM -0400, Rocco Altier wrote:
>>> - To get this close it needs to get an estimate of the sampling
>>> overhead. It does this by a little calibration loop that is run
>>> once per backend. If you don't do this, you end up assuming all
>>> tuples take the same time as tuples with the overhead, resulting in
>>> nodes apparently taking longer than their parent nodes. Incidentally,
>>> I measured the overhead to be about 3.6us per tuple per node on my
>>> (admittedly slightly old) machine.
>>
>> Could this be deferred until the first explain analyze? So that we
>> aren't paying the overhead of the calibration in all backends, even the
>> ones that won't be explaining?
>
> If you look it's only done on the first call to InstrAlloc() which
> should be when you run EXPLAIN ANALYZE for the first time.
>
> In any case, the calibration is limited to half a millisecond (that's
> 500 microseconds), and it'll be less on fast machines.
>
> Have a nice day,
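
For anyone following along, here is a rough, self-contained sketch of the calibration idea Martijn describes above: time the timer itself in a short loop capped at about half a millisecond, then derive a per-call overhead figure. This is only an illustration, not the patch's actual code; the names below (calibrate_timer_overhead, CALIBRATION_CAP_NSEC) are made up for the example.

/*
 * Standalone sketch: estimate the per-call overhead of the system timer
 * by calling it repeatedly for roughly 500 microseconds.
 */
#include <stdio.h>
#include <time.h>

#define CALIBRATION_CAP_NSEC  (500 * 1000)   /* stop after ~0.5 ms */

static double
calibrate_timer_overhead(void)
{
    struct timespec start, now, dummy;
    long        calls = 0;
    long        elapsed_nsec;

    clock_gettime(CLOCK_MONOTONIC, &start);
    do
    {
        /* the thing being measured: one timestamp call */
        clock_gettime(CLOCK_MONOTONIC, &dummy);
        calls++;

        clock_gettime(CLOCK_MONOTONIC, &now);
        elapsed_nsec = (now.tv_sec - start.tv_sec) * 1000000000L
                     + (now.tv_nsec - start.tv_nsec);
    } while (elapsed_nsec < CALIBRATION_CAP_NSEC);

    /* each loop iteration issues two timestamp calls */
    return (double) elapsed_nsec / (double) (calls * 2);
}

int
main(void)
{
    double      overhead_nsec = calibrate_timer_overhead();

    printf("estimated timer overhead: %.1f ns per call (%.2f us)\n",
           overhead_nsec, overhead_nsec / 1000.0);
    return 0;
}

With a per-call figure like that in hand, an instrumented node's measured time can be reduced by roughly (sampled tuple count x per-tuple overhead), which is what keeps child nodes from appearing to take longer than their parents, the artifact Martijn mentions.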