From: Samuel Gendler <sgendler(at)ideasculptor(dot)com>
To: Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: poor performance when recreating constraints on large tables
Date: 2011-06-08 19:57:48
Message-ID: BANLkTim5GO2k0m7E2=KevdZrMPwH-9aCDg@mail.gmail.com
Lists: pgsql-performance
On Wed, Jun 8, 2011 at 12:53 PM, Kevin Grittner <Kevin(dot)Grittner(at)wicourts(dot)gov> wrote:
> Samuel Gendler <sgendler(at)ideasculptor(dot)com> wrote:
>
> > The planner knows how many rows are expected for each step of the
> > query plan, so it would be theoretically possible to compute how
> > far along it is in processing a query based on those estimates,
> > wouldn't it?
>
> And it is sometimes off by orders of magnitude. How much remaining
> time do you report when the number of rows actually processed so far
> is five times the estimated rows that the step would process? How
about after it chugs on from there to 20 times the estimated row
> count? Of course, on your next query it might finish after
> processing only 5% of the estimated rows....
>
Sure, but if a query is slow enough for a time estimate to be useful, odds
are good that stats that far out of whack would themselves be interesting
to whoever is watching the estimate, so falling back to some kind of 'N/A'
response once things have gotten out of whack wouldn't be unwarranted. Not
that I'm suggesting any of this is a particularly useful exercise; I'm just
playing with the original thought-experiment suggestion.
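
Just to make the thought experiment concrete, here's a rough sketch
(Python, with made-up names; nothing like this exists in the planner) of
how such a progress readout could behave, falling back to 'N/A' once the
actual row count blows past the estimate:

def progress_estimate(rows_processed, rows_estimated):
    # Hypothetical progress readout based on planner row estimates.
    # Returns a fraction in [0, 1], or None ('N/A') once the actual
    # row count exceeds the estimate -- at that point the stats are
    # clearly out of whack and the fraction is meaningless.
    if rows_estimated <= 0:
        return None
    fraction = rows_processed / rows_estimated
    if fraction > 1.0:
        return None  # estimate already blown; show 'N/A'
    return fraction

print(progress_estimate(50000, 100000))   # 0.5 -> show '50%'
print(progress_estimate(500000, 100000))  # 5x over the estimate -> None, show 'N/A'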
>
> -Kevin
>