From: "Richard Huxton" <dev(at)archonet(dot)com>
To: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: <pgsql-general(at)postgresql(dot)org>, "Bruce Momjian" <pgman(at)candle(dot)pha(dot)pa(dot)us>
Subject: Re: Re: Query not using index
Date: 2001-05-11 08:46:27
Message-ID: 006601c0d9f6$df3134a0$1001a8c0@archonet.com
Lists: pgsql-general
From: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>
> "Richard Huxton" <dev(at)archonet(dot)com> writes:
> > Why doesn't PG (or any other system afaik) just have a first guess, run
> > the query and then, if the costs are horribly wrong, cache the right result?
>
> ?? Knowing that your previous guess was wrong doesn't tell you what the
> right answer is, especially not for the somewhat-different question that
> the next query is likely to provide.
Surely if you used a seqscan on "where x=1" and only got 2 rows rather than
the 3000 you were expecting, the only alternative is to try an index?
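To make the point concrete, here is a minimal sketch of how a row-count estimate drives the seqscan-vs-indexscan choice. This is not PostgreSQL's actual cost model; the function name and cost constants are made up for illustration. A seqscan touches every row regardless, while an indexscan pays a higher per-row price (random I/O) but only for the rows it fetches, so a badly wrong estimate flips the decision:

```python
def choose_plan(est_rows: int, total_rows: int,
                seq_cost_per_row: float = 1.0,
                idx_cost_per_row: float = 4.0) -> str:
    """Pick the cheaper plan given an estimated result size.

    Hypothetical cost model: a seqscan reads all rows at a cheap
    sequential rate; an indexscan fetches only the estimated matches
    but at a higher per-row (random-access) cost.
    """
    seq_cost = total_rows * seq_cost_per_row
    idx_cost = est_rows * idx_cost_per_row
    return "indexscan" if idx_cost < seq_cost else "seqscan"

# Planner expects 3000 of 10000 rows to match "x = 1":
# 3000 * 4.0 = 12000 > 10000, so the seqscan looks cheaper.
print(choose_plan(3000, 10000))   # seqscan

# The query actually returned 2 rows; with the true count,
# 2 * 4.0 = 8 << 10000, the index would have won easily.
print(choose_plan(2, 10000))      # indexscan
```

The point of the "learning" suggestion is exactly this gap: after the first run you know the true row count, so feeding it back would have picked the index next time.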
> The real problem here is simply that PG hasn't been keeping adequately
> detailed statistics. I'm currently working on improving that for 7.2...
> see discussions over in pghackers if you are interested.
Thinking about it (along with Bruce's reply posted to the list) I guess the
difference is whether you gather the statistics up-front during a vacuum, or
build them as queries are used. You're always going to need *something* to
base your first guess on anyway - the "learning" would only help you in
those cases where the distribution of values wasn't a normal curve.
Anyway, given that I'm up to my neck in work at the moment and I don't
actually know what I'm talking about, I'll shut up and get back to keeping
clients happy :-)
- Richard Huxton