From: Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us>
To: Peter Eisentraut <peter_e(at)gmx(dot)net>
Cc: Jean-Christophe Boggio <cat(at)thefreecat(dot)org>, PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: vacuum analyze again...
Date: 2001-02-20 17:55:02
Message-ID: 200102201755.MAA10089@candle.pha.pa.us
Lists: pgsql-general
> Bruce Momjian writes:
>
> > No, we have no ability to randomly pick rows to use for estimating
> > statistics. Should we have this ability?
>
> How's reading a sufficiently large fraction of random rows going to be
> significantly faster than reading all rows? If you're just going to read
> the first n rows then that isn't really random, is it?
Ingres did this too, I thought. You could tell it to sample a random
fraction of the rows, perhaps 10%. On a large table that is often good
enough and much faster; often 2% is enough.
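To make the tradeoff concrete, here is a minimal Python sketch of the idea
(not PostgreSQL's actual ANALYZE code; the function names are my own):
Bernoulli-sample a fraction of a column's values, then scale the sample
counts back up to estimate per-value row counts, the kind of statistic a
planner uses for selectivity. In a real system the I/O savings come from
sampling whole disk pages rather than individual rows, which is how reading
2% of a table can be much cheaper than reading all of it.

    import random

    def sample_fraction(rows, fraction, seed=None):
        """Bernoulli-sample roughly `fraction` of `rows` in one pass.

        The row-level selection is modeled here; a real system would
        sample whole pages so only ~fraction of the table is read.
        """
        rng = random.Random(seed)
        return [r for r in rows if rng.random() < fraction]

    def estimate_stats(sample, population_size):
        """Estimate per-value row counts by scaling sample counts up
        to the full table size."""
        counts = {}
        for value in sample:
            counts[value] = counts.get(value, 0) + 1
        scale = population_size / max(len(sample), 1)
        return {v: c * scale for v, c in counts.items()}

    # Toy demo: a column of a million values with 100 distinct values,
    # each appearing 10,000 times.
    population = [i % 100 for i in range(1_000_000)]
    sample = sample_fraction(population, 0.02, seed=1)  # the "2%" case
    est = estimate_stats(sample, len(population))
    print(len(sample), est[0])  # ~20000 sampled rows; estimate near 10000

Even at 2%, the scaled estimate lands close to the true 10,000 rows per
value, which is usually accurate enough for a query planner.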
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026