From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: josh(at)agliodbs(dot)com, Greg Stark <gsstark(at)mit(dot)edu>, Marko Ristola <marko(dot)ristola(at)kolumbus(dot)fi>, pgsql-perform <pgsql-performance(at)postgresql(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [HACKERS] Bad n_distinct estimation; hacks suggested?
Date: 2005-04-25 15:23:00
Message-ID: 19276.1114442580@sss.pgh.pa.us
Lists: pgsql-hackers pgsql-performance
Simon Riggs <simon(at)2ndquadrant(dot)com> writes:
> My suggested hack for PostgreSQL is to have an option to *not* sample,
> just to scan the whole table and find n_distinct accurately.
> ...
> What price a single scan of a table, however large, when incorrect
> statistics could force scans and sorts to occur when they aren't
> actually needed ?
It's not just the scan --- you also have to sort, or something like
that, if you want to count distinct values. I doubt anyone is really
going to consider this a feasible answer for large tables.
regards, tom lane
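
Tom's point, that an exact n_distinct requires visiting every row and then sorting or hashing them, while sampling can misestimate badly, can be sketched with a small hypothetical Python example. The column data, the 1% sample size, and the naive scale-by-sampling-fraction estimator below are illustrative assumptions, not PostgreSQL's ANALYZE logic:

```python
import random

random.seed(42)

# Hypothetical stand-in for a 1M-row column: 100,000 distinct
# values, each repeated 10 times.
table = [v for v in range(100_000) for _ in range(10)]
random.shuffle(table)

# Exact n_distinct: every row must be visited and either sorted or
# hashed -- the full-scan-plus-sort cost the message points at.
exact = len(set(table))

# Naive sampled estimate: count distinct values in a 1% sample and
# scale up by the sampling fraction.  Because each value has 10
# copies, most values appear in the sample anyway, so the scaled
# figure lands far above the true 100,000.
sample = random.sample(table, 10_000)
scaled = len(set(sample)) * (len(table) / len(sample))

print(exact)   # true distinct count
print(scaled)  # naive estimate, a large overestimate on this data
```

The overshoot illustrates why distinct-value estimation from a sample is hard: duplicates inflate how often each value is seen, and a naive scaler cannot tell a heavily duplicated column from one full of singletons.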