From: Chris Bowlby <chris(at)pgsql(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Row sampling..
Date: 2004-03-29 15:07:44
Message-ID: 1080572864.17419.10.camel@morpheus.hub.org
Lists: pgsql-hackers
Hi All,
I'm trying to gain a solid understanding of how PostgreSQL decides
which rows to sample when gathering statistics on a table. Using
PostgreSQL 7.4's pg_stats view I can get a good overall picture of the
variation in a table, but I need to know how PostgreSQL makes its
choices about which rows to sample. I also noticed that I can raise a
column's statistics target as high as 1000, yet PostgreSQL still may
not sample all 1000 elements.
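[For context: ANALYZE does not read every row. It draws a random sample of the table (the sampling code lives in src/backend/commands/analyze.c and uses a Vitter-style reservoir algorithm), and the statistics target controls how many most-common values and histogram buckets are kept, not which rows get read, which is why a target of 1000 does not guarantee 1000 sampled elements. A minimal sketch of the reservoir-sampling idea, in Python rather than PostgreSQL's actual two-stage C implementation:]

```python
import random

def reservoir_sample(rows, k):
    """Keep a uniform random sample of k rows from a stream of unknown length.

    This is plain Algorithm R, shown only to illustrate the idea;
    PostgreSQL's analyze.c uses a more efficient variant.
    """
    sample = []
    for i, row in enumerate(rows):
        if i < k:
            sample.append(row)          # fill the reservoir first
        else:
            j = random.randint(0, i)    # j uniform over [0, i]
            if j < k:
                sample[j] = row         # replace with probability k/(i+1)
    return sample

# e.g. sample 30 "rows" from a 10,000-row "table"
table = list(range(10_000))
picked = reservoir_sample(table, 30)
```

[Every row has an equal chance of ending up in the reservoir, so the sample is uniform even though most rows are never retained.]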
Can someone help me gain a good understanding of that area of Postgres
so that I can make better optimization choices?
--
Chris Bowlby <chris(at)pgsql(dot)com>
PostgreSQL Inc.