From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Greg Stark <stark(at)mit(dot)edu>
Cc: Peter Geoghegan <pg(at)heroku(dot)com>, Jim Nasby <jim(at)nasby(dot)net>, Josh Berkus <josh(at)agliodbs(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ANALYZE sampling is too good
Date: 2013-12-11 00:58:12
Message-ID: CA+U5nM+3M7PfwrZs3ivt_oCuF-yTRaaA1Wq=u4GDTjKJxr2Kpg@mail.gmail.com
Lists: pgsql-hackers
On 11 December 2013 00:44, Greg Stark <stark(at)mit(dot)edu> wrote:
> On Wed, Dec 11, 2013 at 12:40 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
>> When we select a block we should read all rows on that block, to help
>> identify the extent of clustering within the data.
>
> So how do you interpret the results of the sample read that way that
> doesn't introduce bias?
Yes, it is not a perfect statistical sample; all sampling is subject
to an error that is data-dependent.
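As a rough illustration of that data-dependent error (a standalone
Python sketch with made-up numbers, not the ANALYZE code): when values
are clustered by block, every row taken from the same block is highly
correlated, so the effective sample size is closer to the number of
blocks read than to the number of rows read.

# Standalone illustration (not PostgreSQL code): compare row-level sampling
# with "read every row on a sampled block" sampling on a table whose values
# are clustered by block.
import random
import statistics

random.seed(0)

ROWS_PER_BLOCK = 100
N_BLOCKS = 1000
SAMPLE_ROWS = 3000          # row budget, same for both strategies

# Build a clustered "table": each block holds rows with similar values.
table = []                  # list of blocks, each a list of row values
for b in range(N_BLOCKS):
    centre = random.gauss(0, 100)
    table.append([centre + random.gauss(0, 1) for _ in range(ROWS_PER_BLOCK)])

all_rows = [v for block in table for v in block]
true_mean = statistics.mean(all_rows)

def row_sample_mean():
    # Simple random sample of individual rows.
    return statistics.mean(random.sample(all_rows, SAMPLE_ROWS))

def block_sample_mean():
    # Sample whole blocks and read every row on each sampled block.
    blocks = random.sample(table, SAMPLE_ROWS // ROWS_PER_BLOCK)
    return statistics.mean([v for block in blocks for v in block])

row_err = statistics.pstdev([row_sample_mean() - true_mean for _ in range(200)])
blk_err = statistics.pstdev([block_sample_mean() - true_mean for _ in range(200)])

print(f"std error, row sampling:   {row_err:.2f}")
print(f"std error, block sampling: {blk_err:.2f}")  # much larger when data is clustered

On uniformly shuffled data the two estimates behave almost identically;
the gap only opens up when rows within a block are alike, which is
exactly the clustering information block sampling would let us measure.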
I'm happy for this to be an option we can enable or not, with a default
that maintains current behaviour, since otherwise we might expect some
plan instability.
I would like to be able to
* allow ANALYZE to run faster in some cases
* increase/decrease sample size when it matters
* have the default sample size vary according to the size of the
table, i.e. a proportional sample
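As a rough sketch of the proportional-sample idea (the fraction and
clamp values below are placeholders for illustration, not a concrete
proposal or existing GUCs):

# One possible way a proportional sample size could be computed:
# a fixed fraction of the table, clamped so tiny tables are still
# sampled sensibly and huge tables stay bounded.
def proportional_sample_rows(reltuples, fraction=0.01,
                             min_rows=30_000, max_rows=3_000_000):
    """Return the number of rows ANALYZE would aim to sample."""
    return int(min(max(reltuples * fraction, min_rows), max_rows))

for n in (10_000, 1_000_000, 100_000_000, 10_000_000_000):
    print(f"{n:>14,} rows -> sample {proportional_sample_rows(n):>9,}")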
--
Simon Riggs http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services