From: Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Greg Stark <stark(at)mit(dot)edu>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Peter Geoghegan <pg(at)heroku(dot)com>, Jim Nasby <jim(at)nasby(dot)net>, Josh Berkus <josh(at)agliodbs(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ANALYZE sampling is too good
Date: 2013-12-11 18:22:59
Message-ID: 52A8AD83.7000608@archidevsys.co.nz
Lists: pgsql-hackers
On 12/12/13 06:22, Tom Lane wrote:
> I wrote:
>> Hm. You can only take N rows from a block if there actually are at least
>> N rows in the block. So the sampling rule I suppose you are using is
>> "select up to N rows from each sampled block" --- and that is going to
>> favor the contents of blocks containing narrower-than-average rows.
> Oh, no, wait: that's backwards. (I plead insufficient caffeine.)
> Actually, this sampling rule discriminates *against* blocks with
> narrower rows. You previously argued, correctly I think, that
> sampling all rows on each page introduces no new bias because row
> width cancels out across all sampled pages. However, if you just
> include up to N rows from each page, then rows on pages with more
> than N rows have a lower probability of being selected, but there's
> no such bias against wider rows. This explains why you saw smaller
> values of "i" being undersampled.
>
> Had you run the test series all the way up to the max number of
> tuples per block, which is probably a couple hundred in this test,
> I think you'd have seen the bias go away again. But the takeaway
> point is that we have to sample all tuples per page, not just a
> limited number of them, if we want to change it like this.
>
> regards, tom lane
>
>
Surely what we want is to sample a constant fraction of the rows on each
page (though in practice, of course, you can only take an integral number
of rows from a page)? The simplest way, as Tom suggests, is to use all the
rows on each page.
However, if you wanted the same number of rows from a greater number of
pages, you could (for example) select a quarter of the rows from each
page. When that works out to a fractional number of rows, take the
integral part, plus one extra row with probability equal to the
fractional remainder (here 0.25).
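A rough sketch of that fractional rule, just to be concrete (Python purely
for illustration; the default fraction and the function name are made up,
not anything in the backend):

    import math
    import random

    def rows_to_take(rows_on_page, fraction=0.25):
        # Aim for a constant fraction of each page's rows.
        target = fraction * rows_on_page      # e.g. 0.25 * 57 = 14.25
        n = math.floor(target)                # take the integral part...
        if random.random() < target - n:      # ...plus one extra row with
            n += 1                            # probability = the remainder
        return n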
Either way, if it is determined that you need N rows in total, then keep
selecting pages at random (but never use the same page more than once)
until you have sampled at least N rows.
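And an equally hand-wavy sketch of that outer loop (again illustrative
only; take_from_page stands in for whichever per-page sampling rule is
used, e.g. all rows or a fraction as above):

    import random

    def sample_rows(total_pages, n_wanted, take_from_page):
        # Visit pages in random order, never the same page twice,
        # until at least n_wanted rows have been gathered.
        pages = list(range(total_pages))
        random.shuffle(pages)
        sample = []
        while pages and len(sample) < n_wanted:
            sample.extend(take_from_page(pages.pop()))
        return sample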
Cheers,
Gavin