From: Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>
To: Kevin Grittner <kgrittn(at)ymail(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Greg Stark <stark(at)mit(dot)edu>
Cc: Simon Riggs <simon(at)2ndquadrant(dot)com>, Peter Geoghegan <pg(at)heroku(dot)com>, Jim Nasby <jim(at)nasby(dot)net>, Josh Berkus <josh(at)agliodbs(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ANALYZE sampling is too good
Date: 2013-12-11 19:39:31
Message-ID: 52A8BF73.3050808@archidevsys.co.nz
Lists: pgsql-hackers
On 12/12/13 08:31, Kevin Grittner wrote:
> Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz> wrote:
>
>> For example, assume 1000 rows of 200 bytes and 1000 rows of 20 bytes,
>> using 400-byte pages. In the pathologically worst case, assuming
>> maximum packing density and that no page holds both row types: the
>> large rows would occupy 500 pages and the small rows 50 pages. So if
>> you selected 11 pages at random, you would get about 10 pages of large
>> rows and about one page of small rows!
> With 10 * 2 = 20 large rows, and 1 * 20 = 20 small rows.
>
> --
> Kevin Grittner
> EDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
Sorry, I've simply come up with well-argued nonsense!
Kevin, you're dead right.
Cheers,
Gavin
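
[A minimal Python sketch of the sampling arithmetic above, for the record. The 400-byte pages, row sizes, and 11-page sample are the numbers from the example; the expected-value calculation assumes pages are picked uniformly at random, which is what makes the expected row counts come out equal, as Kevin notes.]

# Worked arithmetic for the example above: 1000 rows of 200 bytes and
# 1000 rows of 20 bytes, packed into 400-byte pages with no mixed pages.
PAGE_SIZE = 400

large_rows, large_size = 1000, 200
small_rows, small_size = 1000, 20

rows_per_large_page = PAGE_SIZE // large_size          # 2 rows/page
rows_per_small_page = PAGE_SIZE // small_size          # 20 rows/page

large_pages = large_rows // rows_per_large_page        # 500 pages
small_pages = small_rows // rows_per_small_page        # 50 pages
total_pages = large_pages + small_pages                # 550 pages

sample = 11  # pages selected uniformly at random

# Expected pages of each kind in the sample.
exp_large_pages = sample * large_pages / total_pages   # ~10 pages
exp_small_pages = sample * small_pages / total_pages   # ~1 page

# Expected rows sampled: pages of each kind times rows per page.
print(exp_large_pages * rows_per_large_page)           # 20.0 large rows
print(exp_small_pages * rows_per_small_page)           # 20.0 small rows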