From: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
To: Jim Nasby <jim(at)nasby(dot)net>
Cc: Greg Stark <stark(at)mit(dot)edu>, Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>, Josh Berkus <josh(at)agliodbs(dot)com>, Robert Haas <robertmhaas(at)gmail(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: ANALYZE sampling is too good
Date: 2013-12-09 21:47:38
Message-ID: 52A63A7A.6090606@vmware.com
Lists: pgsql-hackers
On 12/09/2013 11:35 PM, Jim Nasby wrote:
> On 12/8/13 1:49 PM, Heikki Linnakangas wrote:
>> On 12/08/2013 08:14 PM, Greg Stark wrote:
>>> The whole accounts table is 1.2GB and contains 10 million rows. As
>>> expected with rows_per_block set to 1 it reads 240MB of that
>>> containing nearly 2 million rows (and takes nearly 20s -- doing a full
>>> table scan for select count(*) only takes about 5s):
>>
>> One simple thing we could do, instead of or in addition to changing the
>> algorithm, is to issue posix_fadvise() calls for the blocks we're
>> going to read. It should at least be possible to match the speed of a
>> plain sequential scan that way.
>
> Hrm... maybe it wouldn't be very hard to use async IO here either? I'm
> thinking it wouldn't be very hard to do the stage 2 work in the callback
> routine...
Yeah, other than the fact that we have no infrastructure for asynchronous
I/O anywhere in the backend. If we had that, we could easily use it
here. I doubt it would be much better than posix_fadvise'ing the blocks,
though.
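
For illustration, here's a minimal standalone sketch of what the
posix_fadvise approach would look like: hand the kernel the whole list of
sampled blocks up front with POSIX_FADV_WILLNEED, then read them as usual.
The block numbers, 8 kB block size and file handling here are hypothetical
stand-ins, not the actual ANALYZE/smgr code:

    #include <fcntl.h>
    #include <stdio.h>
    #include <unistd.h>

    #define BLOCK_SIZE 8192         /* assume 8 kB blocks, as in PostgreSQL */

    /* Hint the kernel that we will soon read each of the given blocks. */
    static void
    prefetch_blocks(int fd, const long *blocks, int nblocks)
    {
        for (int i = 0; i < nblocks; i++)
            (void) posix_fadvise(fd, (off_t) blocks[i] * BLOCK_SIZE,
                                 BLOCK_SIZE, POSIX_FADV_WILLNEED);
    }

    int
    main(int argc, char **argv)
    {
        long    blocks[] = {3, 17, 42, 99};     /* hypothetical sample */
        int     nblocks = sizeof(blocks) / sizeof(blocks[0]);
        char    buf[BLOCK_SIZE];
        int     fd;

        if (argc < 2 || (fd = open(argv[1], O_RDONLY)) < 0)
        {
            fprintf(stderr, "usage: %s <datafile>\n", argv[0]);
            return 1;
        }

        /* Issue all the advice first, then do the actual reads. */
        prefetch_blocks(fd, blocks, nblocks);

        for (int i = 0; i < nblocks; i++)
        {
            if (pread(fd, buf, BLOCK_SIZE, (off_t) blocks[i] * BLOCK_SIZE) < 0)
                perror("pread");
        }

        close(fd);
        return 0;
    }

The point is simply that the kernel can start fetching the scattered blocks
in parallel while we process the ones that have already arrived, which is
most of the benefit async I/O would give us here anyway.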
- Heikki