From: Simon Riggs <simon(at)2ndquadrant(dot)com>
To: Gavin Sherry <swm(at)linuxworld(dot)com(dot)au>
Cc: David Boreham <david_list(at)boreham(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: ice-broker scan thread
Date: 2005-11-29 22:19:27
Message-ID: 1133302767.2906.462.camel@localhost.localdomain
Lists: pgsql-hackers
On Wed, 2005-11-30 at 08:30 +1100, Gavin Sherry wrote:
> On Tue, 29 Nov 2005, David Boreham wrote:
>
> >
> > >By default when you use aio you get the version in libc (-lrt IIRC),
> > >which has the issue I mentioned, probably because it's optimised for
> > >the lots-of-network-connections type of program where multiple
> > >outstanding requests on a single fd are not meaningful. You can,
> > >however, link in some other library which gives you kernel support.
> > >However, I don't have a new enough kernel to have the kernel support,
> > >so I haven't tested that.
> > >
> > >
> > Actually, after reading up on the current state of things, I'm not sure you
> > can even get POSIX aio on top of kernel aio in Linux. There are also a
> > few limitations in the 2.6 aio implementation that might prove troublesome:
> > for example it only works with O_DIRECT.
> >
> > libaio gives userland access to the kernel aio api (which is different
> > from POSIX aio).
>
> Yes. The O_DIRECT issue is my biggest concern about Linux at the moment.
> That being said, the plan is to only pre-fetch the next N blocks, where N
> < 32, and to read them into the local buffer cache. In a situation where
> space in the cache is low (and prefetched pages might be pushed out before
> we even get to read them), we need to pass that information to the
> readahead mechanism so that it can reduce the number of blocks it
> prefetches.
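
To make the approach being described concrete, here is a minimal,
illustrative sketch of prefetching the next N blocks through the Linux
kernel aio interface (libaio). The file name, BLCKSZ, and NPREFETCH are
assumed values, error handling is omitted, and the aligned buffers reflect
the O_DIRECT requirement David mentions:

#define _GNU_SOURCE             /* for O_DIRECT */
#include <fcntl.h>
#include <libaio.h>             /* link with -laio */
#include <stdlib.h>
#include <unistd.h>

#define BLCKSZ    8192          /* assumed block size */
#define NPREFETCH   16          /* "next N blocks", N < 32 as above */

/* Queue reads for the next NPREFETCH blocks starting at start_block. */
static void
prefetch_blocks(const char *path, off_t start_block)
{
    int          fd = open(path, O_RDONLY | O_DIRECT);
    io_context_t ctx = 0;
    struct iocb  iocbs[NPREFETCH];
    struct iocb *iocbps[NPREFETCH];
    struct io_event events[NPREFETCH];
    void        *bufs[NPREFETCH];

    io_setup(NPREFETCH, &ctx);          /* create the kernel aio context */

    for (int i = 0; i < NPREFETCH; i++)
    {
        /* O_DIRECT requires block-aligned buffers */
        posix_memalign(&bufs[i], BLCKSZ, BLCKSZ);
        io_prep_pread(&iocbs[i], fd, bufs[i], BLCKSZ,
                      (start_block + i) * (long long) BLCKSZ);
        iocbps[i] = &iocbs[i];
    }

    io_submit(ctx, NPREFETCH, iocbps);  /* one syscall queues all N reads */

    /* ... do other work, then reap completions before using the data ... */
    io_getevents(ctx, NPREFETCH, NPREFETCH, events, NULL);

    io_destroy(ctx);
    close(fd);
    for (int i = 0; i < NPREFETCH; i++)
        free(bufs[i]);
}

The reads go straight into the caller's aligned buffers and bypass the OS
cache, which is why the O_DIRECT restriction matters for a design that
prefetches into a local buffer cache.
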
My understanding was that Linux at least has a reasonable readahead
mechanism that works on the scale you suggest.
I think it's fair to assume that anybody who wants this can afford
sufficient memory to make it worthwhile. Multiple processes per scan
implies either low numbers of users or I/O overkill.
Best Regards, Simon Riggs
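
For comparison with the kernel readahead mentioned above, an alternative
sketch (again illustrative; block size and prefetch depth are assumptions)
that simply hints the existing readahead machinery with posix_fadvise()
instead of issuing explicit aio:

#include <fcntl.h>

#define BLCKSZ    8192          /* assumed block size */
#define NPREFETCH   16          /* assumed prefetch depth */

/* Ask the kernel to start reading the next NPREFETCH blocks into its cache. */
static void
hint_readahead(int fd, off_t start_block)
{
    (void) posix_fadvise(fd,
                         start_block * (off_t) BLCKSZ,
                         NPREFETCH * (off_t) BLCKSZ,
                         POSIX_FADV_WILLNEED);
}

Unlike the aio path, POSIX_FADV_WILLNEED reads into the OS page cache (no
O_DIRECT restriction), so the data still has to be copied into the local
buffer cache by an ordinary read afterwards.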