Gerhard Wiesinger <lists(at)wiesinger(dot)com> writes:
> I have one idea which is not ideal, but may work and shouldn't take much
> effort to implement:
> As in the example above, we read B1-B5 and B7-B10 at a higher level,
> outside of normal buffer management, using large request sizes (e.g.
> where hash index scans and sequential scans are done). Since the blocks
> are then in cache, normal buffer management is very fast:
> 1.) B1-B5: 5*8k=40k
> 2.) B7-B10: 4*8k=32k
> So for 1.) we read:
> B1-B5 in one 40k request (typically from disk); afterwards we re-read
> B1, B2, B3, B4, B5 in 8k chunks, this time from cache.
Is this really different from, or better than, telling the OS we'll need
those blocks soon via posix_fadvise?
regards, tom lane