| From: | Bruce Momjian <pgman(at)candle(dot)pha(dot)pa(dot)us> |
|---|---|
| To: | Kenneth Marshall <ktm(at)is(dot)rice(dot)edu> |
| Cc: | Simon Riggs <simon(at)2ndquadrant(dot)com>, Qingqing Zhou <zhouqq(at)cs(dot)toronto(dot)edu>, pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: Warm-cache prefetching |
| Date: | 2005-12-09 15:37:25 |
| Message-ID: | 200512091537.jB9FbPN07379@candle.pha.pa.us |
| Lists: | pgsql-hackers |
Kenneth Marshall wrote:
> The main benefit of prefetching optimization is to allow just-
> in-time data delivery to the processor. There are numerous papers
> illustrating the dramatic increase in data throughput from using
> data structures designed to take advantage of prefetching. Factors
> of 3-7 can be realized, and this can greatly increase database
> performance. The first step needed to take advantage of the ability
> of prefetching to reduce memory latency is to design the index
> page layout with internal blocking at the cache-line size.
> Then issue prefetch instructions for the memory you are going
> to need to process the index page, far enough in advance that
> it is in a cache line by the time it is needed.
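To make the idea concrete, here is a minimal, self-contained sketch of the layout-plus-prefetch technique described above. The `IndexEntry` struct, the 64-byte cache-line size, and the prefetch distance are illustrative assumptions, not PostgreSQL's actual index page format, and GCC's `__builtin_prefetch` stands in for whatever prefetch primitive a real implementation would use.

```c
/*
 * Sketch: entries laid out in cache-line-sized blocks; while scanning one
 * block, hint the block we will need a few iterations from now.
 * All names and sizes here are illustrative assumptions.
 */
#include <stddef.h>
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>

#define CACHE_LINE      64      /* assumed cache-line size in bytes */
#define PREFETCH_DIST   4       /* how many blocks ahead to prefetch */

typedef struct IndexEntry       /* hypothetical 16-byte index entry */
{
    uint64_t    key;
    uint64_t    tid;
} IndexEntry;

#define ENTRIES_PER_LINE (CACHE_LINE / sizeof(IndexEntry))

static uint64_t
scan_entries(const IndexEntry *entries, size_t nentries)
{
    uint64_t    sum = 0;

    for (size_t i = 0; i < nentries; i++)
    {
        /* On entering a new cache-line-sized block, hint the one we
         * will need PREFETCH_DIST blocks from now. */
        if (i % ENTRIES_PER_LINE == 0)
        {
            size_t  ahead = i + PREFETCH_DIST * ENTRIES_PER_LINE;

            if (ahead < nentries)
                __builtin_prefetch(&entries[ahead], 0, 1);
        }

        sum += entries[i].key;  /* stand-in for real index-page work */
    }
    return sum;
}

int
main(void)
{
    size_t      n = 1 << 20;
    IndexEntry *entries = calloc(n, sizeof(IndexEntry));

    if (entries == NULL)
        return 1;
    for (size_t i = 0; i < n; i++)
        entries[i].key = i;

    printf("sum = %llu\n", (unsigned long long) scan_entries(entries, n));
    free(entries);
    return 0;
}
```

Compiled with gcc or clang at -O2, the builtin becomes a hardware prefetch hint on platforms that have one and a no-op elsewhere; whether that hint pays for the added code complexity in a real database workload is exactly the question below.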
I can see that being useful for a single-user application that doesn't
have locking or I/O bottlenecks and doesn't have a multi-stage design
like a database. Do we do enough of that kind of processing that we
would _see_ an improvement, or would our code just become more complex,
making it harder to apply algorithmic optimizations later?
--
Bruce Momjian | http://candle.pha.pa.us
pgman(at)candle(dot)pha(dot)pa(dot)us | (610) 359-1001
+ If your life is a hard drive, | 13 Roberts Road
+ Christ can be your backup. | Newtown Square, Pennsylvania 19073