From: "Bucky Jordan" <bjordan(at)lumeta(dot)com>
To: "Luke Lonergan" <llonergan(at)greenplum(dot)com>, "Markus Schaber" <schabi(at)logix-tt(dot)com>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Large tables (was: RAID 0 not as fast as
Date: 2006-09-21 19:13:55
Message-ID: 78ED28FACE63744386D68D8A9D1CF5D4209A6E@MAIL.corp.lumeta.com
Lists: pgsql-performance
> > Do you think that adding some posix_fadvise() calls to the backend
> > to pre-fetch some blocks into the OS cache asynchronously could
> > improve that situation?
>
> Nope - this requires true multi-threading of the I/O; there need to be
> multiple seek operations running simultaneously. The current executor
> blocks on each page request, waiting for the I/O to happen before
> requesting the next page. The OS can't predict what random page is to
> be requested next.
>
> We can implement multiple scanners (already present in MPP), or we
> could implement AIO and fire off a number of simultaneous I/O requests
> for fulfillment.
So this might be a dumb question, but the above statements apply to the
cluster (e.g. postmaster) as a whole, not per Postgres
process/transaction, correct? So each transaction is blocked, waiting
for the main postmaster to retrieve the data in the order it was
requested (i.e., no multiple scanners/AIO)?
In this case, the only way to take full advantage of larger hardware
using stock Postgres would be to run multiple instances? (Which might
not be a bad idea, since it would set your application up to deal with
databases distributed across multiple servers...)
- Bucky