From: "Luke Lonergan" <llonergan(at)greenplum(dot)com>
To: mark(at)mark(dot)mielke(dot)cc
Cc: "Bucky Jordan" <bjordan(at)lumeta(dot)com>, "Spiegelberg, Greg" <gspiegelberg(at)cranel(dot)com>, "Joshua Drake" <jd(at)commandprompt(dot)com>, "Craig A(dot) James" <cjames(at)modgraph-usa(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Large tables (was: RAID 0 not as fast as expected)
Date: 2006-09-19 04:01:45
Message-ID: C134B9B9.31741%llonergan@greenplum.com
Lists: pgsql-performance
Mark,
On 9/18/06 8:45 PM, "mark(at)mark(dot)mielke(dot)cc" <mark(at)mark(dot)mielke(dot)cc> wrote:
> Does a tool exist yet to time this for a particular configuration?
We're considering building this into ANALYZE on a per-table basis. The
basic approach times sequential access as a page rate, then times random
seeks as a page rate, and takes the ratio of the two.
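A rough sketch of that measurement idea (this is not the actual ANALYZE code; the scratch-file setup, sizes, and names here are all hypothetical, and a real test file would need to be much larger than RAM so the OS page cache doesn't hide the seek cost):

```python
import os
import random
import tempfile
import time

PAGE = 8192      # PostgreSQL's default block size
NPAGES = 2000    # hypothetical sample size; far too small for real timings,
                 # used here only to illustrate the measurement itself

# Build a small scratch file of NPAGES zero-filled pages.
with tempfile.NamedTemporaryFile(delete=False) as f:
    f.write(b"\0" * (PAGE * NPAGES))
    path = f.name

def page_rate(fd, offsets):
    """Pages per second when reading one 8 KB page at each given offset."""
    t0 = time.perf_counter()
    for off in offsets:
        os.lseek(fd, off, os.SEEK_SET)
        os.read(fd, PAGE)
    return len(offsets) / (time.perf_counter() - t0)

fd = os.open(path, os.O_RDONLY)
# Sequential pass: consecutive pages in file order.
seq = page_rate(fd, [i * PAGE for i in range(NPAGES)])
# Random pass: the same number of pages at random offsets.
rnd = page_rate(fd, [random.randrange(NPAGES) * PAGE for _ in range(NPAGES)])
os.close(fd)
os.remove(path)

print(f"sequential: {seq:.0f} pages/s, random: {rnd:.0f} pages/s, "
      f"ratio: {seq / rnd:.2f}")
```

On a cached file like this one the two rates come out nearly equal; the interesting ratio only appears when the reads actually hit the disk.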
Since PG's heap scan is single-threaded, the seek rate is equivalent to
that of a single disk (even though RAID arrays may have many spindles);
typical random seek rates from within the backend are around 100-200 seeks
per second. That means that as sequential scan performance increases, as
happens when using large RAID arrays, random_page_cost will range roughly
linearly from 50 to 300 as the size of the RAID array grows.
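A back-of-the-envelope version of that arithmetic, assuming a single-disk rate of 150 seeks/s (within the 100-200 range quoted above) and a hypothetical 60 MB/s of sequential throughput per spindle:

```python
SEEKS_PER_SEC = 150      # assumed single-disk random seek rate (100-200/s above)
MBPS_PER_SPINDLE = 60    # hypothetical per-disk sequential throughput
PAGE_KB = 8              # PostgreSQL page size in KB

def random_page_cost(spindles):
    """Ratio of sequential page rate (scales with spindles) to the
    single-disk seek rate (fixed, since the heap scan is single-threaded)."""
    seq_pages_per_sec = spindles * MBPS_PER_SPINDLE * 1024 / PAGE_KB
    return seq_pages_per_sec / SEEKS_PER_SEC

for n in (1, 2, 4, 6):
    print(f"{n} spindles -> random_page_cost ~ {random_page_cost(n):.0f}")
```

With those assumed numbers, one spindle gives a ratio near 50 and six spindles near 300, which is the linear range described above.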
- Luke