Re: Reliability recommendations

From: Mark Kirkwood <markir(at)paradise(dot)net(dot)nz>
To: Luke Lonergan <llonergan(at)greenplum(dot)com>
Cc: Dan Gorman <dgorman(at)hi5(dot)com>, "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com>, "Craig A(dot) James" <cjames(at)modgraph-usa(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Reliability recommendations
Date: 2006-02-25 06:10:55
Message-ID: 43FFF4EF.6060601@paradise.net.nz
Lists: pgsql-performance

Luke Lonergan wrote:

>
> OK, how about some proof?
>
> In a synthetic test that writes 32GB of sequential 8k pages on a machine
> with 16GB of RAM:
> ========================= Write test results ==============================
> time bash -c "dd if=/dev/zero of=/dbfast1/llonergan/bigfile bs=8k
> count=2000000 && sync" &
> time bash -c "dd if=/dev/zero of=/dbfast3/llonergan/bigfile bs=8k
> count=2000000 && sync" &
>
> 2000000 records in
> 2000000 records out
> 2000000 records in
> 2000000 records out
>
> real 1m0.046s
> user 0m0.270s
> sys 0m30.008s
>
> real 1m0.047s
> user 0m0.287s
> sys 0m30.675s
>
> So that's 32,000 MB written in 60.05 seconds, which is 533MB/s sustained
> with two threads.
>
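As a quick sanity check on the quoted arithmetic (a sketch that takes the post's own figures at face value, using MB = 10^6 bytes as the post does):

```python
# Luke's write test: two parallel dd streams, 32,000 MB total (as stated),
# with the slower of the two "real" times as the elapsed wall clock.
total_mb = 32_000      # combined size of both files, per the post
elapsed_s = 60.05      # slower "real" time of the two dd invocations

rate = total_mb / elapsed_s
print(f"{rate:.0f} MB/s sustained write")  # ~533 MB/s, matching the post
```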

Well, since this is always fun (2G memory, 3Ware 7506, 4xPATA), writing:

$ dd if=/dev/zero of=/data0/dump/bigfile bs=8k count=500000
500000 records in
500000 records out
4096000000 bytes transferred in 32.619208 secs (125570185 bytes/sec)

> Now to read the same files in parallel:
> ========================= Read test results ==============================
> sync
> time dd of=/dev/null if=/dbfast1/llonergan/bigfile bs=8k &
> time dd of=/dev/null if=/dbfast3/llonergan/bigfile bs=8k &
>
> 2000000 records in
> 2000000 records out
>
> real 0m39.849s
> user 0m0.282s
> sys 0m22.294s
> 2000000 records in
> 2000000 records out
>
> real 0m40.410s
> user 0m0.251s
> sys 0m22.515s
>
> And that's 32,000MB in 40.4 seconds, or 792MB/s sustained from disk (not
> memory).
>
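The read figure checks out the same way (again using the post's stated totals, MB = 10^6 bytes):

```python
# Luke's read test: the same 32,000 MB read back by two parallel dd streams,
# bounded by the slower "real" time.
total_mb = 32_000      # combined size of both files, per the post
elapsed_s = 40.41      # slower "real" time of the two reads

rate = total_mb / elapsed_s
print(f"{rate:.0f} MB/s sustained read")  # ~792 MB/s, matching the post
```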

Reading:

$ dd of=/dev/null if=/data0/dump/bigfile bs=8k count=500000
500000 records in
500000 records out
4096000000 bytes transferred in 24.067298 secs (170189442 bytes/sec)

Ok - didn't quite get my quoted 175MB/s (FWIW, with bs=32k I get exactly
175MB/s).
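Converting the dd-reported rates above to MB/s makes the comparison with the quoted 175MB/s explicit (just unit conversion of the numbers dd printed):

```python
# Mark's single-stream results, straight from the dd output above.
write_bps = 4_096_000_000 / 32.619208  # bytes/sec for the write test
read_bps  = 4_096_000_000 / 24.067298  # bytes/sec for the read test

print(f"write: {write_bps / 1e6:.1f} MB/s")  # ~125.6 MB/s
print(f"read:  {read_bps / 1e6:.1f} MB/s")   # ~170.2 MB/s, just shy of 175
```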

Hmmm - a bit humbled by Luke's machinery :-). However, mine is probably
competitive on (MB/s)/$....

It would be interesting to see what Dan's system would do on a purely
sequential workload - as 40-50MB/s of purely random IO is high.

Cheers

Mark
