From: "Alex Turner" <armtuk(at)gmail(dot)com>
To: "Luke Lonergan" <llonergan(at)greenplum(dot)com>
Cc: "Mikael Carneholm" <Mikael(dot)Carneholm(at)wirelesscar(dot)com>, "Ron Peacetree" <rjpeace(at)earthlink(dot)net>, pgsql-performance(at)postgresql(dot)org
Subject: Re: RAID stripe size question
Date: 2006-07-18 19:27:42
Message-ID: 33c6269f0607181227g7c6eea1av5b8dbd9787bfd1c7@mail.gmail.com
Lists: pgsql-performance
This is a great testament to the fact that software RAID will very often
seriously outperform hardware RAID, because the OS developers who implemented
it took the time to do it right, whereas some controller manufacturers seem to
think it's okay to provide sub-standard performance.
Based on the bonnie++ numbers coming back from your array, I would also
encourage you to evaluate software RAID, as you might see significantly
better performance as a result. RAID 10 is also a good candidate, as it's
not as heavy on cache and CPU as RAID 5.
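A quick way to evaluate this is to build an md RAID 10 out of a few of the
same disks and point bonnie++ at it. Purely as a sketch (the device names,
chunk size, filesystem and mount point below are placeholders, not your
actual configuration):

    # assemble a Linux software RAID 10 from four example disks
    mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=256 \
          /dev/sdb /dev/sdc /dev/sdd /dev/sde
    mkfs.xfs /dev/md0
    mount /dev/md0 /mnt/bench
    # use a file size of roughly 2x RAM so the page cache doesn't hide the disks;
    # older bonnie++ versions may want the size given in MB instead of "32g"
    bonnie++ -d /mnt/bench -s 32g -u nobody

Comparing those numbers against the hardware controller on the same spindles
should make any difference obvious.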
Alex.
On 7/18/06, Luke Lonergan <llonergan(at)greenplum(dot)com> wrote:
>
> Mikael,
>
>
> On 7/18/06 6:34 AM, "Mikael Carneholm" <Mikael(dot)Carneholm(at)WirelessCar(dot)com>
> wrote:
>
> > However, what's more important is the seeks/s - ~530/s on a 28-disk
> > array is quite lousy compared to the 1400/s on a 12 x 15K disk array
>
> I'm getting 2500 seeks/second on a 36 disk SATA software RAID (ZFS,
> Solaris 10) on a Sun X4500:
>
> =========== Single Stream ============
>
> With a very recent update to the zfs module that improves I/O scheduling
> and prefetching, I get the following bonnie++ 1.03a results with a 36
> drive RAID10, Solaris 10 U2 on an X4500 with 500GB Hitachi drives (zfs
> checksumming is off):
>
> Version 1.03      ------Sequential Output------ --Sequential Input- --Random-
>                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine      Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> thumperdw-i-1 32G 120453  99 467814  98 290391  58 109371  99 993344  94  1801   4
>
>                   ------Sequential Create------ --------Random Create--------
>                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                16 +++++ +++ +++++ +++ +++++ +++ 30850  99 +++++ +++ +++++ +++
>
> =========== Two Streams ============
>
> Bumping up the number of concurrent processes to 2, we get about 1.5x the
> read speed out of the RAID10 with a concurrent workload (you have to add
> the rates from the two runs together):
>
> Version 1.03      ------Sequential Output------ --Sequential Input- --Random-
>                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine      Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> thumperdw-i-1 32G 111441  95 212536  54 171798  51 106184  98 719472  88  1233   2
>
>                   ------Sequential Create------ --------Random Create--------
>                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                16 26085  90 +++++ +++  5700  98 21448  97 +++++ +++  4381  97
>
>
> Version 1.03      ------Sequential Output------ --Sequential Input- --Random-
>                   -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine      Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> thumperdw-i-1 32G 116355  99 212509  54 171647  50 106112  98 715030  87  1274   3
>
>                   ------Sequential Create------ --------Random Create--------
>                   -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
>             files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
>                16 26082  99 +++++ +++  5588  98 21399  88 +++++ +++  4272  97
>
> So that's 2500 seeks per second, 1440MB/s sequential block read, and
> 212MB/s per-character sequential read.
> =======================
>
> - Luke
>