From: "Bucky Jordan" <bjordan(at)lumeta(dot)com>
To: "Scott Marlowe" <smarlowe(at)g2switchworks(dot)com>, "Merlin Moncure" <mmoncure(at)gmail(dot)com>
Cc: "Jeff Davis" <pgsql(at)j-davis(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: PowerEdge 2950 questions
Date: 2006-08-24 19:50:45
Message-ID: 78ED28FACE63744386D68D8A9D1CF5D4104B1F@MAIL.corp.lumeta.com
Lists: pgsql-performance
Here are benchmarks of RAID5x4 vs. RAID10x4 on a Dell PERC 5/i with 300 GB
10k RPM SAS drives. I know these are from bonnie++ 1.93 instead of the older
version, but maybe they can still make for useful analysis of RAID5 vs.
RAID10.
Also, unfortunately I don't have the exact numbers, but RAID10x6
performed really poorly on the sequential IO (dd) tests - worse than the
4-disk RAID5, something around 120 MB/s. I'm currently running the
system as a RAID5x6, but would like to go back and do some further
testing if I get the chance to tear the box down again.
These tests were run on FreeBSD 6.1 amd64 RELEASE with UFS + soft
updates. For comparison, the dd for RAID5x6 was 255 MB/s, so I think the
extra disks really help out with RAID5 write performance, as Scott
pointed out. (I'm using a 128k stripe size with a 256MB writeback
cache.)
Personally, I'm not yet convinced that RAID10 offers dramatically better
performance than RAID5 for 6 disks (at least on the Dell PERC
controller), and available storage is a significant factor for my
particular application. But I do feel the need to do more testing, so
any suggestions are appreciated. (And yes, I'll be using bonnie++ 1.03 in
the future, along with pgbench.)
------ RAID5x4
# /usr/local/sbin/bonnie++ -d bonnie -s 1000:8k -u root
Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
              1000M   587  99 158889  30 127859  32  1005  99 824399  99 +++++ +++
Latency             14216us     181ms   48765us   56241us    1687us   47997us
Version 1.93c       ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency             40365us      25us      35us   20030us      36us      52us
1.93c,1.93c,beast.corp.lumeta.com,1,1155204369,1000M,,587,99,158889,30,127859,32,1005,99,824399,99,+++++,+++,16,,,,,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,14216us,181ms,48765us,56241us,1687us,47997us,40365us,25us,35us,20030us,36us,52us
# time bash -c "(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)"
125000+0 records in
125000+0 records out
1024000000 bytes transferred in 6.375067 secs (160625763 bytes/sec)
0.037u 1.669s 0:06.42 26.3% 29+211k 30+7861io 0pf+0w
------ RAID10 x 4
bash-2.05b$ bonnie++ -d bonnie -s 1000:8k
Version 1.93c       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
              1000M   585  99  21705   4  28560   9  1004  99 812997  98  5436 454
Latency             14181us   81364us   50256us   57720us    1671us    1059ms
Version 1.93c       ------Sequential Create------ --------Random Create--------
                    -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16  4712  10 +++++ +++ +++++ +++  4674  10 +++++ +++ +++++ +++
Latency              807ms      21us      36us     804ms     110us      36us
1.93c,1.93c,beast.corp.lumeta.com,1,1155207445,1000M,,585,99,21705,4,28560,9,1004,99,812997,98,5436,454,16,,,,,4712,10,+++++,+++,+++++,+++,4674,10,+++++,+++,+++++,+++,14181us,81364us,50256us,57720us,1671us,1059ms,807ms,21us,36us,804ms,110us,36us
bash-2.05b$ time bash -c "(dd if=/dev/zero of=bigfile count=125000 bs=8k && sync)"
125000+0 records in
125000+0 records out
1024000000 bytes transferred in 45.565848 secs (22472971 bytes/sec)
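To put the two dd runs side by side, the bytes-per-second figures above work out to roughly a 7x gap in sequential write throughput (quick awk arithmetic on the numbers reported above):

```shell
# Sequential-write throughput from the two dd runs, in MB/s
raid5=$(awk 'BEGIN { printf "%.0f", 1024000000 / 6.375067 / 1e6 }')
raid10=$(awk 'BEGIN { printf "%.0f", 1024000000 / 45.565848 / 1e6 }')
ratio=$(awk 'BEGIN { printf "%.1f", 45.565848 / 6.375067 }')
echo "RAID5x4: ${raid5} MB/s  RAID10x4: ${raid10} MB/s  (${ratio}x slower)"
# -> RAID5x4: 161 MB/s  RAID10x4: 22 MB/s  (7.1x slower)
```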
- Bucky
-----Original Message-----
From: Scott Marlowe [mailto:smarlowe(at)g2switchworks(dot)com]
Sent: Thursday, August 24, 2006 3:38 PM
To: Merlin Moncure
Cc: Jeff Davis; Bucky Jordan; pgsql-performance(at)postgresql(dot)org
Subject: Re: [PERFORM] PowerEdge 2950 questions
On Thu, 2006-08-24 at 13:57, Merlin Moncure wrote:
> On 8/24/06, Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
> > On Thu, 2006-08-24 at 09:21 -0400, Merlin Moncure wrote:
> > > On 8/22/06, Jeff Davis <pgsql(at)j-davis(dot)com> wrote:
> > > > On Tue, 2006-08-22 at 17:56 -0400, Bucky Jordan wrote:
> > > it's not the parity, it's the seeking. Raid 5 gives you great
> > > sequential i/o but random is often not much better than a single
> > > drive. Actually it's the '1' in raid 10 that plays the biggest
> > > role in optimizing seeks on an ideal raid controller. Calculating
> > > parity was boring 20 years ago as it involves one of the fastest
> > > operations in computing, namely xor. :)
> >
> > Here's the explanation I got: If you do a write on RAID 5 to
> > something that is not in the RAID controller's cache, it needs to do
> > a read first in order to properly recalculate the parity for the
> > write.
>
> it's worse than that. if you need to read something that is not in
> the o/s cache, all the disks except for one need to be sent to a
> physical location in order to get the data.
Ummmm. No. Not in my experience. If you need to read something that's
significantly larger than your stripe size, then yes, you'd need to do
that. With typical RAID 5 stripe sizes of 64k to 256k, you could read 8
to 32 PostgreSQL 8k blocks from a single disk before having to move the
heads on the next disk to get the next part of data. A RAID 5, being
read, acts much like a RAID 0 with n-1 disks.
It's the writes that kill performance, since you've got to read two
disks and write two disks for every write, at a minimum. This is why
small RAID 5 arrays bottleneck so quickly. A 4-disk RAID 5 with two
writing threads is likely already starting to thrash.
Or did you mean something else by that?
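For reference, the read-modify-write parity update being discussed above can be sketched with plain xor arithmetic - this is a toy example with single-byte "blocks", not how any particular controller implements it. The key point is that updating one data block requires reading the old data and old parity first, which is the extra I/O that hurts small random writes:

```shell
# Hypothetical 3-disk RAID 5 stripe: two data blocks plus parity
d0=$(( 0xA5 ))
d1=$(( 0x3C ))
parity=$(( d0 ^ d1 ))            # initial parity = d0 xor d1

# Overwrite d0: read old data and old parity (the costly extra I/O),
# xor the old data out of the parity, xor the new data in
new_d0=$(( 0x5A ))
new_parity=$(( parity ^ d0 ^ new_d0 ))
d0=$new_d0

# Sanity check: reconstruct d1 from the surviving blocks, as a
# controller would after losing the disk holding d1
rebuilt_d1=$(( new_parity ^ d0 ))
printf 'd1=0x%02X rebuilt=0x%02X\n' "$d1" "$rebuilt_d1"
# -> d1=0x3C rebuilt=0x3C
```

The xor itself is trivially cheap, as Merlin says - the expense is the two reads and two writes per logical write.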