From: Scott Carey <scott(at)richrelevance(dot)com>
To: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>, Rajesh Kumar Mallah <mallah(dot)rajesh(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: suggestions for postgresql setup on Dell 2950 , PERC6i controller
Date: 2009-02-05 04:24:31
Message-ID: C5AFA9FF.2338%scott@richrelevance.com
Lists: pgsql-performance
Sorry for the top-posting; I don't have a client that is friendly to inline replies.
Most PERCs are rebranded LSI cards lately. The difference between the 5 and 6 is LSI's PCI-X versus PCIe series, both relatively recent designs. Just look at which OpenSolaris drivers claim the PERC cards for a clue as to what is what.
Bonnie++ is a horrible benchmark IMO (for server disk performance checks beyond very basic sanity). I've tried iozone, dd, fio, and manual shell script tests...
Fio is very good. One quirk: the way it does random writes by default (sparsely) can make XFS freak out, so don't test with sparse random writes. Postgres doesn't do this; it randomly re-writes existing blocks and only appends to files to grow them.
Fio is also good because you can build useful profiles, such as multiple concurrent readers of different types, or a mix of reads and writes. A real Postgres benchmark would be better, but even moderately sophisticated synthetic loads were enough to show how far the PERC falls from the ideal compared to a better card.
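For illustration, here is the sort of fio job file I mean - a minimal sketch, where the directory, sizes, and job mix are placeholder assumptions; overwrite=1 pre-creates the file so random writes are re-writes rather than sparse writes:

    cat > mixed.fio <<'EOF'
    [global]
    # Assumption: /data/test is a mount point on the array under test
    directory=/data/test
    size=8g
    ioengine=libaio
    direct=1
    runtime=120
    time_based
    # Pre-create the file so random writes overwrite existing blocks
    # (like Postgres does) instead of writing sparsely
    overwrite=1

    [seqread]
    # Two concurrent sequential readers
    rw=read
    bs=1m
    numjobs=2

    [randread]
    # Four concurrent random readers
    rw=randread
    bs=8k
    numjobs=4

    [randwrite]
    # One random re-writer (non-sparse, thanks to overwrite=1)
    rw=randwrite
    bs=8k
    numjobs=1
    EOF
    fio mixed.fio

All the job sections run concurrently, which is what makes this a more realistic mixed load than a single streaming test.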
My experience with 12 nearline-SAS 7200 RPM drives and a Perc 6e, then the same system with another card:
ext3, out of the box, 12 drives RAID 10: ~225MB/sec
ext3, OS readahead tuned up: ~350MB/sec
XFS, out of the box, 12 drives RAID 10: ~300MB/sec
XFS, OS readahead tuned (24576 or so): ~410MB/sec
Higher Linux device readahead did not impact the random access performance, and the defaults are woeful for the PERC cards.
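To be concrete, the readahead is set per block device; a sketch, assuming the array shows up as /dev/sdb (the device name is an assumption):

    # Check the current readahead (units of 512-byte sectors)
    blockdev --getra /dev/sdb
    # Set it to 24576 sectors (12MB)
    blockdev --setra 24576 /dev/sdb
    # The same thing via sysfs, in KB
    echo 12288 > /sys/block/sdb/queue/read_ahead_kb

The setting does not survive a reboot, so it belongs in an init script (rc.local or similar).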
10-disk and 8-disk setups performed the same; the PERC did not really scale past 8 disks in RAID 10 (I did not try 6 disks). Each disk can do 115MB/sec or so at the front of the drive in JBOD tests, with the Linux block device readahead tuned to the right value.
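The per-disk numbers came from simple sequential reads at the front of the raw device; something like this, assuming a JBOD disk at /dev/sdc (the device name is an assumption):

    # Tune readahead on the single disk first
    blockdev --setra 8192 /dev/sdc
    # Sequentially read the first 4GB of the raw device
    dd if=/dev/sdc of=/dev/null bs=1M count=4096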
All tests were done on a partition carved out of the first 20% or so of each drive, to limit the transfer rate falloff at higher LBAs and to be fair between file systems (otherwise ext3 looks worse, since it is more likely than XFS to allocate some data far out on the disk even in a mostly empty partition).
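Carving out that first 20% is just an ordinary partition; a sketch, with the device name as an assumption (and note mklabel wipes the existing partition table):

    # Create a partition covering only the first 20% of the disk
    parted -s /dev/sdc mklabel msdos
    parted -s /dev/sdc mkpart primary 0% 20%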
Adaptec card (5445), untuned readahead: 500MB/sec+.
Tuned readahead: 600MB/sec (with XFS now matching raw dd, on 100GB+ files). That is the maximum to expect from this sort of RAID 10, which, unlike ZFS, can't use all the drives for reading.
I did not get much higher random IOPS from block sizes smaller than the default. 15K SAS drives are more likely to benefit from smaller blocks, but I don't have experience with that on a PERC. General experience says that going below 64K on any setup is a waste of time with today's hardware; reading 64K takes less than 1ms.
Do not bother with the read-ahead setting in the PERC BIOS; it just makes things worse. The Linux block device readahead is far superior.
The best performance I achieved on a set of 20 drives was with two Adaptec cards, each with a moderate-sized RAID 10 set (10 drives per card), and Linux software 'md' RAID 0 on top of that. It takes at least two concurrent sequential readers to max out the I/O in this configuration; the peak is 1000MB/sec to 1150MB/sec depending on the mix of sequential readers. In the real world, that only happens when writes are low and there are about 4 concurrent sequential scans on large (multi-GB) tables. Most people will be optimizing for much higher random access rates rather than for sequential scans mixed with random access.
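The md layer on top of the two hardware arrays is plain RAID 0; a sketch, assuming the two Adaptec RAID 10 volumes appear as /dev/sda and /dev/sdb (device names and chunk size are assumptions):

    # Stripe the two 10-drive hardware RAID 10 volumes together
    mdadm --create /dev/md0 --level=0 --raid-devices=2 --chunk=256 /dev/sda /dev/sdb
    # Filesystem and mount as usual
    mkfs.xfs /dev/md0
    mount /dev/md0 /data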
Placing the xlogs on a separate volume helped quite a bit in real-world Postgres tests with mixed load.
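On 8.x the usual way to do that is to symlink pg_xlog onto the dedicated volume; a sketch, where $PGDATA and /xlog are assumed paths:

    # Stop the server before touching pg_xlog
    pg_ctl -D $PGDATA stop
    # Move the xlogs to the dedicated volume and symlink them back
    mv $PGDATA/pg_xlog /xlog/pg_xlog
    ln -s /xlog/pg_xlog $PGDATA/pg_xlog
    pg_ctl -D $PGDATA start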
On 2/4/09 12:09 PM, "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> wrote:
On Wed, Feb 4, 2009 at 11:45 AM, Rajesh Kumar Mallah
<mallah(dot)rajesh(at)gmail(dot)com> wrote:
> Hi,
>
> I am going to get a Dell 2950 with PERC6i with
> 8 * 73 15K SAS drives +
> 300 GB EMC SATA SAN STORAGE,
>
> I seek suggestions from users sharing their experience with
> similar hardware if any. I have following specific concerns.
>
> 1. On list I read that RAID10 function in PERC5 is not really
> striping but spanning and does not give performance boost
> is it still true in case of PERC6i ?
I have little experience with the 6i. I do have experience with all
the Percs from the 3i/3c series to the 5e series. My experience has
taught me that a brand new, latest model $700 Dell RAID controller is
about as good as a $150 LSI, Areca, or Escalade/3Ware controller.
I.e. a four or five year old design. And that's being generous.