From: Michael Stone <mstone+postgres(at)mathom(dot)us>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: What's the best hardver for PostgreSQL 8.1?
Date: 2005-12-27 13:35:27
Message-ID: 20051227133526.GA6811@mathom.us
Lists: pgsql-performance
On Mon, Dec 26, 2005 at 12:32:19PM -0500, Alex Turner wrote:
>It's irrelevant what controller you use; you still have to actually
>write the parity blocks, which slows down your write speed because you
>have to write n+n/2 blocks instead of just n blocks, making the system
>write 50% more data.
>
>RAID 5 must write 50% more data to disk, therefore it will always be
>slower.
At this point you've drifted into complete nonsense mode.
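To put numbers on it: a full-stripe RAID 5 write adds one parity block
per n data blocks, i.e. 1/n extra data, not a fixed 50%. The 50% figure
only describes a tiny 2+1 array. A back-of-the-envelope sketch in Python
(the array widths are just illustrative):

    # Write amplification of a full-stripe RAID 5 write: every n data
    # blocks written carry one extra parity block, i.e. 1/n overhead.
    def raid5_full_stripe_overhead(n_data_disks: int) -> float:
        return 1.0 / n_data_disks

    for n in (2, 4, 7, 13):  # illustrative widths: 2+1 ... 13+1 arrays
        print(f"{n} data disks + 1 parity: "
              f"{raid5_full_stripe_overhead(n):.0%} extra data written")
    # 2+1 -> 50%, 4+1 -> 25%, 7+1 -> 14%, 13+1 -> 8%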
On Mon, Dec 26, 2005 at 10:11:00AM -0800, David Lang wrote:
>what slows down RAID 5 is that to modify a block you have to read blocks
>from all your drives to re-calculate the parity. this interleaving of
>reads and writes when all you are logically doing is writes can really
>hurt. (this is why I asked the question that got us off on this tangent:
>when doing new writes to an array you don't have to read the blocks, as
>they are blank, assuming your caching is enough so that you can write
>blocksize*n before the system starts actually writing the data)
Correct; there's no reason for the controller to read anything back if
your write will fill a complete stripe. That's why I said that there
isn't a "RAID 5 penalty" assuming you've got a reasonably fast
controller and you're doing large sequential writes (or have enough
cache that random writes can be batched as large sequential writes).
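To make the penalty concrete: the classic textbook cost of a sub-stripe
RAID 5 update is four I/Os (read old data and old parity, write new data
and new parity), while a write covering a whole stripe reads nothing. A
rough I/O-counting sketch (assuming one parity disk and no write-back
cache absorbing the reads):

    # Classic textbook I/O costs per logical write; assumes one parity
    # disk and no write-back cache absorbing the reads.

    def raid5_small_write_ios():
        # Read-modify-write: new_parity = old_parity ^ old_data ^ new_data,
        # so the controller must first read the old data and old parity.
        return {"reads": 2, "writes": 2}

    def raid5_full_stripe_ios(n_data_disks):
        # Parity is computed from the new data alone; nothing is read back.
        return {"reads": 0, "writes": n_data_disks + 1}

    def raid10_write_ios():
        # Each logical write lands on both sides of one mirror pair.
        return {"reads": 0, "writes": 2}

    print(raid5_small_write_ios())    # {'reads': 2, 'writes': 2}
    print(raid5_full_stripe_ios(7))   # {'reads': 0, 'writes': 8}
    print(raid10_write_ios())         # {'reads': 0, 'writes': 2}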
On Mon, Dec 26, 2005 at 06:04:40PM -0500, Alex Turner wrote:
>Yes, but those blocks in RAID 10 are largely irrelevant as they go to
>independent disks. In RAID 5 you have to write parity to an 'active'
>drive that is part of the stripe.
Once again, this doesn't make any sense. Can you explain which parts of
a RAID 10 array are inactive?
>I agree totally that the read+parity-calc+write in the worst case is
>totally bad, which is why I always recommend people should _never ever_
>use RAID 5. In this day and age of large capacity chassis and large
>capacity SATA drives, RAID 5 is totally inappropriate IMHO for _any_
>application, least of all databases.
So I've got a 14-drive chassis full of 300G SATA disks and need at least
3.5TB of data storage. In your mind the only possible solution is to buy
another 14-drive chassis? Must be nice to never have a budget. It must
also be a hard sell when the hardware is decent enough that your
benchmarks can't demonstrate a difference between a RAID 5 and a RAID 10
configuration on that chassis except in degraded mode (and the customer
doesn't want to pay double for degraded-mode performance).
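The capacity arithmetic, for anyone who wants to check it (assuming one
big array and no hot spare):

    # Usable capacity of the 14 x 300GB chassis above; assumes one big
    # array and no hot spare.
    drives, size_gb = 14, 300

    raid5_usable = (drives - 1) * size_gb    # one drive's worth of parity
    raid10_usable = (drives // 2) * size_gb  # half the drives are mirror copies

    print(f"RAID 5:  {raid5_usable / 1000:.1f} TB")   # 3.9 TB -- meets 3.5TB
    print(f"RAID 10: {raid10_usable / 1000:.1f} TB")  # 2.1 TB -- falls short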
>In reality I have yet to benchmark a system where RAID 5 with 8 drives
>or less in a single array beat a RAID 10 with the same number of
>drives.
Well, those are frankly little arrays, probably on lousy controllers...
Mike Stone