From: Mark Kirkwood <markir(at)paradise(dot)net(dot)nz>
To: Mark Wong <markwkm(at)gmail(dot)com>
Cc: greg(at)tcscs(dot)com, david(at)lang(dot)hm, pgsql-performance(at)postgresql(dot)org, Gabrielle Roth <gorthx(at)gmail(dot)com>
Subject: Re: file system and raid performance
Date: 2008-08-05 23:53:46
Message-ID: 4898E80A.3020809@paradise.net.nz
Lists: pgsql-performance
Mark Wong wrote:
> On Mon, Aug 4, 2008 at 10:56 PM, Gregory S. Youngblood <greg(at)tcscs(dot)com> wrote:
>
>> I recently ran some tests on Ubuntu Hardy Server (Linux) comparing JFS, XFS,
>> and ZFS+FUSE. It was all 32-bit and on old hardware, plus I only used
>> bonnie++, so the numbers are really only useful for my hardware.
>>
>> What parameters were used to create the XFS partition in these tests? And,
>> what options were used to mount the file system? Was the kernel 32-bit or
>> 64-bit? Given what I've seen with some of the XFS options (like lazy-count),
>> I am wondering about the options used in these tests.
>>
>
> The default (no arguments specified) parameters were used to create
> the XFS partitions. Mount options specified are described in the
> table. This was a 64-bit OS.
>
> Regards,
> Mark
>
>
I think it is a good idea to match the RAID stripe size and give some
indication of how many disks are in the array. E.g. for a 4 disk system
with a 256K stripe size I used:

$ mkfs.xfs -d su=256k,sw=2 /dev/mdx

which performed about 2-3 times quicker than the default (I did try sw=4
as well, but didn't notice any difference compared to sw=2).
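
As a rough guide (a minimal sketch; the device names, mount point, and
RAID levels below are hypothetical examples, not the array tested
above): su is the per-disk chunk size (stripe unit) and sw is the
number of data-bearing disks, so both values follow directly from the
array geometry:

$ # 4-disk RAID10 (2 data disks), 256K chunk:
$ mkfs.xfs -d su=256k,sw=2 /dev/md0

$ # 4-disk RAID5 (3 data disks), 256K chunk:
$ mkfs.xfs -d su=256k,sw=3 /dev/md0

$ # The same geometry can also be supplied at mount time via sunit/swidth,
$ # given in 512-byte sectors (256K = 512 sectors; swidth = sunit * sw):
$ mount -o sunit=512,swidth=1024 /dev/md0 /mnt/pgdata

When su/sw don't match the array, allocations can straddle stripe
boundaries (and, on parity RAID, trigger read-modify-write cycles),
which is the usual explanation for differences like the 2-3x above.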
regards
Mark