From: | Samuel Gendler <sgendler(at)ideasculptor(dot)com> |
---|---|
To: | pgsql-performance(at)postgresql(dot)org |
Subject: | Re: raid array seek performance |
Date: | 2011-09-13 23:27:26 |
Message-ID: | CAEV0TzA5p=3s9AoDE5xAPNw4AfZgsFyRgVPRzkew6ijtoTt4vA@mail.gmail.com |
Lists: | pgsql-performance |
On Tue, Sep 13, 2011 at 12:13 PM, Samuel Gendler
<sgendler(at)ideasculptor(dot)com> wrote:
> I'm just beginning the process of benchmarking and tuning a new server,
> something I really haven't done before. I'm using Greg's book as a guide.
> I started with bonnie++ (1.96) and immediately got anomalous results (I
> think).
>
> Hardware is as follows:
>
> 2x quad-core Xeon 5504 2.0GHz, 2x4MB cache
> 192GB DDR3 1066 RAM
> 24x 600GB 15K rpm SAS drives
> Adaptec 52445 controller
>
> The default config, being tested at the moment, has 2 volumes, one 100GB
> and one 3.2TB, both built from a stripe across all 24 disks rather than
> splitting some spindles out for one volume and another set for the other
> volume. At the moment, I'm only testing against the single 3.2TB volume.
>
> The smaller volume is partitioned into /boot (ext2 and tiny) and / (ext4
> and 91GB). The larger volume is mounted as xfs with the following options
> (cribbed from an email to the list earlier this week, I
> think): logbufs=8,noatime,nodiratime,nobarrier,inode64,allocsize=16m
>
> Bonnie++ delivered the expected huge throughput for sequential read and
> write. It seems in line with other benchmarks I found online. However, we
> are only seeing 180 seeks/sec, which seems quite low. I'm hoping someone
> might be able to confirm that and, hopefully, make some suggestions for
> tracking down the problem if there is one.
>
> Results are as follows:
>
>
> 1.96,1.96,newbox,1,1315935572,379G,,1561,99,552277,46,363872,34,3005,90,981924,49,179.1,56,16,,,,,19107,69,+++++,+++,20006,69,19571,72,+++++,+++,20336,63,7111us,10666ms,14067ms,65528us,592ms,170ms,949us,107us,160us,383us,31us,130us
>
>
> Version  1.96       ------Sequential Output------ --Sequential Input- --Random-
>                     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
> Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
> newzonedb.z1.p 379G  1561  99 552277  46 363872  34  3005  90 981924  49 179.1  56
> Latency              7111us   10666ms   14067ms   65528us     592ms     170ms
>                     ------Sequential Create------ --------Random Create--------
>                     -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
> files:max:min        /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
> newbox            16 19107  69 +++++ +++ 20006  69 19571  72 +++++ +++ 20336  63
> Latency               949us     107us     160us     383us      31us     130us
>
>
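(For reference, the run quoted above should be reproducible with something along
these lines. The device name, mount point, and uid are assumptions rather than the
exact values used, and bonnie++ chose the ~379GB test file itself, which is roughly
2x the 192GB of RAM:)

    # mount the 3.2TB volume with the options listed above (device and mount point assumed)
    mount -t xfs -o logbufs=8,noatime,nodiratime,nobarrier,inode64,allocsize=16m /dev/sdb1 /data
    mkdir -p /data/bonnie && chown postgres /data/bonnie

    # with no -s, bonnie++ sizes the test file at roughly 2x detected RAM;
    # -u drops privileges when run as root, -n 16 is the (default) file-creation count
    bonnie++ -d /data/bonnie -n 16 -u postgres -m newbox | tee bonnie.out

    # the last line of the output is the CSV record; bon_csv2txt (shipped with
    # bonnie++) renders it back into the human-readable table quoted above
    tail -n 1 bonnie.out | bon_csv2txt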
My seek times increase when I reduce the size of the file, which isn't
surprising, since once everything fits into cache, seeks aren't dependent on
mechanical movement. However, I am seeing lots of bonnie++ results on Google
which appear to be for a file size that is 2x RAM, and they show numbers
closer to 1000 seeks/sec (compared to my 180). Usually, I am seeing a 16GB
file for 8GB hosts. So what is an acceptable random seeks/sec number for a
file that is 2x memory? And does file size make a difference independent of
available RAM, such that the enormous 379GB file that is created on my host
is skewing the results to the low end?
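As a rough sanity check (assuming typical 15K rpm SAS figures of ~3.5ms average
seek plus 2ms average rotational latency, not measured numbers for these
particular drives), a single spindle works out to somewhere around 180 random
IOPS, so 180 seeks/sec from a 24-disk stripe looks a lot like the output of
just one drive:

    # back-of-envelope: IOPS per spindle ~= 1000ms / (avg seek + avg rotational latency)
    # 15000 rpm => avg rotational latency = 60/15000/2 = 2ms; avg seek assumed ~3.5ms
    echo "scale=1; 1000/(3.5+2)" | bc      # ~181.8 IOPS for one drive
    echo "scale=0; 24*1000/(3.5+2)" | bc   # ~4363 IOPS theoretical ceiling for 24 spindles

bonnie++'s seek test isn't purely random reads, so the achievable figure will be
well below that ceiling, but it should still scale far beyond a single drive.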