From: Ivan Voras <ivoras(at)freebsd(dot)org>
To: CSS <css(at)morefoo(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: rough benchmarks, sata vs. ssd
Date: 2012-02-13 22:12:01
Message-ID: CAF-QHFV5Z+vAQPYGurXTEtxtKCswkq24W8LxN3iRtb6YD=nrxg@mail.gmail.com
Lists: pgsql-performance

On 13 February 2012 22:49, CSS <css(at)morefoo(dot)com> wrote:
> For the top-post scanners, I updated the ssd test to include
> changing the zfs recordsize to 8k.
> Well now I did, added the results to
> http://ns.morefoo.com/bench.html and it looks like there's
> certainly an improvement. That's with the only change from the
> previous test being to copy the postgres data dir, wipe the
> original, set the zfs recordsize to 8K (default is 128K), and then
> copy the data dir back.
This makes sense simply because it reduces the amount of data read
and/or written for non-sequential transactions: with the default 128K
recordsize, a random 8K page update makes ZFS read, modify and rewrite
a full 128K record, while an 8K recordsize maps one record onto one
PostgreSQL page.
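
For anyone wanting to repeat this, a minimal sketch of that
copy-out/copy-back step (the dataset name tank/pgdata and the paths
are my assumptions, not from the original post; recordsize only
applies to files written after the change, hence the rewrite):

    pg_ctl -D /var/db/pgsql/data stop
    cp -Rp /var/db/pgsql/data/ /var/db/pgdata.bak/   # BSD cp: trailing / copies the contents
    rm -rf /var/db/pgsql/data/*                      # wipe the original
    zfs set recordsize=8K tank/pgdata                # affects only newly written files
    cp -Rp /var/db/pgdata.bak/ /var/db/pgsql/data/   # copy back; files are rewritten as 8K records
    pg_ctl -D /var/db/pgsql/data start
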
> Things that stand out on first glance:
>
> -at a scaling factor of 10 or greater, there is a much more gentle
> decline in TPS than with the default zfs recordsize
> -on the raw *disk* IOPS graph, I now see writes peaking at around
> 11K/second compared to 1.5K/second.
> -on the zpool iostat graph, I do not see those huge write peaks,
> which is a bit confusing
Could be that "iostat" and "zpool iostat" average raw data differently.
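
One way to check would be to sample both at the same short interval
(the device and pool names here are assumptions):

    iostat -x -w 1 ada0      # raw per-device statistics, 1-second samples
    zpool iostat -v tank 1   # pool/vdev-level statistics, 1-second samples

If the write peaks still show up only in the raw device numbers, the
difference is more than just averaging.
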
> -on both iostat graphs, I see the datapoints look more scattered
> with the 8K recordsize
As an educated guess, it could be that the smaller records can
"fit in" (in buffers or in the controller's processing paths) where
the large ones didn't, allowing more bursts of performance.