Re: HDD vs SSD without explanation

From: Nicolas Charles <nicolas(dot)charles(at)normation(dot)com>
To: Neto pr <netopr9(at)gmail(dot)com>, Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
Cc: Fernando Hevia <fhevia(at)gmail(dot)com>, "Georg H(dot)" <georg-h(at)silentrunner(dot)de>, pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: HDD vs SSD without explanation
Date: 2018-01-16 15:08:01
Message-ID: e6ce9e6e-28cb-9f7d-7421-8f393053808b@normation.com
Lists: pgsql-performance

On 16/01/2018 at 11:14, Neto pr wrote:
> 2018-01-15 20:04 GMT-08:00 Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>:
>> On 16/01/18 13:18, Fernando Hevia wrote:
>>
>>>
>>>
>>> The 6 Gb/s interface is capable of a maximum throughput of around 600
>>> MB/s. None of your drives can achieve that, so I don't think you are
>>> limited by the interface speed. The 12 Gb/s interface's speed advantage
>>> kicks in when several drives are installed; it won't make a difference in
>>> a single-drive or even a two-drive system.
>>>
>>> But don't take my word for it. Test your drives throughput with the
>>> command Justin suggested so you know exactly what each drive is capable of:
>>>
>>> Can you reproduce the speed difference using dd?
>>> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K \
>>>     skip=$((128*$RANDOM/32))   # set bs to optimal_io_size
>>>
>>>
>>> While common sense says the SSD should outperform the mechanical drive,
>>> your test scenario (large-volume sequential reads) evens out the field a
>>> lot. Still, I would have expected somewhat similar results, so yes, it is
>>> weird that the SAS drive doubles the SSD's performance. That is why I
>>> think there must be something else going on during your tests on the SSD
>>> server. It can also be that the SSD isn't working properly, or that you
>>> are running a suboptimal OS+server+controller configuration for the drive.
>>>
>> I would second the analysis above - unless you see your read MB/s slammed up
>> against 580-600 MB/s continuously, the interface speed is not the issue.
>> We have some similar servers where we replaced 12x SAS with 1x SATA 6 Gbit/s
>> (Intel DC S3710) SSD...and the latter way outperforms the original 12 SAS
>> drives.
>>
>> I suspect the problem is the particular SSD you have - I have benchmarked
>> the 256GB EVO variant and was underwhelmed by its performance. These
>> (budget) triple-cell NAND SSDs seem to have highly variable read and write
>> performance (the write side is all about when the SLC NAND cache gets
>> full)...the read side I'm not so sure about - but it could be a crappy
>> chipset/firmware combination. In short, I'd recommend *not* using that
>> particular SSD for a database workload. I'd recommend one of the Intel
>> Datacenter DC range (FWIW I'm not affiliated with Intel in any way...but
>> their DC stuff works well).
>>
>> regards
>>
>> Mark
> Hi Mark
> In other forums someone told me that on the Samsung Evo the partition
> should be aligned to sector 3072, not the default 2048, so that it starts
> on an erase-block boundary, and that the filesystem block should be 8 kB.
> I am studying this too. Some DBAs have reported that SSDs become very slow
> when they are nearly full; mine is 85% full, so maybe that is also a
> factor. I'm disappointed with this Samsung SSD because, in theory, the
> read speed of an SSD should be more than 300 times that of an HDD, and
> that is not happening.
>
> regards
> Neto
>
Hi Neto,

Unfortunately, the Samsung 850 Evo is not a particularly fast SSD; in
particular, its performance is not very consistent (see
https://www.anandtech.com/show/8747/samsung-ssd-850-evo-review/5 and
https://www.anandtech.com/bench/product/1913). It is not a product aimed at
professional usage, and you should not expect great performance from it. As
these benchmarks report, it can show a 34 ms write latency under very
intensive usage:
ATSB - The Destroyer, 99th percentile write latency (microseconds, lower is
better): 34,923

Even the average write latency of the Samsung 850 Evo is 3.3 ms under an
intensive workload, while the HPE 300 GB 12G SAS drive is reported to average
2.9 ms - and it won't suffer from write amplification.
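
If you want to rule out the interface and the controller, it is worth running
the dd read test Justin and Fernando suggested on both machines. A 6 Gbit/s
SATA or SAS link uses 8b/10b encoding, so the usable payload rate is about
600 MB/s (6 Gbit/s x 8/10, divided by 8 bits per byte), which neither of your
drives should reach. A rough sketch of the comparison - the device names are
only placeholders, so adjust them to your system, and run it as root since it
drops the page cache:

    # Sequential-read comparison of two drives (read-only, non-destructive).
    # /dev/sdX = the 850 Evo, /dev/sdY = the SAS drive -- placeholders only.
    for dev in /dev/sdX /dev/sdY; do
        echo "== $dev =="
        sync; echo 3 > /proc/sys/vm/drop_caches   # drop the cache so we measure the drive, not RAM
        time dd if="$dev" of=/dev/null bs=1M count=32K skip=$((128*RANDOM/32))
    done

dd prints the average throughput at the end; if the SAS drive still reads
about twice as fast as the 850 Evo with the cache dropped, the difference is
in the drive (or its controller/firmware), not in PostgreSQL.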

As long as you stick with light usage, this SSD will probably be more than
capable, but if you want to host a database you should really look at the PRO
drives.
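
On the partition-alignment question you raised: you can check where the
partition starts without repartitioning. A small sketch, assuming the
3072-sector (1536 KiB) erase-block figure from the advice you quoted - the
sdX/sdX1 names are placeholders for your actual device and partition:

    # Start sector of the partition, in 512-byte sectors (e.g. sda/sda1).
    start=$(cat /sys/block/sdX/sdX1/start)
    echo "partition starts at sector $start"
    # Aligned if the start sector is a multiple of the erase-block size in sectors.
    if [ $((start % 3072)) -eq 0 ]; then
        echo "aligned to a 3072-sector boundary"
    else
        echo "not aligned"
    fi

Note that the default 2048-sector (1 MiB) start is already aligned for most
SSDs; whether 3072 matches the 850 Evo's erase block is something I cannot
confirm. As far as I know, ext4 on Linux cannot use a block size larger than
the 4 kB page size, so an "8 kB filesystem block" is usually not an option -
PostgreSQL's own 8 kB pages sit on top of whatever the filesystem provides.
And since the drive is 85% full, make sure TRIM is running (a periodic
"fstrim -v /mountpoint", or the systemd fstrim.timer); a nearly full consumer
SSD that is never trimmed can slow down considerably.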

Kind regards
Nicolas
