Re: HDD vs SSD without explanation

From: Neto pr <netopr9(at)gmail(dot)com>
To: Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>
Cc: Fernando Hevia <fhevia(at)gmail(dot)com>, "Georg H(dot)" <georg-h(at)silentrunner(dot)de>, pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: HDD vs SSD without explanation
Date: 2018-07-17 12:47:07
Message-ID: CA+wPC0PjXj9JgajZTw_6-SJGBjUpOCihwHZYbMjmcNgg_9mffg@mail.gmail.com
Lists: pgsql-performance

Thanks all, but I still have not figured it out.
This is really strange, because the tests were done on the same machine
(an HP ProLiant ML110, 8 GB RAM, Xeon 2.8 GHz processor (4
cores), running PostgreSQL 10.1).
- Only the query mentioned was running at the time of the test.
- I repeated the query 7 times and the results did not change.
- Before running each batch of 7 executions, I dropped the Operating
System cache and restarted the DBMS, like this:
echo 3 > /proc/sys/vm/drop_caches
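For reference, the full reset sequence between batches can be written like this (a cleaned-up sketch; the service name `postgresql` and the use of `systemctl` are assumptions - adjust to your init system):

```shell
# Flush dirty pages to disk first, so dropping the cache is meaningful
sync
# Drop page cache, dentries and inodes (requires root)
echo 3 > /proc/sys/vm/drop_caches
# Restart the DBMS so shared_buffers start cold as well
systemctl restart postgresql
```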

Disks:
- 2 Samsung Evo SSD 500 GB units (in RAID 0)
- 2 SATA 7500 RPM HDD units - 1 TB (in RAID 0)

- The Operating System and the PostgreSQL DBMS are installed on the SSD disk.
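To rule the drives themselves in or out, each array can be read directly with dd, along the lines suggested earlier in the thread (the device names `/dev/sda` and `/dev/sdc` are placeholders for the SSD and HDD arrays - check which is which with lsblk first):

```shell
# Sequential read of 32 GiB from each array, bypassing the filesystem.
# iflag=direct avoids the page cache, so the numbers reflect the device.
time sudo dd if=/dev/sda of=/dev/null bs=1M count=32K iflag=direct
time sudo dd if=/dev/sdc of=/dev/null bs=1M count=32K iflag=direct
```

If the SSD array reads well below its rated speed here, the problem is below PostgreSQL (drive, controller, or RAID setup), not in the query.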

Best Regards
[ ]`s Neto

2018-01-16 13:24 GMT-08:00 Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>:
>
>
> On 16/01/18 23:14, Neto pr wrote:
>>
>> 2018-01-15 20:04 GMT-08:00 Mark Kirkwood <mark(dot)kirkwood(at)catalyst(dot)net(dot)nz>:
>>>
>>> On 16/01/18 13:18, Fernando Hevia wrote:
>>>
>>>>
>>>>
>>>> The 6 Gb/s interface is capable of a maximum throughput of around 600
>>>> MB/s. None of your drives can achieve that, so I don't think you are
>>>> limited by the interface speed. The 12 Gb/s interface's speed advantage
>>>> kicks in when several drives are installed; it won't make a difference
>>>> in a single-drive or even a two-drive system.
>>>>
>>>> But don't take my word for it. Test your drives throughput with the
>>>> command Justin suggested so you know exactly what each drive is capable
>>>> of:
>>>>
>>>> Can you reproduce the speed difference using dd?
>>>> time sudo dd if=/dev/sdX of=/dev/null bs=1M count=32K \
>>>>     skip=$((128*$RANDOM/32))  # set bs to optimal_io_size
>>>>
>>>>
>>>> While common sense says an SSD drive should outperform the mechanical
>>>> one, your test scenario (large-volume sequential reads) evens out the
>>>> field a lot. Still, I would have expected somewhat similar results in
>>>> the outcome, so yes, it is weird that the SAS drive doubles the SSD
>>>> performance. That is why I think there must be something else going on
>>>> during your tests on the SSD server. It could also be that the SSD
>>>> isn't working properly, or that you are running a suboptimal
>>>> OS+server+controller configuration for the drive.
>>>>
>>> I would second the analysis above - unless you see your read MB/s slammed
>>> up against 580-600 MB/s continuously, the interface speed is not the
>>> issue. We have some similar servers on which we replaced 12x SAS with
>>> 1x SATA 6 Gbit/s (Intel DC S3710) SSD...and the latter way outperforms
>>> the original 12 SAS drives.
>>>
>>> I suspect the problem is the particular SSD you have - I have benchmarked
>>> the 256GB EVO variant and was underwhelmed by the performance. These
>>> (budget) triple-level-cell NAND SSDs seem to have highly variable read
>>> and write performance (the write is all about when the SLC NAND cache
>>> gets full)...the read I'm not so sure of - but it could be a crappy
>>> chipset/firmware combination. In short, I'd recommend *not* using that
>>> particular SSD for a database workload. I'd recommend one of the Intel
>>> Datacenter DC range (FWIW I'm not affiliated with Intel in any
>>> way...but their DC stuff works well).
>>>
>>> regards
>>>
>>> Mark
>>
>> Hi Mark
>> In other forums someone told me that on a Samsung Evo the partition
>> should be aligned to 3072, not the default 2048, to start on an
>> erase-block boundary, and that the fs block should be 8 kB. I am
>> studying this too. Some DBAs have reported in other situations that
>> SSDs become very slow when they are nearly full. Mine is 85% full, so
>> maybe that is also influencing the results. I'm disappointed with this
>> Samsung SSD because, in theory, the read speed of an SSD should be more
>> than 300 times faster than an HDD's, and this is not happening.
>>
>>
>
> Interesting - I didn't try changing the alignment. However, I could get the
> rated write and read performance in simple benchmarks (provided it was in a
> PCIe V3 slot)...so I figured it was ok with the default alignment. However,
> once more complex workloads were attempted (databases and a distributed
> object store), the performance was disappointing.
>
> If the SSD is 85% full that will not help either (also look at the expected
> lifetime of these EVO's - not that great for a server)!
>
> One thing worth trying is messing about with the IO scheduler: if you are
> using noop, then try deadline (like I said crappy firmware)...
>
> Realistically, I'd recommend getting an enterprise/DC SSD (put the EVO in
> your workstation, it will be quite nice there)!
>
> Cheers
> Mark
