From: Ants Aasma <ants(dot)aasma(at)eesti(dot)ee>
To: Satoshi Nagayasu <snaga(at)uptime(dot)jp>
Cc: 赖文豫 <xiaolai913(at)gmail(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: By now, why PostgreSQL 9.2 don't support SSDs?
Date: 2013-03-31 00:18:44
Message-ID: CA+CSw_ucSZ5KAJoZiYqWUpfPWtqZvQAj0M+W66BVvfJgHNFacQ@mail.gmail.com
Lists: pgsql-hackers
On Mar 30, 2013 7:13 PM, "Satoshi Nagayasu" <snaga(at)uptime(dot)jp> wrote:
> But I heard that larger block size, like 256kB, would take
> advantage of the SSD performance because of the block management
> within SSD.
This is only true for very bad SSDs. Any SSD that you would want to trust
with your data does block remapping internally, eliminating the issue. (See
for example the Intel DC S3700 sustaining 34,000 random 4k writes/s.)
Larger block sizes would just lift the write amplification of random access
workloads into PostgreSQL, where the drive can't fix it. For sequential or
mostly sequential workloads, the OS can take care of it by merging writes.
Additionally, contention for page-level locks increases with page size, and
cache efficiency goes down. I would expect cases where a larger block size
is a significant benefit to be very rare.
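(A rough back-of-the-envelope sketch of the write-amplification point, not
from the original mail: it assumes each dirtied page is written back in
full, which is a simplification of real checkpoint/WAL behavior, and the
100-byte row size is a hypothetical example.)

```python
# Write amplification for random single-row updates, under the simplifying
# assumption that every dirtied page is eventually written back in full.
# A bigger page means more untouched bytes rewritten per logical change.

def write_amplification(row_bytes: int, page_bytes: int) -> float:
    """Bytes physically written per logical byte changed."""
    return page_bytes / row_bytes

ROW = 100  # hypothetical 100-byte row update

for page_kb in (8, 256):
    wa = write_amplification(ROW, page_kb * 1024)
    print(f"{page_kb:>3} kB page: ~{wa:,.0f}x write amplification")
```

With these assumptions, moving from 8 kB to 256 kB pages multiplies the
amplification by 32, and the drive never sees the small logical write it
could otherwise coalesce.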
Regards,
Ants Aasma