From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: John R Pierce <pierce(at)hogranch(dot)com>
Cc: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Hardware recommendations?
Date: 2016-11-03 01:23:13
Message-ID: CAOR=d=1+V+UO9vc5t2eew5Z9gcPVaxUEEdWOMZ=CMWG6km4HKQ@mail.gmail.com
Lists: pgsql-general

On Wed, Nov 2, 2016 at 4:19 PM, John R Pierce <pierce(at)hogranch(dot)com> wrote:
> On 11/2/2016 3:01 PM, Steve Crawford wrote:
>>
>> After much cogitation I eventually went RAID-less. Why? The only option
>> for hardware RAID was SAS SSDs, and since those aren't built on
>> electro-mechanical spinning-rust technology, the RAID card seemed like
>> just another point of solid-state failure. On top of that, the RAID card
>> limited me to the relatively slow SAS data-transfer rates, which are
>> blown away by something like an Intel NVMe SSD plugged into the PCIe
>> bus. RAIDing those could be done in software, plus $$$ for the NVMe
>> SSDs, but I already have data redundancy through a combination of
>> regular backups and streaming replication to identically equipped
>> machines, which rarely lag the master by more than a second.
>
>
> Just track the write wear life remaining on those NVMe cards and maintain
> a realistic estimate of lifetime remaining in months, so you can budget
> for replacements. The complication with PCIe NVMe is how to manage a
> replacement when the card is nearing EOL. The best solution is probably
> failing over to a replication slave database, replacing the worn-out card
> on the original server, and bringing it back up from scratch as a new
> slave; this can be done with minimal service interruption. Note that your
> slaves will be getting nearly as many writes as the masters, so they will
> likely need replacing in the same time frame.
Yeah, the last thing you want is to have all your SSDs fail at once due
to write-cycle end of life, etc. Where I used to work we had pretty
hard-working machines doing something like 500 to 1000 writes/s, and
after a year they were at ~90% of write life left, i.e. roughly nine more
years at that rate. YMMV depending on the SSD, etc.
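
For what it's worth, here's a minimal sketch of that kind of tracking in
Python. It assumes the nvme-cli tool is installed and that the drive
reports the standard NVMe percentage_used SMART attribute; the device
path and in-service date are placeholders, not anything from this thread:

#!/usr/bin/env python3
"""Rough NVMe wear projection: read percentage_used via nvme-cli and
linearly extrapolate the months of write life remaining."""

import re
import subprocess
from datetime import date

DEVICE = "/dev/nvme0"                 # placeholder device path
IN_SERVICE_SINCE = date(2016, 1, 1)   # placeholder install date

def percentage_used(device: str) -> int:
    """Parse the 'percentage_used' field from `nvme smart-log`."""
    out = subprocess.run(
        ["nvme", "smart-log", device],
        capture_output=True, text=True, check=True,
    ).stdout
    m = re.search(r"percentage_used\s*:\s*(\d+)\s*%", out)
    if not m:
        raise RuntimeError("percentage_used not found in smart-log output")
    return int(m.group(1))

def months_remaining(used_pct: int, since: date) -> float:
    """Linear extrapolation: if X% of rated endurance went in N months,
    the remaining (100 - X)% lasts about N * (100 - X) / X months."""
    months_in_service = (date.today() - since).days / 30.44
    if used_pct == 0:
        return float("inf")  # no measurable wear yet
    return months_in_service * (100 - used_pct) / used_pct

if __name__ == "__main__":
    used = percentage_used(DEVICE)
    print(f"{DEVICE}: {used}% of rated endurance used, "
          f"~{months_remaining(used, IN_SERVICE_SINCE):.0f} months left "
          f"at the current write rate")

Cron something like that and budget a replacement as soon as the
projection dips under your procurement lead time.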
A common trick is to overprovision if possible. Need 100GB of storage
for a fast transactional DB? Use 10% of a bunch of 800GB drives to make
an array, and you now have a BUNCH of spare write cycles per device for
extra-long life.
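
To put rough numbers on why that works, here's a back-of-the-envelope
sketch; the P/E cycle count, write rate, and write-amplification figures
are made-up illustrations, not any particular drive's spec:

# Back-of-the-envelope endurance math for the overprovisioning trick.
PE_CYCLES = 3000        # rated program/erase cycles per cell (example)
DAILY_WRITES_TB = 0.5   # write volume each array member absorbs per day

def years_of_life(raw_capacity_tb: float, write_amp: float) -> float:
    """Endurance scales with raw flash: the controller wear-levels
    across every cell it owns, used or spare."""
    total_writes_tb = raw_capacity_tb * PE_CYCLES / write_amp
    return total_writes_tb / DAILY_WRITES_TB / 365

# 100GB drive, nearly full: little spare area, write amplification ~4.
print(f"small drive, full:   {years_of_life(0.1, 4.0):.1f} years")

# 800GB drive with only a 100GB partition exposed: 8x the flash per
# logical byte, and the spare area keeps write amplification near 1.
print(f"big drive, 1/8 used: {years_of_life(0.8, 1.2):.1f} years")

The win comes from two directions at once: more raw flash behind each
logical byte, and lower write amplification because the controller always
has plenty of erased blocks to work with.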