From: Craig Ringer <ringerc(at)ringerc(dot)id(dot)au>
To: Benjamin Adams <freebsdworld(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: External Storage of data dir
Date: 2012-08-11 13:17:44
Message-ID: 50265B78.9060806@ringerc.id.au
Lists: pgsql-general
On 08/10/2012 09:59 PM, Benjamin Adams wrote:
> Server <--------> iSCSI (6 Bay, Raid 5 or 10)
Use RAID 10 if you care even a little bit about performance. Especially
with the latency added by iSCSI.
> I have read GB can get 130 Mps.
Megabytes per second? Not likely. AFAIK 125 MB/s is the theoretical
maximum for gigabit, and that's before protocol overheads. Even if you
were using jumbo frames and sending your data in raw Ethernet frames
you'd be lucky to get close.
In the real world you're extremely unlikely to be using jumbo frames
unless you have a fairly fancy switch and have set the hosts up to use
them. You're sending standard 1500-byte-MTU frames (about 1518 bytes on
the wire), then paying protocol overhead out of that for:
- Ethernet frame
- IP headers
- TCP or UDP headers
- iSCSI headers
... and possibly IPSec headers too.
I tend to work with the assumption that real-world gigabit will deliver
100 to 110 MB/s.
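A quick back-of-the-envelope calculation (header sizes here are typical
values for plain TCP iSCSI, not measured figures) shows why that
assumption is about right:

```python
# Usable iSCSI payload over gigabit Ethernet with a standard 1500-byte
# MTU. All header sizes are nominal; real traffic adds TCP options,
# possible IPsec, retransmits and iSCSI response PDUs on top.

LINE_RATE = 1_000_000_000 / 8       # gigabit, in bytes per second
MTU = 1500                          # standard (non-jumbo) Ethernet MTU

# On-wire cost per frame: preamble + Ethernet header + payload + FCS + gap
wire_bytes = 8 + 14 + MTU + 4 + 12  # = 1538 bytes

# Payload left after IP (20), TCP (20) and the iSCSI basic header (48)
data_bytes = MTU - 20 - 20 - 48     # = 1412 bytes

throughput = LINE_RATE * data_bytes / wire_bytes
print(f"{throughput / 1e6:.0f} MB/s")   # ~115 MB/s, best case
```

That ~115 MB/s is the ceiling before any of the extra costs above, so
100 to 110 MB/s in practice is unsurprising.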
See eg
http://it.toolbox.com/blogs/database-soup/the-problem-with-iscsi-30602
> Wondering how this would effect postgres performance or even if it would work.
It'll work, and it will absolutely affect performance, especially for
sequential scans and uncached index scans. It shouldn't have much effect
on random read or write access, though.
Even a good ol' Western Digital Caviar Black 2TB, a midrange 7200RPM
drive, will deliver a solid 130 MB/s sequential read and write speed:
http://www.storagereview.com/western_digital_caviar_black_review_2tb
Random access is a prettier picture for iSCSI. While seek times aren't
on the datasheet, that drive's random access times seem to be
benchmarked at around 12ms:
http://hothardware.com/Reviews/Western-Digital-Caviar-Black-and-RE4-2TB-Drives-Review/?page=5
Gigabit round-trip times for 1400-byte packets should be in the 0.25 ms
range, so the network latency is small relative to that of the HDD itself.
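Plugging those two figures in (both assumptions taken from the links
above, not measurements) shows how little the wire adds per random read:

```python
# Share of random-read latency contributed by the network, using the
# assumed figures above.
net_rtt_ms = 0.25   # gigabit round trip for a ~1400-byte packet
seek_ms = 12.0      # benchmarked random access time for the drive

total_ms = net_rtt_ms + seek_ms
print(f"network share: {net_rtt_ms / total_ms:.1%}")   # ~2.0%
```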
If you have enough RAM on the iSCSI client to fit all your busy tables
in RAM, then the only workloads likely to really suffer are bulk
inserts and updates.
> Before I spend the money.
> (don't have to money to buy a new server)
> I know remount mounting issue maybe an issue.
I'd be more worried about fsync reliability and crash safety. An iSCSI
initiator or target can make things seem a LOT faster by lying and
claiming to have completed an fsync when it's received the data and
stored it in local cache but not yet flushed it to disk.
That's OK if you have a special-purpose battery-backed RAM cache or
flash cache, but otherwise it's highly likely to destroy your data if
the storage server crashes or loses power.
You'll want to test carefully to make sure that fsync()s are actually
happening.
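One crude check, a sketch rather than a substitute for PostgreSQL's own
pg_test_fsync tool, is to time a run of small synchronous writes on the
iSCSI-backed filesystem. A 7200RPM disk can't honestly complete much
more than one or two hundred fsyncs per second, so per-fsync times well
under a few milliseconds suggest something in the path is acking writes
from volatile cache. (The file name and iteration count here are
arbitrary; point PATH at the iSCSI mount.)

```python
import os
import time

# Point this at a file on the iSCSI-backed filesystem; the default
# writes to the current directory, which only tests the local disk.
PATH = "fsync_probe.tmp"
N = 100

fd = os.open(PATH, os.O_WRONLY | os.O_CREAT, 0o600)
start = time.monotonic()
for _ in range(N):
    os.write(fd, b"x" * 512)
    os.fsync(fd)   # must not return until the data is on stable storage
elapsed_ms = (time.monotonic() - start) / N * 1000
os.close(fd)
os.remove(PATH)

print(f"{elapsed_ms:.2f} ms per fsync")
```

The pg_test_fsync utility that ships in PostgreSQL contrib does this
more thoroughly, comparing the different wal_sync_method options.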
--
Craig Ringer