From: Brian Hurt <bhurt(at)janestcapital(dot)com>
To: Bryan Murphy <bryan(dot)murphy(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: SAN vs Internal Disks
Date: 2007-09-07 18:21:52
Message-ID: 46E196C0.5020508@janestcapital.com
Lists: pgsql-performance
Bryan Murphy wrote:
>Our database server connects to the san via iSCSI over Gig/E using
>jumbo frames. File system is XFS (noatime).
...
>Throughput, however, kinda sucks. I just can't get the kind of
>throughput to it I was hoping to get. When our memory cache is blown,
>the database can be downright painful for the next few minutes as
>everything gets paged back into the cache.
Remember that Gig/E is bandwidth-limited to about 100 Mbyte/sec. Maybe
a little faster than that downhill with a tailwind, but not much.
You're going to get much better bandwidth connecting to a local RAID
card talking to local disks, simply because the Ethernet is no longer
the bottleneck. iSCSI is easy to set up and manage, but it's slow.

This is the big advantage Fibre Channel has: serious performance. You
can have multiple channels on a single Fibre Channel card (IIRC,
QLogic's cards default to 4 channels), each pumping 400 Mbyte/sec, at
which point the local bus rapidly becomes the bottleneck. Of course,
this comes at the cost of a significant increase in complexity.
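The arithmetic behind those figures can be sketched quickly. This is just
back-of-envelope math: the 80% protocol-efficiency factor is a rough
assumption, and it takes the FC channels above to be 4 Gbit/s links (which
matches the 400 Mbyte/sec figure), not something stated outright in the post:

```python
GIGABIT = 1_000_000_000  # bits per second

def usable_mbytes_per_sec(gbits, efficiency=0.8):
    """Rough usable Mbyte/s for a link of `gbits` Gbit/s.

    `efficiency` is an assumed fudge factor for framing/protocol
    overhead; real numbers vary with workload and tuning.
    """
    return gbits * GIGABIT * efficiency / 8 / 1_000_000

gige = usable_mbytes_per_sec(1)        # gigabit Ethernet: ~100 Mbyte/s
fc_channel = usable_mbytes_per_sec(4)  # one 4 Gbit/s FC channel: ~400 Mbyte/s
fc_card = 4 * fc_channel               # 4-channel card: ~1600 Mbyte/s

print(f"GigE:            {gige:.0f} Mbyte/s")
print(f"FC channel:      {fc_channel:.0f} Mbyte/s")
print(f"FC card (4 ch):  {fc_card:.0f} Mbyte/s")
```

At ~1600 Mbyte/s aggregate, a mid-2000s PCI-X or early PCIe bus really is
the next thing to saturate, which is the point made above.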
Brian