From: "Bucky Jordan" <bjordan(at)lumeta(dot)com>
To: "Matthew Schumacher" <matt(dot)s(at)aptalaska(dot)net>, <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Disk storage and san questions (was File Systems Compared)
Date: 2006-12-08 00:07:33
Message-ID: 78ED28FACE63744386D68D8A9D1CF5D420A1F5@MAIL.corp.lumeta.com
Lists: pgsql-performance
I worked on a project that was considering a Dell/EMC SAN (Dell's
rebranded EMC hardware), so here are some thoughts on your questions
based on that experience.
> 1. Is iSCSI a decent way to do a SAN? How much performance do I lose
> vs. connecting the hosts directly with a fibre channel controller?
It's cheaper, but if you want any sort of reasonable performance, you'll
need a dedicated gigabit network. I'd highly recommend a dedicated
switch too, not just a VLAN. You should also have dual NICs and dedicate
one to iSCSI. Most PowerEdges come with dual NICs these days.
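For the dedicated-NIC point above, here's a rough sketch of how that
binding might look with Linux open-iscsi. The interface name, portal IP,
and target IQN are all hypothetical placeholders, not values from any
real setup:

```shell
# Define an iSCSI interface and pin it to the second NIC
# (eth1 here stands in for whichever NIC sits on the storage network).
iscsiadm -m iface -I iscsi-eth1 --op=new
iscsiadm -m iface -I iscsi-eth1 --op=update \
    -n iface.net_ifacename -v eth1

# Discover targets on the SAN portal through that interface only
# (192.168.50.10 is a made-up portal address).
iscsiadm -m discovery -t sendtargets -p 192.168.50.10 -I iscsi-eth1

# Log in to the discovered target over the dedicated NIC
# (the IQN is illustrative).
iscsiadm -m node -T iqn.1992-04.com.emc:example-target \
    -p 192.168.50.10 -I iscsi-eth1 --login
```

That way all iSCSI traffic stays off the NIC serving regular client
traffic, which is the whole point of the dedicated network.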
>
> 2. Would it be better to omit my database server from the san (or at
> least the database storage) and stick with local disks? If so, what
> disks/controller card do I want? I use Dell servers for everything, so
> it would be nice if the recommendation is a Dell system, but it
> doesn't need to be. Overall I'm not very impressed with the LSI cards,
> but I'm told the new ones are much better.
The new Dell PERC 4, and the PERC 5 to a greater extent, are reasonable
performers in my experience. However, this depends on the performance
needs of your database. You should at least be able to get better
performance than onboard storage (PowerEdges max out at 6 disks, or 8
if you go 2.5" SATA, but I don't recommend those for
reliability/performance reasons). If you get one of the better Dell/EMC
combo SANs, you can allocate a RAID pool for your database and probably
saturate the iSCSI interface. The next step up might be the MD1000
15-disk SAS enclosure with PERC 5/E cards if you're sticking with Dell,
or multi-homed FC cards. (BTW, you can split the MD1000 in half and
share it across two servers, since it has two SCSI cards; you can also
daisy-chain up to three of them for a total of 45 disks.) Either way,
take a good look at what the SAN chassis can support in terms of IO
bandwidth, because once you use it up, there's no more to allocate to
the DB.
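To make that bandwidth-budget point concrete, here's a back-of-the-
envelope sketch. The throughput numbers are my own rough assumptions,
not measured figures, so plug in your own:

```python
import math

# Assumed (hypothetical) numbers -- adjust for your hardware:
#   - gigabit iSCSI tops out around ~110 MB/s of usable payload
#     after TCP/IP and iSCSI protocol overhead
#   - one 15K SAS drive streams very roughly 80 MB/s sequentially
GIGE_USABLE_MBPS = 110
DRIVE_SEQ_MBPS = 80

def drives_to_saturate(link_mbps=GIGE_USABLE_MBPS,
                       drive_mbps=DRIVE_SEQ_MBPS):
    """Smallest number of drives whose combined sequential
    throughput exceeds the link's usable bandwidth."""
    return math.ceil(link_mbps / drive_mbps)

# With these assumptions, two drives already fill a GigE iSCSI link,
# so a 15-disk pool is badly bottlenecked by the network, not the disks.
print(drives_to_saturate())
```

The same arithmetic applies to the SAN chassis backplane: total up the
drives behind it and compare against what the chassis can actually move.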
>
> 3. Anyone use the sanrad box? Is it any good? Seems like
> consolidating disk space and disk spares platform-wide is a good
> idea, but I've not used a san before so I'm nervous about it.
>
If you haven't used a SAN, much less an enterprise-grade one, then I'd
be very nervous too. Optimizing SAN performance is much more difficult
than with direct-attached storage, simply due to the complexity factor.
Definitely plan on a pretty steep learning curve, especially for
something like EMC with a good number of servers.
IMO, the big benefit of a SAN is storage management and utilization,
not necessarily performance (though you can get decent performance if
you buy the right hardware and tune it correctly). To your points: you
can reduce the number of hot spares and allocate storage much more
efficiently. You can also allocate storage pools based on performance
needs: slow 500 GB SATA drives for archive, fast 15K SAS for the DB,
and so on. There are some nice failover options too; as you mentioned,
boot-from-SAN lets you swap hardware, but I would get a demonstration
from the vendor of this working with your hardware/OS setup (including
booting up the cold spare server). I know this was a big issue with
some of the earlier Dell/EMC hardware.
Sorry for the long post, but hopefully some of the info will be useful
to you.
Bucky