From: Greg Smith <greg(at)2ndquadrant(dot)com>
To: Fernando Hevia <fhevia(at)ip-tel(dot)com(dot)ar>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: new server I/O setup
Date: 2010-01-15 05:46:26
Message-ID: 4B500132.6050203@2ndquadrant.com
Lists: pgsql-performance
Fernando Hevia wrote:
> I justified my first choice in that WAL writes are sequential and OS writes
> pretty much are too, so a RAID 1 would probably hold its ground against a
> 12-disc RAID 10 with random writes.
The problem with this theory is that when PostgreSQL does WAL writes and
asks to sync the data, you'll probably discover all of the open OS
writes that were sitting in the Linux write cache getting flushed before
that happens. And that could lead to horrible performance--good luck if
the database tries to do something after cron kicks off updatedb each
night for example.
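For what it's worth, you can see (and shrink) how much dirty data Linux will
buffer before forcing writeback through the vm.dirty_* sysctls. Defaults vary
by kernel, and the numbers below are purely illustrative, not a tuning
recommendation:

  # how much of RAM may fill with dirty pages before writeback kicks in
  sysctl vm.dirty_background_ratio vm.dirty_ratio
  # illustrative values only -- smaller settings mean less buffered write
  # traffic for a WAL fsync to end up stuck behind
  sysctl -w vm.dirty_background_ratio=1
  sysctl -w vm.dirty_ratio=10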
I think there are two viable configurations you should be considering here,
but neither is quite what you're looking at:
1) 2 discs in RAID 1 for OS
   2 discs in RAID 1 for pg_xlog
   10 discs in RAID 10 for postgres, ext3
   2 spares

2) 14 discs in RAID 10 for everything
   2 spares
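To make the first layout concrete, here's a rough sketch of one way to bring
those arrays into service. The device names and paths are only placeholders
for whatever your 3ware units actually show up as:

  # 2-disc RAID 1 for pg_xlog, 10-disc RAID 10 for the database
  mkfs.ext3 /dev/sdb1
  mkfs.ext3 /dev/sdc1
  mkdir -p /pg_xlog /var/lib/pgsql/data
  mount /dev/sdb1 /pg_xlog
  mount /dev/sdc1 /var/lib/pgsql/data
  # after initdb, with the server stopped, move the WAL onto its own
  # array and symlink it back into place
  mv /var/lib/pgsql/data/pg_xlog /pg_xlog/pg_xlog
  ln -s /pg_xlog/pg_xlog /var/lib/pgsql/data/pg_xlog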
Impossible to say which of the four possibilities here will work out
better. I tend to lean toward the first one I listed above because it
makes it very easy to monitor the pg_xlog activity (and the non-database
activity) separately from everything else, and having no other writes
going on makes it very unlikely that the pg_xlog will ever become a
bottleneck. But if you've got 14 disks in there, it's unlikely to be a
bottleneck anyway. The second config above will get you slightly better
random I/O though, so for workloads that are really bound by it there's a
good reason to prefer it.
Also: the whole "use ext2 for the pg_xlog" idea is overrated as far as I'm
concerned. I start with ext3, and only if I get evidence that the drive
is a bottleneck do I ever think of reverting to unjournaled writes just
to get a little speed boost. In practice I suspect you'll see no
benchmark difference, and will instead curse the decision the first time
your server restarts uncleanly and gets stuck at fsck.
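If you do still want to trim some journalling overhead off the pg_xlog
partition without giving the journal up completely, one middle ground some
people use is ext3 with metadata-only journalling. A sample fstab line,
with the device and mount point being assumptions on my part:

  # /etc/fstab -- ext3 with metadata-only journalling for the WAL partition
  /dev/sdb1  /pg_xlog  ext3  noatime,data=writeback  0  2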
> PS: Any clue if hdparm works to deactivate the disks' write cache even if
> they are behind the 3ware controller?
You don't use hdparm for that sort of thing; you need to use 3ware's
tw_cli utility. I believe that the individual drive caches are always
disabled, but whether the controller cache is turned on or not depends
on whether the card has a battery. The behavior here is kind of weird
though--it changes if you're in RAID mode vs. JBOD mode, so be careful
to look at what all the settings are. Some of these 3ware cards also default
to extremely aggressive background scanning for bad blocks, so you might have
to tweak that downward.
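For reference, the sorts of things I'd look at with tw_cli are along these
lines. The controller and unit numbers are assumptions, and the exact syntax
shifts a bit between firmware versions, so double-check against the manual
for your card:

  # controller overview, unit details, and battery status
  tw_cli /c0 show
  tw_cli /c0/u0 show all
  tw_cli /c0/bbu show all
  # controller write cache on unit 0 -- only sensible with a working BBU
  tw_cli /c0/u0 set cache=on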
--
Greg Smith 2ndQuadrant Baltimore, MD
PostgreSQL Training, Services and Support
greg(at)2ndQuadrant(dot)com www.2ndQuadrant.com