| From: | "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com> | 
|---|---|
| To: | Joost Kraaijeveld <J(dot)Kraaijeveld(at)Askesis(dot)nl> | 
| Cc: | "Pgsql-Performance (E-mail)" <pgsql-performance(at)postgresql(dot)org> | 
| Subject: | Re: Can anyone explain this pgbench results? | 
| Date: | 2006-03-07 19:59:30 | 
| Message-ID: | 20060307195930.GB82989@pervasive.com | 
| Lists: | pgsql-performance | 
On Tue, Mar 07, 2006 at 08:49:30PM +0100, Joost Kraaijeveld wrote:
> 
> Jim C. Nasby wrote:
>  
> > Speaking of 'disks', what's your exact layout? Do you have a 5 drive
> > raid5 for the OS and the database, 1 drive for swap and 1 drive for
> > pg_xlog?
> 
> On a Sil SATA 3114 controller:
> /dev/sda OS + Swap
> /dev/sdb /var with pg_xlog
> 
> On the 3Ware 9500S-8, 5 disk array:
> /dev/sdc with the database (and very safe, my MP3 collection ;-))
> 
> As I wrote in one of my posts to Michael, I suspect that the card is not handling the amount of write operations as well as I expected. I wonder if anyone else sees the same characteristics with this kind of card.
Well, the problem is that you're using RAID5, which has a huge write
overhead: every small write becomes a read-modify-write cycle (read the
old data and parity, then write the new data and parity), so roughly
four back-end I/Os per logical write. You're unlikely to get good
write performance with it.
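As a back-of-the-envelope sketch of that penalty (the transaction rate below is a made-up assumption, not a measurement from this hardware):

```python
# Rough model of small-write amplification on RAID5 vs. RAID10.
# Numbers are illustrative assumptions, not benchmarks of the 9500S-8.

def raid5_small_write_ios(writes: int) -> int:
    """Each small RAID5 write is a read-modify-write cycle:
    read old data + read old parity + write new data + write new parity."""
    return writes * 4

def raid10_small_write_ios(writes: int) -> int:
    """Each RAID10 write just hits the two mirrored drives."""
    return writes * 2

if __name__ == "__main__":
    tps = 500  # assumed pgbench write rate (hypothetical)
    print(raid5_small_write_ios(tps))   # back-end I/Os on RAID5
    print(raid10_small_write_ios(tps))  # back-end I/Os on RAID10
```

So for the same client-visible write load, the RAID5 array does about twice the physical I/O of a RAID10 array, on top of losing a drive's worth of spindles to parity.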
Also, it sounds like sda and sdb are not mirrored. If that's the case,
you have no protection from a drive failure taking out your entire
database, because you'd lose pg_xlog.
If you want better performance, your best bets are either to set up
RAID10 or, if you don't care about the data, to go to RAID0.
-- 
Jim C. Nasby, Sr. Engineering Consultant      jnasby(at)pervasive(dot)com
Pervasive Software      http://pervasive.com    work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf       cell: 512-569-9461