From: Mike Glover <mpg4(at)duluoz(dot)net>
To: pgsql-performance(at)postgresql(dot)org
Subject: RAID or manual split?
Date: 2004-02-17 21:53:42
Message-ID: 20040217135342.1924326f.mpg4@duluoz.net
Lists: pgsql-performance
It seems that if I know the type and frequency of the queries a
database will be seeing, I could split the database by hand over
multiple disks and get better performance than I would with a RAID
array on similar hardware. Most of the data is volatile and easily
replaceable (and the rest is backed up independently), so redundancy
isn't important, and I'm willing to do some ongoing maintenance if I
can get a decent speed boost. Am I misguided, or might this work?
Details of my setup are below:
Six large (3-7 Mrow) 'summary' tables, each updated continuously
by 5-20 processes at about 0.5 transactions/second/process.
Periodically (currently every two weeks), join queries are
performed between one of the 'summary' tables (the same one each
time) and each of the other five. Each join touches most rows of
both tables, so indexes aren't used. Results are written into a
separate group of 'inventory' tables (about 500 Krow each), one
for each join.
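For illustration, each periodic join looks something like this
(made-up table and column names, not our real schema):

    -- placeholder names: one of the five periodic joins.
    -- both tables are read nearly in full, so the planner
    -- uses sequential scans rather than the indexes.
    INSERT INTO inventory_a (part_id, summary_qty, a_qty)
    SELECT m.part_id, m.qty, a.qty
    FROM summary_main m
    JOIN summary_a a USING (part_id);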
There are frequent (100-1000/day) queries against both the
inventory and summary tables by primary key -- these always use
the index and return < 10 rows.
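A typical lookup is just a single-key fetch, along these lines
(again, made-up names):

    SELECT * FROM inventory_a
    WHERE part_id = 12345;  -- index scan, < 10 rows back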
We're currently getting (barely) acceptable performance from a single
15k U160 SCSI disk, but db size and activity are growing quickly.
I've got more disks and a battery-backed LSI card on order.
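To make the 'split by hand' idea concrete, here's roughly the
layout I'm picturing once the new disks arrive, sketched as
tablespace-style DDL (tablespace support isn't in a released
version as I write this -- today the same effect means symlinking
directories under $PGDATA -- and all the names below are made up):

    -- one area per spindle; paths and names are hypothetical
    CREATE TABLESPACE disk1 LOCATION '/mnt/disk1/pgdata';
    CREATE TABLESPACE disk2 LOCATION '/mnt/disk2/pgdata';
    CREATE TABLESPACE disk3 LOCATION '/mnt/disk3/pgdata';

    -- keep the continuously-updated summary tables away from the
    -- inventory tables, and split heap from index so PK lookups
    -- don't compete with the bulk writes
    ALTER TABLE summary_main SET TABLESPACE disk1;
    ALTER INDEX summary_main_pkey SET TABLESPACE disk2;
    ALTER TABLE inventory_a SET TABLESPACE disk3;
    -- (WAL would get its own spindle too, via a symlinked pg_xlog)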
-mike
--
Mike Glover
GPG Key ID BFD19F2C <mpg4(at)duluoz(dot)net>