From: | Rod Taylor <pg(at)rbt(dot)ca> |
---|---|
To: | Jason Coene <jcoene(at)gotfrag(dot)com> |
Cc: | Postgresql Performance <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Hardware upgrade for a high-traffic database |
Date: | 2004-08-10 23:06:52 |
Message-ID: | 1092179211.11635.30.camel@jester |
Lists: | pgsql-performance |
> Our database is about 20GB on disk, we have some quite large tables - 2M
> rows with TEXT fields in a sample table, accessed constantly. We average
> about 4,000 - 5,000 queries per second - all from web traffic. As you can
99% of that is reads, and probably the same data over and over again? You might
want to think about a small code change to cache sections of page output
in memory for the most commonly generated pages (there are usually 3 or
4 pages that account for 25% to 50% of web traffic -- the starting pages).
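As a rough illustration of the kind of caching I mean -- a minimal sketch only, where the function names, the 30-second TTL, and the hypothetical render_front_page() are all placeholders for whatever your app actually does:

    import time

    # Tiny in-memory cache for generated page output.
    # Keyed by page name; each entry stores (expiry_time, html).
    _page_cache = {}
    CACHE_TTL = 30  # seconds; slightly stale data is fine for starting pages

    def cached_page(key, render_func):
        """Return cached HTML for `key`, regenerating it at most
        once per CACHE_TTL seconds."""
        now = time.time()
        entry = _page_cache.get(key)
        if entry and entry[0] > now:
            return entry[1]              # still fresh -- skip the database entirely
        html = render_func()             # this is the part that hits the database
        _page_cache[key] = (now + CACHE_TTL, html)
        return html

    # Usage (hypothetical renderer that runs the expensive queries):
    # html = cached_page("front_page", render_front_page)

Even a cache this crude takes the 3 or 4 hottest pages off the database almost entirely.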
The fact that you're getting 5k queries/second off IDE drives tells me most
of the active data is already in memory -- so your actual working set is
probably quite small (less than 10% of the 20GB).
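One way to sanity-check that is the buffer hit ratio from the statistics collector (assuming stats are enabled). A sketch of the query, shown through psycopg2 with a placeholder DSN; note it only sees PostgreSQL's own shared buffers, not the OS page cache, so it understates how much is really cached:

    import psycopg2

    # Ratio of buffer hits to total block requests, across all databases.
    # Close to 1.0 means the working set is essentially served from memory.
    conn = psycopg2.connect("dbname=yourdb user=youruser")  # placeholder DSN
    cur = conn.cursor()
    cur.execute("""
        SELECT sum(blks_hit)::float8
               / nullif(sum(blks_hit) + sum(blks_read), 0)
        FROM pg_stat_database
    """)
    hit_ratio = cur.fetchone()[0]
    print("buffer hit ratio: %.3f" % hit_ratio)
    conn.close()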
If the above is all true (mostly reads, smallish dataset, etc.) and the
database is not growing very quickly, you might want to put your money into RAM
and RAM bandwidth rather than disk. An Opteron with 8GB of RAM, using the same old
IDE drives, would do nicely. Get a motherboard with a SCSI RAID controller on it, so the disk
subsystem can be upgraded in the future (when necessary).