From: | "Dennis Fleurbaaij" <dennis(at)core-lan(dot)nl> |
---|---|
To: | pgsql-general(at)postgresql(dot)org |
Subject: | Extreme preformance |
Date: | 2001-04-12 11:14:47 |
Message-ID: | 3ad58e63$0$35002@news2.zeelandnet.nl |
Lists: pgsql-general
Hi,
I'm creating a database for a file list at a LAN party for 400 people. The list is expected to hold somewhere between 2.5 and 5 million tuples, and it is refreshed every 60 minutes. Every file record holds 5 varchar fields and an int8 for the file size.
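For concreteness, here is roughly what I have in mind (the table and column names below are just illustration, nothing is final yet):

CREATE TABLE files (
    host     varchar(64),   -- machine sharing the file
    share    varchar(64),   -- share name on that machine
    path     varchar(255),  -- directory within the share
    filename varchar(255),  -- file name, the main search target
    ext      varchar(16),   -- extension, for filtering by type
    filesize int8           -- size in bytes
);

-- searches will mostly hit the filename, so index it
CREATE INDEX files_filename_idx ON files (filename);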
In the meantime, people will be searching this database at an expected rate of two queries per second, with peaks of around 50 selects/sec.
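A typical search would look something like this (again, just a guess at the final form):

SELECT host, share, path, filename, filesize
FROM files
WHERE filename LIKE 'quake%';

As far as I know, an anchored prefix pattern like this can use the filename index (at least with the default C locale), whereas a leading-wildcard search such as '%quake%' forces a scan of the whole table, which matters a lot at 5 million rows.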
The hardware for this database is a dual P3 700 with 500MB RAM and RAID-0. My question is: can this be done? The application that drives it is capable of pushing that much data in no time, but can the database keep up, and more importantly, where can I find a guide to tuning it for this kind of violence?
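For the hourly refresh itself, I was thinking along these lines (a sketch only; the file path is made up and the details aren't settled):

-- one transaction, so searches keep seeing the old list until COMMIT
BEGIN;
DELETE FROM files;                     -- clear out the old list
COPY files FROM '/tmp/filelist.txt';   -- bulk load the new dump, much faster than INSERTs
COMMIT;
-- reclaim the dead rows and refresh the planner statistics
VACUUM ANALYZE files;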
Oh, and in case the dual P3 is not enough, we have a spare quad Xeon 700 with a gig of RAM, but I'd like to use that for other purposes.
Thanks in advance,
Dennis Fleurbaaij
www.core-lan.nl (sorry, Dutch only :/ )