From: Neil Conway <neilc(at)samurai(dot)com>
To: gearond(at)cvc(dot)net
Cc: Guillaume Houssay <ghoussay(at)noos(dot)fr>, PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: Re: configuration according to the database
Date: 2003-03-22 01:56:25
Message-ID: 1048298185.11856.5.camel@tokyo
Lists: pgsql-general
On Fri, 2003-03-21 at 15:28, Dennis Gearon wrote:
> If you are looking for speed, I would make the whole thing as arrays in memory
> in C++, and just do backups to the database on a regular basis.
You'd suggest storing "12 to 15GB" of data in main memory on an x86
machine with 4GB of RAM?
> Guillaume Houssay wrote:
> > 4 tables will have 1Million rows and 1000 columns with 90% of INT2 and
> > the rest of float (20% of all the data will be 0)
1,000 columns? That doesn't sound like the result of good database
design...
And if you'd like to try micro-optimizations, multiple NULL values in a
single tuple are stored efficiently -- so if those "0" values show up
more than once per tuple, consider storing them in the DB as NULL and
then converting them back to 0 (perhaps using COALESCE) on output.
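As a rough sketch of that round trip (the table and column names here are made up for illustration), it might look something like:

    -- store zeros as NULL so they land in the per-tuple NULL bitmap
    INSERT INTO measurements (id, val)
        VALUES (1, NULLIF(42, 0)),
               (2, NULLIF(0, 0));   -- NULLIF turns 0 into NULL

    -- convert NULLs back to 0 on output
    SELECT id, COALESCE(val, 0) AS val
    FROM measurements;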
> > DELL
> > bi-processor 2.8GHz
> > 4GB RAM
> > 76GB HD using Raid 5
> > Linux version to be defined (Redhat ?)
> >
> > Do you think this configuration is enough to have good performance after
> > setting up properly the database ?
Without more information on how frequently your clients are going to be
accessing the DB, it's really impossible to say.
Cheers,
Neil