From: Francisco Reyes <lists(at)natserv(dot)com>
To: Martin Dillard <martin(at)edusoftinc(dot)com>
Cc: Andrew Sullivan <andrew(at)libertyrms(dot)info>, <pgsql-general(at)postgresql(dot)org>
Subject: Re: scaling a database
Date: 2002-03-08 17:47:37
Message-ID: 20020308120134.X25992-100000@zoraida.natserv.net
Lists: pgsql-general
On Mon, 25 Feb 2002, Martin Dillard wrote:
> I am basically looking for examples or case studies to learn from. I
> realize that our application will be unique and that a valid answer
> to my question is "it depends" but I am interested in hearing if
> there are other measures required besides increasing the processing
> power, memory, or disk space allocated to PostgreSQL.
Take a very good look at your DB design, OS settings, and buffer settings.
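For the buffer side, a sketch of the kind of knobs to review in
postgresql.conf (parameter names as in recent releases; older 7.x versions
use sort_mem instead of work_mem, and the values below are placeholders,
not recommendations):

    # postgresql.conf -- illustrative placeholders, not recommendations
    shared_buffers = 128MB        # PostgreSQL's own page cache
    work_mem = 16MB               # per-sort/per-hash working memory
    effective_cache_size = 1GB    # hint about available OS cache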
One thing we did was split a couple of tables in two, even though the data
was all unique (so the split wasn't for deduplication). The key issue was
that a small part of the data was used perhaps 80% of the time, while the
rest was 60%+ of the size but used only 20% of the time (numbers pulled
out of a hat, just to give you an idea).
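As a sketch of what such a split looks like (table and column names here
are made up for illustration, not from our actual schema): the small, hot
columns stay in a narrow table, and the bulky, rarely-read columns move to
a 1:1 companion keyed on the same primary key:

    -- Narrow "hot" table: small columns read constantly.
    CREATE TABLE account (
        account_id integer PRIMARY KEY,
        name       text NOT NULL,
        status     char(1) NOT NULL
    );

    -- 1:1 "cold" companion: bulky columns read rarely.
    CREATE TABLE account_detail (
        account_id integer PRIMARY KEY REFERENCES account(account_id),
        notes      text,
        history    text
    );

    -- The common case scans only the narrow table:
    SELECT name, status FROM account WHERE account_id = 42;

    -- The rare case joins the big columns in on demand:
    SELECT a.name, d.notes
    FROM account a JOIN account_detail d USING (account_id)
    WHERE a.account_id = 42;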
These splits seem to have helped. We didn't measure how much, but it was
enough to be noticeable.
Also make sure your data is properly normalized. The name of the game is
I/O: the more you can change your data and layouts to minimize I/O, the
better your performance will be.
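A toy example of the normalization point (again, made-up names): repeating
a customer's address on every order row makes every order scan drag that
text along, while factoring it out keeps the frequently-scanned rows small:

    -- Instead of repeating customer data on every order row ...
    --   orders(order_id, customer_name, customer_address, total)
    -- ... store it once and join when needed:
    CREATE TABLE customers (
        customer_id integer PRIMARY KEY,
        name        text NOT NULL,
        address     text
    );

    CREATE TABLE orders (
        order_id    integer PRIMARY KEY,
        customer_id integer REFERENCES customers(customer_id),
        total       numeric
    );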
Our tables combined hold about 10 million records; our biggest table is
probably in the 8 million record range.