From: Matthew Nuzum <mattnuzum(at)gmail(dot)com>
To: Yves Vindevogel <yves(dot)vindevogel(at)implements(dot)be>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Projecting currentdb to more users
Date: 2005-07-12 18:00:49
Message-ID: f3c0b4080507121100a55ccc4@mail.gmail.com
Lists: pgsql-performance
On 7/12/05, Yves Vindevogel <yves(dot)vindevogel(at)implements(dot)be> wrote:
> Hi,
>
> We have a couple of databases that are identical (one for each customer).
> They are all relatively small, ranging from 100k records to 1m records.
> There's only one main table with some smaller tables, a lot of indexes
> and some functions.
>
> I would like to make an estimate of the performance, the disk space,
> and other related things when we have a database of, for instance,
> 10 million or 100 million records.
>
> Is there any math to be done on that?
It's pretty easy to make a database run fast with only a few thousand
records, or even a million records; however, things start to slow down
non-linearly once the database grows too big to fit in RAM.
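
If you mainly want a rough disk-space projection, one thing you could
try is measuring the current on-disk footprint per row and multiplying.
A minimal sketch (the table and index names below are placeholders,
8192 assumes the default block size, and growth won't be perfectly
linear because of index behavior and fill factors):

-- Pages used by the main table and its indexes; relpages is only as
-- fresh as the last VACUUM/ANALYZE.
SELECT relname, relpages, relpages * 8192 AS approx_bytes
FROM pg_class
WHERE relname IN ('main_table', 'main_table_pkey');

-- Rough projection: (approx_bytes / current row count) * target row
-- count, e.g. * 10000000 for a 10 million row estimate.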
I'm not a guru, but my attempts to estimate this have not been very accurate.
Maybe (just maybe) you could get an idea by disabling the OS cache on
the file system(s) holding the database, then somehow fragmenting the
drive severely (maybe by putting each table in its own disk
partition?!?), and measuring performance.
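
A less invasive way to watch for the point where the working set stops
fitting in memory is the block-level I/O statistics. This only covers
PostgreSQL's own buffer cache, not the OS cache, and it assumes
block-level stats collection is turned on, but something like:

-- Share of heap reads served from shared buffers, per table.
-- A hit ratio that falls as the tables grow suggests the working set
-- no longer fits in memory.
SELECT relname,
       heap_blks_hit,
       heap_blks_read,
       round(heap_blks_hit::numeric /
             nullif(heap_blks_hit + heap_blks_read, 0), 2) AS hit_ratio
FROM pg_statio_user_tables
ORDER BY heap_blks_read DESC;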
On the positive side, there are a lot of wise people on this list who
have plenty of experience optimizing slow queries on big databases. So
queries that run in 20 ms now but slow down to 7 seconds when your
tables grow will likely benefit from optimizing.
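
When that happens, the usual first step is to run EXPLAIN ANALYZE on
the offending query and compare the planner's estimates with the actual
row counts and timings. A made-up example (the table and column names
are just placeholders):

EXPLAIN ANALYZE
SELECT *
FROM main_table            -- placeholder table name
WHERE customer_id = 42;    -- look for "Seq Scan" vs "Index Scan" in the output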
--
Matthew Nuzum
www.bearfruit.org