From: Chris <dmagick(at)gmail(dot)com>
To: Tobias Brox <tobias(at)nordicbet(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Defining performance.
Date: 2006-12-01 03:32:05
Message-ID: 456FA235.1070406@gmail.com
Lists: pgsql-performance
Tobias Brox wrote:
> [nospam(at)hardgeus(dot)com - Thu at 06:37:12PM -0600]
>> As my dataset has gotten larger I have had to throw more metal at the
>> problem, but I have also had to rethink my table and query design. Just
>> because your data set grows linearly does NOT mean that the performance of
>> your query is guaranteed to grow linearly! A sloppy query that runs OK
>> with 3000 rows in your table may choke horribly when you hit 50000.
>
> Then some limit is hit ... either the memory cache, or that the planner
> is doing an unlucky change of strategy when hitting 50000.
Not really. A bad query is a bad query (e.g. missing a join condition). It
won't show up with 3000 rows, but it will very quickly if you increase
that by a reasonable amount. Even something as simple as a missing index
on a join column won't show up on a small dataset but will on a larger one.
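To illustrate (a made-up sketch; the tables and columns here are
hypothetical, not from the original poster's schema):

-- Forgetting the join condition turns a join into a cross join:
SELECT o.id, c.name
FROM orders o, customers c;   -- missing: WHERE o.customer_id = c.id
-- At 3000 rows each that's 9 million result rows: slow but survivable.
-- At 50000 rows each it's 2.5 billion rows, and the query "chokes".

-- Likewise, with no index on the join column the planner has to
-- sequentially scan; this is invisible on a tiny table:
CREATE INDEX orders_customer_id_idx ON orders (customer_id);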
It's a pretty common mistake to assume that a small dataset will behave
exactly the same as a larger one - not always the case.
--
Postgresql & php tutorials
http://www.designmagick.com/