From: Dennis Bjorklund <db(at)zigo(dot)dhs(dot)org>
To: Pierre-Frédéric Caillaud <lists(at)boutiquenumerique(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: The usual sequential scan, but with LIMIT !
Date: 2004-09-06 12:45:30
Message-ID: Pine.LNX.4.44.0409061440040.9559-100000@zigo.dhs.org
Lists: pgsql-general pgsql-performance
On Mon, 6 Sep 2004, Pierre-Frédéric Caillaud wrote:
> Why is it that way ? The planner should use the LIMIT values when
> planning the query, should it not ?
It does use the LIMIT values: the estimated cost was lower when you had the
limit. What you need to do is tune pg for your computer, for example the
following settings:
* effective_cache_size - this setting tells pg how much the OS is caching
(for example, use top during a normal workload to find out). You said
that the tables fit in memory, and by telling pg how much is cached it
might adjust its plans accordingly.
* random_page_cost - how expensive a random access is compared to a
sequential one. This depends on the computer and disk system you have.
If the setting above does not help, maybe you need to lower this
variable to 2 or so (see the example below).
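A quick way to see whether these knobs change the plan, without editing the
config file, is to override them for the current session and re-run EXPLAIN.
The numbers and the table/column names below are only placeholders, not
recommendations; note that effective_cache_size is counted in 8 kB pages:

  -- Illustrative only: session-local overrides, gone when you disconnect.
  SET effective_cache_size = 98304;  -- ~768 MB of OS cache, in 8 kB pages
  SET random_page_cost = 2;          -- default is 4

  -- Re-check the plan for the problematic LIMIT query (names made up).
  EXPLAIN ANALYZE
  SELECT * FROM your_table ORDER BY some_col LIMIT 10;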
And don't forget the shared_buffers setting. In my experience most people
have it tuned already (though usually too high). Here is an article that
might help you:
http://www.varlena.com/varlena/GeneralBits/Tidbits/perf.html
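For reference, the relevant postgresql.conf lines might look something like
this. The numbers are placeholders for a box with about 1 GB of RAM, not
recommendations for your hardware; in 7.x both shared_buffers and
effective_cache_size are counted in 8 kB units:

  # postgresql.conf -- illustrative values only
  shared_buffers = 10000            # ~80 MB; bigger is often not better
  effective_cache_size = 98304      # ~768 MB, roughly what top reports as cached
  random_page_cost = 2              # default 4; lower if data is mostly cached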
--
/Dennis Björklund