From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Pailloncy Jean-Gérard <pailloncy(at)ifrance(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re:
Date: 2004-04-14 13:22:42
Message-ID: 4923.1081948962@sss.pgh.pa.us
Lists: pgsql-performance
Pailloncy Jean-Gérard <pailloncy(at)ifrance(dot)com> writes:
> I run the following command three times so the data is fully cached and disk I/O does not skew the results.
Do you think that's actually representative of how your database will
behave under load?
If the DB is small enough to be completely cached in RAM, and you
expect it to remain so, then it's sensible to optimize on the basis
of fully-cached test cases. Otherwise I think you are optimizing
the wrong thing.
If you do want to plan on this basis, you want to set random_page_cost
to 1, make sure effective_cache_size is large, and perhaps increase
the cpu_xxx cost numbers. (What you're essentially doing here is
reducing the estimated cost of a page fetch relative to CPU effort.)
regards, tom lane
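
A minimal sketch of the settings described above, assuming a session-level test on a PostgreSQL server of that era; the specific values are illustrative assumptions, not taken from this thread:

    -- Tell the planner a random page fetch costs the same as a sequential
    -- one (only reasonable when the whole database stays in RAM).
    SET random_page_cost = 1.0;           -- default is 4

    -- Advertise a large cache; in PostgreSQL 7.x/8.0 this is measured in
    -- 8 kB pages, so 65536 pages is roughly 512 MB (an assumed size).
    SET effective_cache_size = 65536;

    -- Optionally raise the cpu_xxx cost factors so CPU effort weighs more
    -- heavily relative to page fetches (values here are just 2x defaults).
    SET cpu_tuple_cost = 0.02;            -- default 0.01
    SET cpu_index_tuple_cost = 0.002;     -- default 0.001
    SET cpu_operator_cost = 0.005;        -- default 0.0025

The same parameters can be set permanently in postgresql.conf once the fully-cached assumption is confirmed to hold under real load.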