From: | "Joshua D(dot) Drake" <jd(at)commandprompt(dot)com> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | The science of optimization in practical terms? |
Date: | 2009-02-13 00:06:31 |
Message-ID: | 1234483591.9467.188.camel@jd-laptop.pragmaticzealot.org |
Lists: pgsql-hackers
Hello,
I was helping a customer today with what is becoming a common theme with
a lot of work we do. Basically, "It was working fine until recently."
Now, 90% of the time it is as simple as running an ANALYZE VERBOSE,
picking apart the relations that aren't being maintained properly, and
adjusting autovacuum or vacuum appropriately. If it isn't that, it is
usually something like increasing effective_cache_size or
default_statistics_target.
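For concreteness, that first pass usually looks something like the sketch
below. The values are illustrative placeholders, not recommendations, and
in practice they would go into postgresql.conf rather than per-session SETs:

  -- check whether relations are being maintained properly
  ANALYZE VERBOSE;

  -- typical follow-up adjustments (placeholder values)
  SET effective_cache_size = '4GB';
  SET default_statistics_target = 100;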
However, in recent times I have found that increasing cpu_tuple_cost,
cpu_operator_cost and cpu_index_tuple_cost can be very useful. This is
always in the scenario of "queries were running fine for months and
then, all of a sudden, they are not". It is also always on systems that
we are already maintaining and that are therefore (in theory) in good shape.
So my question is: what is the science, in practical terms, behind those
parameters? Normally I would just accept it as another PostgreSQL
idiosyncrasy, but the performance differences I am talking about are
large. After changing cpu_tuple_cost and cpu_operator_cost to 0.5 today,
two queries dropped from 10 seconds and 15 seconds to 2 seconds and
~900 ms respectively.
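For reference, the change itself was nothing more exotic than the session-level
sketch below; the 0.5 values are the ones mentioned above (stock defaults are
0.01 and 0.0025), and the table and query are hypothetical stand-ins for the
customer's actual workload:

  SHOW cpu_tuple_cost;       -- 0.01 by default
  SHOW cpu_operator_cost;    -- 0.0025 by default

  -- raise the CPU cost estimates for this session and re-test
  SET cpu_tuple_cost = 0.5;
  SET cpu_operator_cost = 0.5;

  -- hypothetical stand-in for the slow query
  EXPLAIN ANALYZE
    SELECT * FROM orders WHERE created_at > now() - interval '7 days';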
Sincerely,
Joshua D. Drake
--
PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
Consulting, Development, Support, Training
503-667-4564 - http://www.commandprompt.com/
The PostgreSQL Company, serving since 1997