From: Hannu Krosing <hannu(at)2ndQuadrant(dot)com>
To: jd(at)commandprompt(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: The science of optimization in practical terms?
Date: 2009-02-13 20:56:42
Message-ID: 1234558602.7339.8.camel@huvostro
Lists: pgsql-hackers
On Thu, 2009-02-12 at 16:06 -0800, Joshua D. Drake wrote:
> Hello,
>
> I was helping a customer today with what is becoming a common theme with
> a lot of work we do. Basically, "It was working fine until recently."
> Now 90% of the time it is as simple as running an ANALYZE VERBOSE and
> picking apart relations that aren't being maintained properly and adjust
> autovacuum or vacuum appropriately. If it isn't that, it is usually
> something like increasing effective_cache_size, or
> default_statistics_target.
>
> However, in recent times I have found that increasing cpu_tuple_cost,
> cpu_operator_cost and cpu_index_tuple_cost to be very useful. This is
> always in the scenario of, "queries were running fine for months and
> then all of a sudden, they are not". It is also always on systems that
> we are already maintaining and thus (in theory) are in good shape.
>
> So my question is, what is the science in practical terms behind those
> parameters?
In the most general terms, increasing these will favor index access over
seqscans (well, increasing _only_ cpu_index_tuple_cost won't, but still).
Basically, for any database with multiple users that is growing anywhere
near the point where it no longer fits mostly in cache, it is a good rule
of thumb to push PostgreSQL towards selecting plans with mostly index
access, as PostgreSQL's planner is not much aware of other queries running
in parallel and thus cannot know by itself that it should do so.
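For example, something along these lines (untested sketch; the table, query
and numbers are made up just to illustrate the knobs) lets you see the
effect in one session before touching postgresql.conf:

-- Untested sketch: compare plans before and after raising the CPU cost GUCs,
-- scoped to this transaction only.  Table and query are invented for illustration.
BEGIN;
EXPLAIN SELECT * FROM orders WHERE customer_id = 42;   -- baseline plan

SET LOCAL cpu_tuple_cost = 0.05;        -- default 0.01
SET LOCAL cpu_operator_cost = 0.01;     -- default 0.0025
SET LOCAL cpu_index_tuple_cost = 0.01;  -- default 0.005

EXPLAIN SELECT * FROM orders WHERE customer_id = 42;   -- often flips to an index scan
ROLLBACK;  -- SET LOCAL values disappear with the transaction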
Things are fast as long as the main tables stay in cache, preferably in pg
shared memory, but the OS cache is also quite ok, and then suddenly
deteriorate once some part of them no longer does.
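One rough way to get a feel for that (untested sketch; note it only sees
postgres' own shared buffers, not the OS cache) is something like:

-- Untested sketch: approximate heap cache hit ratio per table.
SELECT relname,
       heap_blks_read,
       heap_blks_hit,
       round(heap_blks_hit::numeric
             / nullif(heap_blks_hit + heap_blks_read, 0), 3) AS hit_ratio
  FROM pg_statio_user_tables
 ORDER BY heap_blks_read DESC
 LIMIT 20;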
You can watch for these things by monitoring pg_stat_user_tables /
pg_stat_user_indexes and make educated guesses about whether some tables
should or should not have that much traffic of the type indicated there.
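For example (untested sketch; the ordering and LIMIT are just one way to
slice it):

-- Untested sketch: tables getting heavy seqscan traffic relative to index scans;
-- large seq_tup_read values on big tables are worth a closer look.
SELECT relname,
       seq_scan, seq_tup_read,
       idx_scan, idx_tup_fetch
  FROM pg_stat_user_tables
 ORDER BY seq_tup_read DESC
 LIMIT 20;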
> Normally I would just accept it as another PostgreSQL
> idiosyncrasy but the performance differences I am talking about are
> large. After changing cpu_tuple_cost and cpu_operator_cost today to 0.5
> I decreased two queries from 10 seconds and 15 seconds to 2 seconds and
> ~900 ms respectively.
>
> Sincerely,
>
> Joshua D. Drake
>
>
> --
> PostgreSQL - XMPP: jdrake(at)jabber(dot)postgresql(dot)org
> Consulting, Development, Support, Training
> 503-667-4564 - http://www.commandprompt.com/
> The PostgreSQL Company, serving since 1997
>
--
------------------------------------------
Hannu Krosing http://www.2ndQuadrant.com
PostgreSQL Scalability and Availability
Services, Consulting and Training