From: marcin mank <marcin(dot)mank(at)gmail(dot)com>
To: Richard Neill <rn214(at)cam(dot)ac(dot)uk>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Postgres query completion status?
Date: 2009-11-20 21:38:14
Message-ID: b1b9fac60911201338v32f8b7belac02f1ec8a6a0a20@mail.gmail.com
Lists: pgsql-performance
>>> max_connections = 500 # (change requires restart)
>>> work_mem = 256MB # min 64kB
>>
>> Not that it has to do with your current problem, but this combination
>> could bog down your server if enough clients run sorted queries
>> simultaneously. You should probably back off work_mem by at least an
>> order of magnitude.
>>
>
> What's the correct way to configure this?
>
> * We have one client which needs to run really big transactions (therefore
> needs the work memory).
>
You can set work_mem for the specific user (like "SET work_mem TO x")
at the beginning of the session.
Here are some things I noticed (it is more like shooting in the dark,
but still...).

The expensive part is this:
    -> Sort (cost=280201.66..281923.16 rows=688602 width=300)
       (actual time=177511.806..183486.593 rows=41317448 loops=1)
         Sort Key: du_report_sku.wid, du_report_sku.storeorderid,
                   du_report_sku.genreorderid
         Sort Method: external sort  Disk: 380768kB
         -> HashAggregate (cost=197936.75..206544.27 rows=688602 width=36)
            (actual time=7396.426..11224.839 rows=6282564 loops=1)
              -> Seq Scan on du_report_sku
                 (cost=0.00..111861.61 rows=6886011 width=36)
                 (actual time=0.006..573.419 rows=6897682 loops=1)
(It is pretty confusing that the HashAggregate reports ~6M rows while
the sort reports 41M rows, but maybe I am misreading it.)
Anyway, I think that if You up work_mem for this query to 512MB, the
sort will happen in memory, and thus be plenty faster.
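A quick way to verify (the 512MB figure just needs to comfortably exceed
the 380768kB the sort currently spills to disk):

    SET work_mem TO '512MB';
    EXPLAIN ANALYZE <the query>;

If it helped, the Sort node should report something like
"Sort Method: quicksort Memory: ..." instead of
"Sort Method: external sort Disk: 380768kB".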
Also, You say You are experiencing unstable query plans, and this may
mean that GEQO is kicking in (but Your query seems too simple for that,
even considering the views involved). A quick way to check would be to
run EXPLAIN <the query> a couple dozen times and see whether the plans
change. If they do, try upping geqo_threshold.
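For example (geqo_threshold defaults to 12, and both settings can be
changed per session, so this is easy to test):

    SET geqo_threshold = 20;  -- exhaustive planning for up to 19 FROM items
    SET geqo TO off;          -- or disable GEQO outright and see if plans stabilize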
You have seq_page_cost 4 times larger than random_page_cost. You say
You are on SSD, so there is no random access penalty. Try setting them
equal.
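That is (again settable per session for testing):

    SET seq_page_cost = 1.0;
    SET random_page_cost = 1.0;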
Your plan is full of merge joins; some indices may be in order. Merge
join is a kind of "last-chance" plan.
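For instance, an index matching the sort key from the plan above might
let the planner avoid the big external sort altogether (a guess on my
part; I do not know what indices You already have, and the index name
is mine):

    CREATE INDEX du_report_sku_sort_idx
        ON du_report_sku (wid, storeorderid, genreorderid);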
The query is:

    SELECT (core.demand.qty - viwcs.wave_end_demand.qty_remaining)
    FROM core.demand,
         viwcs.previous_wave
         LEFT OUTER JOIN viwcs.wave_end_demand USING (wid)
    WHERE core.demand.id = viwcs.wave_end_demand.demand_id;
Isn't the left join equivalent to an inner join, since in the WHERE
clause You are comparing values from the nullable side of the join? If
they come out NULL, they will get discarded anyway...
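That is, I would expect this rewrite to return the same rows (my
reading of the query, so please double-check):

    SELECT (core.demand.qty - viwcs.wave_end_demand.qty_remaining)
    FROM core.demand,
         viwcs.previous_wave
         JOIN viwcs.wave_end_demand USING (wid)
    WHERE core.demand.id = viwcs.wave_end_demand.demand_id;

Spelling it as an inner join may give the planner more freedom to
reorder the joins.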
I hope You find some of this useful.
Greetings
Marcin