From: Edoardo Panfili <edoardo(at)aspix(dot)it>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: A questions on planner choices
Date: 2011-08-19 20:37:49
Message-ID: 4E4EC99D.5090009@aspix.it
Lists: pgsql-general
On 19/08/11 22:15, Scott Marlowe wrote:
> On Fri, Aug 19, 2011 at 1:05 PM, Edoardo Panfili <edoardo(at)aspix(dot)it> wrote:
>> [1] Plan for the first query
>> ------------------------------------------------------------------------
>> Sort  (cost=20.45..20.46 rows=1 width=931) (actual time=4457.775..4457.786 rows=76 loops=1)
>>   Sort Key: cartellino.id
>>   Sort Method: quicksort  Memory: 74kB
>>   ->  Hash Join  (cost=8.32..20.44 rows=1 width=931) (actual time=243.679..4457.658 rows=76 loops=1)
>>         Hash Cond: (cartellino.idspecie = principale.id)
>>         ->  Nested Loop  (cost=0.00..9.81 rows=614 width=886) (actual time=4.094..4439.024 rows=18370 loops=1)
>
> The row estimate here is off by a factor of 30 or so. In this case a
> different join method would likely work better. It might be that
> cranking up stats for the columns involved will help, but if that
> doesn't change the estimates then we might need to look elsewhere.
>
> What's your work_mem and random_page_cost?
work_mem = 1MB
random_page_cost = 4
I am using an SSD but the production system uses a standard hard disk.
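
On this SSD test box it may be worth lowering random_page_cost and raising work_mem for the session before re-running the query; the values below are only guesses to see whether the plan changes, not tuned settings:

set work_mem = '16MB';          -- up from the 1MB default
set random_page_cost = 1.5;     -- random reads are cheap on SSD; the spinning-disk box would stay near 4
-- then re-run EXPLAIN ANALYZE on the slow query and compare the chosen join method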
I also tried:
set default_statistics_target=10000;
vacuum analyze cartellino;
vacuum analyze specie; -- the base table for specienomi
vacuum analyze confini_regioni;
but the runtime is still 4617.023 ms.
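
If the global target does not move the estimate, a per-column override on the join column might be the next thing to try; idspecie is taken from the Hash Cond above, and 1000 is just a guess:

alter table cartellino alter column idspecie set statistics 1000;
analyze cartellino;  -- re-gather statistics so the planner sees the new target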
Edoardo