From: | Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> |
---|---|
To: | Edoardo Panfili <edoardo(at)aspix(dot)it> |
Cc: | pgsql-general(at)postgresql(dot)org |
Subject: | Re: A questions on planner choices |
Date: | 2011-08-19 22:11:41 |
Message-ID: | CAOR=d=36vvoDbN-N-2jL37Pcmruuf2Jv3wtDcaRbuuwxJMoHjA@mail.gmail.com |
Lists: | pgsql-general |
On Fri, Aug 19, 2011 at 2:37 PM, Edoardo Panfili <edoardo(at)aspix(dot)it> wrote:
>
> work_mem = 1MB
> random_page_cost = 4
>
> I am using an SSD but the production system uses a standard hard disk.
>
> I did a try also with
> set default_statistics_target=10000;
> vacuum analyze cartellino;
> vacuum analyze specie; -- the base table for specienomi
> vacuum analyze confini_regioni;
>
> but it is always 4617.023 ms

OK, try turning up work_mem for just this connection, i.e.:
psql mydb
set work_mem='64MB';
explain analyze select .... ;
and see if you get a different plan. Often only a slightly higher
work_mem is needed to get a better plan. We're looking for a hash
join to occur here, which should be much, much faster. After testing you
can set work_mem globally in the postgresql.conf file. Keep it
smallish there, as it's allocated per sort or hash operation, per
connection (and a single query can use several), so usage can climb
quickly with a lot of active connections and swamp your server's
memory. I run a machine with 128GB of memory and ~500 connections
and have it set to 16MB.
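The sizing concern above can be sketched with some rough worst-case
arithmetic (the connection count and work_mem value come from the email;
the sorts-per-query figure is an illustrative assumption, since each
sort or hash node in a plan can claim its own work_mem):

```python
# Rough worst-case memory estimate for a global work_mem setting.
connections = 500        # active connections (from the email)
work_mem_mb = 16         # work_mem per sort/hash operation, in MB
sorts_per_query = 2      # assumption: a couple of sort/hash nodes per query

worst_case_gb = connections * work_mem_mb * sorts_per_query / 1024
print(worst_case_gb)     # 15.625 -- GB, comfortably under 128 GB of RAM
```

With work_mem at 64MB instead, the same math gives ~62.5 GB, which is why
a large value is safer per-session than in postgresql.conf.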