From: | "Albe Laurenz" <laurenz(dot)albe(at)wien(dot)gv(dot)at> |
---|---|
To: | <andy(dot)gumbrecht(at)orprovision(dot)com>, <pgsql-performance(at)postgresql(dot)org> |
Subject: | Re: Slow query, where am I going wrong? |
Date: | 2012-10-30 08:25:27 |
Message-ID: | D960CB61B694CF459DCFB4B0128514C2089A60C2@exadv11.host.magwien.gv.at |
Lists: pgsql-performance
Andy wrote:
> I have been pulling my hair out over the last few days trying to get
> any useful performance out of the following painfully slow query.
> The query is JPA created, I've just cleaned the aliases to make it
> more readable.
> Using 'distinct' or 'group by' deliver about the same results, but
> 'distinct' is marginally better.
> Hardware is pretty low end (a test box), but is mostly dedicated to
> PostgreSQL.
> The box spec and configuration is included at the end of this post -
> Some of the values have been changed just to see if things get better.
> Inserts have also become extremely slow. I was expecting a drop off
> when the database grew out of memory, but not this much.
>
> Am I really missing the target somewhere?
> Any help and or suggestions will be very much appreciated.
>
> Best regards,
>
> Andy.
>
> http://explain.depesz.com/s/cfb
The estimate on the join between recipe_version and test_result is not
good.
Maybe things will improve if you increase the statistics on
test_result.id_recipe_version.
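Something along these lines (500 is only an example target; the default
is 100, and higher values make ANALYZE and planning a bit more expensive):

-- collect a finer-grained histogram for the join column,
-- then re-analyze so the planner can use it
ALTER TABLE test_result ALTER COLUMN id_recipe_version SET STATISTICS 500;
ANALYZE test_result;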
If that does not help, maybe the nested loop join that takes
all your time can be sped up with the following index:
CREATE INDEX any_name ON test_item (id_test_result, type);
But I would not expect much improvement there.
BTW, you seem to have an awful lot of indexes defined, some
of which seem redundant.
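One way to spot candidates is to look for indexes that are never scanned,
for example (assuming the statistics counters have been collecting for a
representative workload period):

-- indexes with low or zero scan counts are candidates for dropping
SELECT schemaname, relname, indexrelname, idx_scan,
       pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
FROM pg_stat_user_indexes
ORDER BY idx_scan, pg_relation_size(indexrelid) DESC;

Indexes that back primary key or unique constraints will of course show up
there too, so don't drop anything blindly.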
Yours,
Laurenz Albe