Re: Re: Increasing work_mem and shared_buffers on Postgres 9.2 significantly slows down queries

From: Petr Praus <petr(at)praus(dot)net>
To: Gunnar Nick Bluth <gunnar(dot)bluth(at)pro-open(dot)de>
Cc: Marcos Ortiz <mlortiz(at)uci(dot)cu>, pgsql-performance(at)postgresql(dot)org, Claudio Freire <klaussfreire(at)gmail(dot)com>, jeff(dot)janes(at)gmail(dot)com
Subject: Re: Re: Increasing work_mem and shared_buffers on Postgres 9.2 significantly slows down queries
Date: 2012-11-08 18:23:03
Message-ID: CACezXZ-ka2RgGQ42NhEmqVXpBjFAw-sKi=qdXoJKsM5URCWnqA@mail.gmail.com
Lists: pgsql-performance

On 6 November 2012 14:50, Gunnar "Nick" Bluth <gunnar(dot)bluth(at)pro-open(dot)de> wrote:

> Am 06.11.2012 21:24, schrieb Petr Praus:
>
> On 6 November 2012 14:17, Gunnar "Nick" Bluth <gunnar(dot)bluth(at)pro-open(dot)de> wrote:
>
>> Am 06.11.2012 21:08, schrieb Petr Praus:
>>
>>
>>> 2MB: http://explain.depesz.com/s/ul1
>>> 4MB: http://explain.depesz.com/s/IlVu
>>> 10MB: http://explain.depesz.com/s/afx3
>>> 12MB: http://explain.depesz.com/s/i0vQ
>>>
>> See the change in the plan between 10MB and 12MB, directly at top
>> level? That narrows the thing down quite a bit.
>>
>> Though I wonder why this didn't show in the original plans...
>
>
> Yes, the 2, 4 and 10 MB plans are the same; the only difference is the
> number of buckets. But with 12 MB it makes completely different choices:
> it decides to use sequential scans and hash right joins instead of merge
> joins, and those sequential scans take a long time. Could this be caused
> by some missing indices, perhaps?
>
>
> Well, you do have indices, as we can clearly see.
>
>
> The original plans I posted at the start are the same as the 12MB plan.
> I'm not sure why that is; I really hope I didn't make some sort of mistake
> there.
>
>
> I had been wondering why you didn't have any indices, to be honest. However,
> the execution times still grow with work_mem, which is interesting
> independently of the actual plan change...
>
>
>
> Thanks for your help by the way! :-)
>
>
>
> Oh, no worries there... this is by far the most interesting challenge I've
> encountered in months ;-)
>
> But I do admit that I've reached the end of the ladder now. No idea how
> you can improve your runtime yet. Possible approaches:
> - use full text search on "personinfo"
> - try different join_collapse_limit / from_collapse_limit /
> enable_hashjoin values
>
> The most pragmatic approach is probably to just stick with work_mem = 1MB
> (or less) ;-), but that may potentially bite you later.
>

Yes, that's what I'm running now in production :) When I have more time I
may come up with more queries to test the overall system better.
We'll see if anyone else comes up with something, but I am out of things to
try too, so I guess I'll put this aside for now.
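
If anyone does pick this up later, here is a minimal sketch of the kind of
per-session experiment discussed above: work_mem and the planner knobs
Gunnar mentions can all be set locally, so the 10MB-vs-12MB plan flip can be
reproduced (and the hash-join plan suppressed) without touching
postgresql.conf. The SELECT below is only a stand-in, not the actual query
from this thread.

  -- Session-local experiment; nothing here persists past ROLLBACK.
  -- The SELECT is a placeholder; substitute the problematic query.
  BEGIN;
  SET LOCAL work_mem = '10MB';
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM personinfo;
  ROLLBACK;

  BEGIN;
  SET LOCAL work_mem = '12MB';
  SET LOCAL enable_hashjoin = off;    -- one of the settings suggested above
  SET LOCAL join_collapse_limit = 1;  -- join in the order the query is written
  EXPLAIN (ANALYZE, BUFFERS) SELECT count(*) FROM personinfo;
  ROLLBACK;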

>
>
> --
> Gunnar "Nick" Bluth
> RHCE/SCLA
>
> Mobil +49 172 8853339
> Email: gunnar(dot)bluth(at)pro-open(dot)de
> __________________________________________________________________________
> In 1984 mainstream users were choosing VMS over UNIX. Ten years later
> they are choosing Windows over UNIX. What part of that message aren't you
> getting? - Tom Payne
>
>
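
(For the archives, and not something from this thread: if the 1MB default
ever turns out to be too tight for specific reports, PostgreSQL also allows
keeping the safe global setting while raising work_mem selectively. The role
and database names below are made-up examples; the SELECT is again just a
stand-in for the expensive query.)

  -- Override work_mem only where larger sorts/hashes are known to help.
  ALTER ROLE reporting_user SET work_mem = '16MB';   -- hypothetical role
  ALTER DATABASE analytics SET work_mem = '8MB';     -- hypothetical database

  -- Or raise it for a single statement only:
  BEGIN;
  SET LOCAL work_mem = '16MB';
  SELECT count(*) FROM personinfo;   -- stand-in for the expensive query
  COMMIT;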
