Re: Help optimize view

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Relyea, Mike" <Mike(dot)Relyea(at)xerox(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org, "Kevin Grittner" <Kevin(dot)Grittner(at)wicourts(dot)gov>
Subject: Re: Help optimize view
Date: 2007-08-13 20:17:23
Message-ID: 21054.1187036243@sss.pgh.pa.us
Lists: pgsql-performance

"Relyea, Mike" <Mike(dot)Relyea(at)xerox(dot)com> writes:
> I've increased shared_buffers to 128MB, and restarted the server. My
> total run time didn't really change.

It doesn't look like you can hope for much in terms of improving the
plan. The bulk of the time is going into scanning ParameterValues and
Measurements, but AFAICS there is no way for the query to pull fewer
rows from those tables than it is doing, and the size of the join means
that a nestloop indexscan is likely to suck. (You could try forcing one
by setting enable_hashjoin and enable_mergejoin to OFF, but I don't have
much hope for that.)
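
If you do want to try that experiment, a couple of session-local SETs
are enough (the view name below is only a placeholder for whatever your
view is actually called):

    SET enable_hashjoin = off;
    SET enable_mergejoin = off;
    EXPLAIN ANALYZE SELECT * FROM your_view;  -- substitute the real view name
    RESET enable_hashjoin;
    RESET enable_mergejoin;

These settings affect only the current session, so there's no need to
touch postgresql.conf for the test.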

If you haven't played with work_mem yet, increasing that might make the
hash joins go a bit faster --- but it looks like most of the time is
going into the raw relation scans, so there's not going to be a lot of
win to be had there either.
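
The easiest way to experiment with that is also per-session, for example

    SET work_mem = '64MB';  -- or the equivalent value in kB on older releases
    EXPLAIN ANALYZE SELECT * FROM your_view;

(64MB is just an example figure; how high you can safely go depends on
how much RAM you can spare per sort or hash step.)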

Basically, joining lots of rows like this takes a while. If you have to
have a faster answer, I can only suggest rethinking your table design.
Sometimes denormalization of the schema is necessary for performance.
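
Purely as an illustration (column names invented, since I don't know
your schema), that could mean maintaining a pre-joined summary table and
refreshing it on whatever schedule you can live with:

    -- hypothetical pre-joined table, recomputed periodically instead of
    -- redoing the big join at query time
    CREATE TABLE measurement_summary AS
    SELECT m.run_id, m.measured_at, pv.parameter, pv.value
    FROM Measurements m
    JOIN ParameterValues pv ON pv.run_id = m.run_id;

    CREATE INDEX measurement_summary_run_idx ON measurement_summary (run_id);

Queries then hit measurement_summary directly, trading some staleness
and extra storage for not paying the join cost every time.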

regards, tom lane
