From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Josh Berkus <josh(at)agliodbs(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Planner performance extremely affected by an hanging transaction (20-30 times)?
Date: 2013-09-25 07:06:06
Message-ID: CAMkU=1ys-fB2Nczt=LT1pVHzK3Jv4Jb-67N29Gzyx3nraweoZQ@mail.gmail.com
Lists: pgsql-performance
On Tue, Sep 24, 2013 at 10:43 AM, Josh Berkus <josh(at)agliodbs(dot)com> wrote:
> On 09/24/2013 08:01 AM, jesper(at)krogh(dot)cc wrote:
> > This stuff is a 9.2 feature, right? What was the original problem to be
> > addressed?
>
> Earlier, actually. 9.1? 9.0?
>
> The problem addressed was that, for tables with a "progressive" value
> like a sequence or a timestamp, the planner tended to estimate 1 row any
> time the user queried the 10,000 most recent rows due to the stats being
> out-of-date. This resulted in some colossally bad query plans for a
> very common situation.
>
> So there's no question that the current behavior is an improvement,
> since it affects *only* users who have left an idle transaction open for
> long periods of time, something you're not supposed to do anyway.
Some transactions just take a long time to complete their work. If the
first thing such a transaction does is insert these poisoned values, and it
then goes on to do other intensive work on other tables, it can do some
serious damage without ever being idle.
> Not
> that we shouldn't fix it (and backport the fix), but we don't want to
> regress to the prior planner behavior.
>
> However, a solution is not readily obvious:
>
The mergejoinscansel code is almost pathologically designed to exercise
this case (which seems to be what is happening to the original poster),
because it systematically probes the highest and lowest values from one
table against the other. If the two tables have the same range, that means
it will always be testing the upper limit. Perhaps mergejoinscansel could
pass down a flag to prevent the look-up from happening. My gut feeling is
that mergejoin would not be very sensitive to the progressive-value issue,
but I can't really back that up. On the other hand, if we could just make
getting the actual value faster, then everyone would be better off.
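
To make that concrete, here is a rough sketch of what passing such a flag
down to the endpoint probe might look like. get_actual_variable_range() is
the real selfuncs.c function that walks the index for the live extreme
values; the allow_index_probe flag and the wrapper itself are invented here
for illustration only, not actual PostgreSQL code:

/*
 * Illustrative sketch only (imagined as living in selfuncs.c).
 * Callers such as mergejoinscansel() could opt out of probing the
 * index for the live endpoint via a hypothetical allow_index_probe
 * flag, settling for the possibly-stale histogram bounds instead.
 */
static bool
get_variable_range_sketch(PlannerInfo *root, VariableStatData *vardata,
                          Oid sortop, bool allow_index_probe,
                          Datum *min, Datum *max)
{
    if (allow_index_probe &&
        get_actual_variable_range(root, vardata, sortop, min, max))
    {
        /* Walked the btree to the actual extreme values.  This is the
         * step that can repeatedly re-test index entries left behind
         * by a long-running, still-uncommitted transaction. */
        return true;
    }

    /* Fall back to the histogram endpoints gathered by the last
     * ANALYZE: cheap, but stale for "progressive" columns. */
    return get_variable_range(root, vardata, sortop, min, max);
}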
>
> On 09/20/2013 03:01 PM, Jeff Janes wrote:
> > 3) Even worse, asking if a given transaction has finished yet can be a
> > serious point of system-wide contention, because it takes the
> > ProcArrayLock, once per row which needs to be checked. So you have 20
> > processes all fighting over the ProcArrayLock, each doing so 1000 times
> > per query.
>
> Why do we need a procarraylock for this? Seems like the solution would
> be not to take a lock at all; the information on transaction commit is
> in the clog, after all.
>
My understanding is that you are not allowed to check the clog until after
you have verified that the transaction is no longer in progress; otherwise
you open up race conditions. (xact.c records the commit in the clog before
clearing the xid from the proc array, so testing in the other order could
mistake a just-committed transaction for one that crashed.)
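
A minimal sketch of that required ordering, using the real
TransactionIdIsInProgress() and TransactionIdDidCommit() functions (the
wrapper function itself is invented for illustration):

#include "postgres.h"
#include "access/transam.h"     /* TransactionIdDidCommit() */
#include "storage/procarray.h"  /* TransactionIdIsInProgress() */

/*
 * Illustrative only, not actual PostgreSQL source.  The proc array must
 * be consulted before the clog: if we read the clog first and see "not
 * committed", the transaction could commit and vanish from the proc
 * array before our second check, and we would wrongly conclude it
 * crashed.
 */
static bool
xid_known_committed(TransactionId xid)
{
    if (TransactionIdIsInProgress(xid))   /* takes ProcArrayLock */
        return false;                     /* still running */
    return TransactionIdDidCommit(xid);   /* now safe to consult clog */
}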
Cheers,
Jeff