From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Greg Smith <greg(at)2ndquadrant(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Slow nested loop execution on larger server
Date: 2011-12-16 21:16:21
Message-ID: CAOR=d=0k1KyuJqAWkouytUMB1-VAEy21z7PXgp=USJsFbNdVKg@mail.gmail.com
Lists: pgsql-performance
On Fri, Dec 16, 2011 at 11:27 AM, Greg Smith <greg(at)2ndquadrant(dot)com> wrote:
> Forwarding this on from someone who may pop up on the list too. This is a
> new server that's running a query slowly. The rate at which new semops
> happen may be related. OS is standard RHEL 5.5, kernel 2.6.18-194.32.1.el5.
> One possibly interesting part is that this is leading edge hardware: four
> 8-core processors with HyperThreading, for 64 threads total:
>
> processor : 63
> model name : Intel(R) Xeon(R) CPU X7550 @ 2.00GHz
> physical id : 3
> siblings : 16
> cpu cores : 8
>
> My only vague memory of issues around systems like this that comes to mind
> here is Scott Marlowe talking about some kernel tweaks he had to do on his
> 48-core AMD boxes to get them to work right. That seems like a sketchy
> association, and I don't seem to have the details handy.
>
> I do have the full query text and associated plans, but it's mostly noise.
> Here's the fun part:
>
> -> Nested Loop (cost=581.52..3930.81 rows=2 width=51) (actual
> time=25.681..17562.198 rows=21574 loops=1)
> -> Merge Join (cost=581.52..783.07 rows=2 width=37) (actual
> time=25.650..98.268 rows=21574 loops=1)
> Merge Cond: (rim.instrumentid = ri.instrumentid)
> -> Index Scan using reportinstruments_part_125_pkey on
> reportinstruments_part_125 rim (cost=0.00..199.83 rows=110 width=8) (actual
> time=0.033..27.180 rows=20555 loops=1)
> Index Cond: (reportid = 105668)
> -> Sort (cost=581.52..582.31 rows=316 width=33) (actual
> time=25.608..34.931 rows=21574 loops=1)
> Sort Key: ri.instrumentid
> Sort Method: quicksort Memory: 2454kB
> -> Index Scan using riskbreakdown_part_51_pkey on
> riskbreakdown_part_51 ri (cost=0.00..568.40 rows=316 width=33) (actual
> time=0.019..11.599 rows=21574 loops=1)
> Index Cond: (reportid = 105668)
> -> Index Scan using energymarketlist_pkey on energymarketlist ip
> (cost=0.00..1573.86 rows=1 width=18) (actual time=0.408..0.808 rows=1
> loops=21574)
> Index Cond: ((reportid = 105668) AND (instrumentid =
> ri.instrumentid))
> ...
> Total runtime: 21250.377 ms
>
> The stats are terrible and the estimates are off by many orders of magnitude.
> But that's not the point. It expected 2 rows and 21574 came out; fine.
> Why is it taking this server 17 seconds to process 21K rows of tiny width
> through a Nested Loop? Is it bouncing to a new CPU every time the thing
> processes a row or something? I'm finding it hard to imagine how this could
> be a PostgreSQL problem; it seems more like a kernel bug aggravated on this
> system. I wonder if we could produce a standalone test case with a similar
> plan from what the bad query looks like, and ask the person with this
> strange system to try it. See if it's possible to make it misbehave in the
> same way with something simpler others can try, too.
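Something along these lines might be enough to get a similar plan shape for a
standalone test. This is just a sketch with made-up table names (ri and ip),
not the real schema, and it leans on disabling hash and merge joins to push
the planner into a nested loop over ~21k rows:

-- Two throwaway tables joined on an indexed composite key.
CREATE TABLE ri (reportid int, instrumentid int,
                 PRIMARY KEY (reportid, instrumentid));
CREATE TABLE ip (reportid int, instrumentid int, payload text,
                 PRIMARY KEY (reportid, instrumentid));

-- Roughly the row count from the bad plan, all under one reportid.
INSERT INTO ri SELECT 105668, g FROM generate_series(1, 21574) g;
INSERT INTO ip SELECT 105668, g, 'x' FROM generate_series(1, 21574) g;
ANALYZE ri;
ANALYZE ip;

-- Nudge the planner toward a nested loop with an inner index scan.
SET enable_hashjoin = off;
SET enable_mergejoin = off;

EXPLAIN ANALYZE
SELECT ri.instrumentid, ip.payload
  FROM ri
  JOIN ip ON ip.reportid = ri.reportid
         AND ip.instrumentid = ri.instrumentid
 WHERE ri.reportid = 105668;

If that runs in well under a second elsewhere but takes many seconds on the
new box, it would point at the machine rather than at the query.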
What's the difference in speed between running the query naked and
running it under explain analyze? It's not uncommon to run into a
situation where the instrumentation overhead of explain analyze costs
significantly more than anything in the query itself, and for some
nested loops this gets especially bad. If the naked query runs in, say,
1 second and the explain analyze run takes 17 seconds, then the explain
analyze timekeeping is costing you 16/17ths of the total just to do the
accounting. In that case the real query is roughly 17 times faster than
the explain analyzed one, and the rows are actually being processed at
better than 20k/second.
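The quickest way to see which case you're in is to time both forms from
psql. Using the throwaway tables from the sketch above, just so there's
something concrete to run (substitute the real query in practice):

\timing

-- Naked run: count(*) keeps 21k rows from being shipped to the client,
-- so the elapsed time psql reports is mostly the join itself.
SELECT count(*)
  FROM ri
  JOIN ip ON ip.reportid = ri.reportid
         AND ip.instrumentid = ri.instrumentid
 WHERE ri.reportid = 105668;

-- Instrumented run: same join, now paying the per-row timing overhead.
EXPLAIN ANALYZE
SELECT count(*)
  FROM ri
  JOIN ip ON ip.reportid = ri.reportid
         AND ip.instrumentid = ri.instrumentid
 WHERE ri.reportid = 105668;

If the second number dwarfs the first, most of the wall-clock time is going
into the timer calls the instrumentation makes for every row at every plan
node, which can be brutally slow on some hardware/kernel combinations.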