From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Performance issue with postgres9.6
Date: 2017-04-07 17:32:19
Message-ID: 4aa9f7b7-2960-61af-757c-4c726ea332dd@2ndquadrant.com
Lists: pgsql-hackers
On 04/07/2017 06:31 PM, Merlin Moncure wrote:
> On Fri, Apr 7, 2017 at 5:16 AM, Prakash Itnal <prakash074(at)gmail(dot)com> wrote:
>> Hello,
>>
>> We currently use postgres 9.3 in our products. Recently we upgraded to
>> postgres 9.6. But with 9.6 we have seen a drastic reduction in throughput.
>> After analyzing carefully I found that "planner time" in 9.6 is very high.
>> Below are the details:
>>
>> Scenario:
>> 1 Create a table with 100000 rows.
>> 2 Execute simple query: select * from subscriber where s_id = 100;
>> 3 No update/delete/insert; tried vacuum, full vacuum; by default we enable
>> auto-vacuum
>>
>> 9.3: Avg of "Total runtime" : 0.24ms [actual throughput: 650 TPS]
>> 9.6: Avg of Total time: 0.56ms (Avg of "Planning time" : 0.38ms + Avg of
>> "Execution time" : 0.18ms) [actual throughput: 80 TPS]
>
> I think your math is off. Looking at your attachments, planning time
> is 0.056ms, not 0.56ms. This is in no way relevant to performance on
> the order of your measured TPS. How are you measuring TPS?
>
Not sure where you got the 0.056ms from? What I see is this in the 9.3
explains:

    Total runtime: 0.246 ms

and this in those from 9.6:

    Planning time: 0.396 ms
    Execution time: 0.181 ms

That is roughly 0.25ms vs. 0.6ms (0.4+0.2), as reported by Prakash.
Obviously, this is "just" a 2x slowdown, so it does not match the drop
from 650 to 80 tps. Also, 0.25ms would be ~4000 tps, so I guess this was
just an example of a query that slowed down.
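
For the record, the back-of-the-envelope math: at 0.25 ms per query a
single connection tops out at roughly 1000 / 0.25 = 4000 queries per
second, and at 0.6 ms at roughly 1700, so neither figure is anywhere
near the reported 650 or 80 tps. A quick sanity check of the actual
single-connection throughput might look something like this (query.sql
holding just the SELECT, and the database name being a placeholder):

    pgbench -n -f query.sql -c 1 -T 30 yourdb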

Prakash, are you using packages (which ones?), or have you compiled from
source? Can you provide pg_config output from both versions, and also
'select * from pg_settings' (the full config)?
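
Something along these lines should do on each installation (file names
and the database name are just placeholders):

    pg_config > pg_config_96.txt
    psql -d yourdb -c "select * from pg_settings" > pg_settings_96.txt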

It might also be useful to collect profiles, i.e. (1) install debug
symbols, (2) run the query in a loop, and (3) collect a profile of that
one backend using 'perf'.
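
Roughly like this (the debug symbol package name differs between
distributions, e.g. postgresql-9.6-dbg on Debian/Ubuntu or a -debuginfo
package on RHEL; <backend_pid> is the pid of the backend running the
loop, e.g. from 'select pg_backend_pid()' in that session):

    perf record -g -p <backend_pid> -- sleep 60
    perf report
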
I assume you're using the same hardware / machine for the tests?
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services