From: Pawel Veselov <pawel(dot)veselov(at)gmail(dot)com>
To: Andy Colson <andy(at)squeakycode(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Improving performance of merging data between tables
Date: 2014-12-31 00:10:20
Message-ID: CAMnJ+BdCaH8_2Cpyq8bxKdVMFim-LB7M76ROK+kzsXcbknpkQQ@mail.gmail.com
Lists: pgsql-general
On Mon, Dec 29, 2014 at 9:29 PM, Pawel Veselov <pawel(dot)veselov(at)gmail(dot)com>
wrote:
[skipped]
>>> 1) How do I find out what exactly is consuming the CPU in a PL/pgSQL
>>> function? All I see is that the calls to merge_all() function take long
>>> time, and the CPU is high while this is going on.
>>>
[skipped]
>> 2) try pg_stat_statements, setting "pg_stat_statements.track = all". see:
>> http://www.postgresql.org/docs/9.4/static/pgstatstatements.html
>>
>> I have used this to profile some functions, and it worked pretty well.
>> Mostly I use it on a test box, but once ran it on the live, which was
>> scary, but worked great.
>>
>
> That looks promising. Turned it on, waiting for when I can restart the
> server at the next "quiet time".
>
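For anyone finding this thread later: enabling it needs a restart, since the
module must be preloaded, which is why I had to wait for a quiet time. A
minimal sketch of the setup (assuming an otherwise stock postgresql.conf):

    # postgresql.conf -- the module must be loaded at server start
    shared_preload_libraries = 'pg_stat_statements'
    # track statements executed inside functions too, not just top-level ones
    pg_stat_statements.track = all

and then, once per database:

    CREATE EXTENSION pg_stat_statements;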
I have to say this turned out to be a bit of a disappointment for this use
case. It only measures the total time spent in a call, so it surfaces
operations that simply waited a long time on some lock. It's useful, but it
would be great if total_time came with a wait_time alongside it (and maybe
an io_time as well, since I also see operations that just naturally have to
fetch a lot of data).
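To illustrate, a sketch of the kind of query I mean, using the 9.4
pg_stat_statements columns; total_time below lumps execution, lock waits
and I/O into a single number, which is exactly the limitation:

    -- top statements by total elapsed time, nested calls included
    SELECT left(query, 60) AS query,
           calls,
           total_time,               -- milliseconds, waits included
           total_time / calls AS avg_ms
    FROM pg_stat_statements
    ORDER BY total_time DESC
    LIMIT 20;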
[skipped]