From: Maxim Boguk <maxim(dot)boguk(at)gmail(dot)com>
To: Pawel Veselov <pawel(dot)veselov(at)gmail(dot)com>
Cc: Andy Colson <andy(at)squeakycode(dot)net>, "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Re: Improving performance of merging data between tables
Date: 2014-12-31 03:25:46
Message-ID: CAK-MWwSt8JA67EiGhAzO7vHhjfUd=2Pm8=oKqGkFw9BZq3EjVA@mail.gmail.com
Lists: pgsql-general
On Wed, Dec 31, 2014 at 11:10 AM, Pawel Veselov <pawel(dot)veselov(at)gmail(dot)com>
wrote:
>
>
> [skipped]
>
>>> 2) try pg_stat_statements, setting "pg_stat_statements.track = all". see:
>>> http://www.postgresql.org/docs/9.4/static/pgstatstatements.html
>>>
>>> I have used this to profile some functions, and it worked pretty well.
>>> Mostly I use it on a test box, but once ran it on the live, which was
>>> scary, but worked great.
>>>
>>
>> That looks promising. Turned it on, waiting for when I can restart the
>> server at the next "quiet time".
>>
>
> I have to say this turned out to be a bit of a disappointment for this use
> case. It only measures total time spent in a call, so it also surfaces
> operations that simply waited a long time on some lock. It's good, but it
> would be great if total_time were provided along with wait_time (and maybe
> io_time as well, since I also see operations that just naturally have to
> fetch a lot of data).
>
1) pg_stat_statements provides information about the I/O time of each
statement, but you need track_io_timing enabled for that.
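A minimal sketch of what that looks like (column names as in the 9.4
pg_stat_statements docs; run as a superuser):

    -- enable I/O timing (could also be set in postgresql.conf and reloaded)
    ALTER SYSTEM SET track_io_timing = on;
    SELECT pg_reload_conf();

    -- per-statement totals, including time spent reading/writing blocks
    SELECT query, calls, total_time, blk_read_time, blk_write_time
      FROM pg_stat_statements
     ORDER BY total_time DESC
     LIMIT 20;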
2) About locking: I suggest enabling log_lock_waits and setting
deadlock_timeout to, say, 100ms (just for testing purposes); then any lock
wait longer than 100ms will be logged with some useful additional info.
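A sketch of those two settings done via ALTER SYSTEM (they could equally go
into postgresql.conf); both take effect on reload:

    -- log any lock wait longer than deadlock_timeout (100ms here, testing only)
    ALTER SYSTEM SET log_lock_waits = on;
    ALTER SYSTEM SET deadlock_timeout = '100ms';
    SELECT pg_reload_conf();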
PS: your setup looks pretty complicated and is hard to analyze without seeing
all the involved table structures, the transaction/query flow, and
(especially) the source code of the involved procedures.
PPS: btw, please check the database logs for deadlock messages; your setup
around the "and then call a pgsql function to merge the data from its tables
into the common tables" part could easily be deadlock-prone.
PPPS: and a last suggestion: after you have finished the "write all the
data into its own tables" stage, the application should run ANALYZE on these
own tables (otherwise you could get weird/inefficient plans during the last
stage).
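A sketch of that step, assuming a hypothetical per-application staging table
named app_own_data (substitute your real table names):

    -- refresh planner statistics after the bulk write, before the merge runs
    ANALYZE app_own_data;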
--
Maxim Boguk
Senior Postgresql DBA
http://www.postgresql-consulting.ru/ <http://www.postgresql-consulting.com/>
Phone RU: +7 910 405 4718
Phone AU: +61 45 218 5678
LinkedIn: http://www.linkedin.com/pub/maksym-boguk/80/b99/b1b
Skype: maxim.boguk
Jabber: maxim(dot)boguk(at)gmail(dot)com
МойКруг: http://mboguk.moikrug.ru/
"People problems are solved with people.
If people cannot solve the problem, try technology.
People will then wish they'd listened at the first stage."