| From: | "Carlo" <reg01(at)stonebanks(dot)ca> |
|---|---|
| To: | <pgsql-performance(at)postgresql(dot)org> |
| Subject: | One long transaction or multiple short transactions? |
| Date: | 2015-10-06 03:10:49 |
| Message-ID: | 003d01d0ffe4$9ae8f890$d0bae9b0$@stonebanks.ca |
| Lists: | pgsql-performance |
We have a system which is constantly importing flat file data feeds into
normalized tables in a DB warehouse over 10-20 connections. Each data feed
row results in a single transaction of multiple single row writes to
multiple normalized tables.
The more columns in the feed row, the more write operations there are, and the
longer the transaction runs.
Operators are noticing that splitting a single feed of, say, 100 columns into
two consecutive feeds of 50 columns improves performance dramatically.
I am wondering whether the multi-threaded and very busy import environment
causes non-linear performance degradation for longer transactions. Would the
operators be advised to rewrite the feeds to produce more, smaller
transactions rather than fewer, longer ones?
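To make the pattern concrete, here is a minimal sketch of the two approaches. The schema, table names, and column layout are all hypothetical, and sqlite3 stands in for PostgreSQL purely so the sketch is self-contained and runnable; the transaction-splitting logic is the point, not the driver.

```python
import sqlite3

# Hypothetical normalized warehouse tables (stand-ins for the real schema).
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE dim_a (feed_id INTEGER, col TEXT);
CREATE TABLE dim_b (feed_id INTEGER, col TEXT);
""")

# One hypothetical 100-column feed row.
feed_row = [(f"c{i}", f"v{i}") for i in range(100)]

def import_columns(conn, feed_id, items):
    """One transaction: one single-row insert per column value,
    spread across multiple normalized tables."""
    with conn:  # BEGIN ... COMMIT around the whole batch
        for i, (name, value) in enumerate(items):
            table = "dim_a" if i % 2 == 0 else "dim_b"
            conn.execute(f"INSERT INTO {table} VALUES (?, ?)",
                         (feed_id, f"{name}={value}"))

# Original approach: one long transaction covering all 100 columns.
import_columns(conn, 1, feed_row)

# Operators' variant: two consecutive 50-column feeds,
# i.e. two shorter transactions for the same feed row.
import_columns(conn, 2, feed_row[:50])
import_columns(conn, 2, feed_row[50:])

total = (conn.execute("SELECT COUNT(*) FROM dim_a").fetchone()[0]
         + conn.execute("SELECT COUNT(*) FROM dim_b").fetchone()[0])
print(total)  # 200: both variants write the same 100 rows per feed row
```

Both variants produce identical data; the only difference is how long each transaction stays open, which is exactly the variable the operators changed.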
Carlo