From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Charles Gomes <charlesrg(at)outlook(dot)com>
Cc: "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Performance on Bulk Insert to Partitioned Table
Date: 2012-12-20 20:02:34
Message-ID: 20121220200234.GJ12354@tamriel.snowman.net
Lists: pgsql-performance
Charles,
* Charles Gomes (charlesrg(at)outlook(dot)com) wrote:
> I’m doing 1.2 Billion inserts into a table partitioned in
> 15.
Do you end up having multiple threads writing to the same underlying
tables? If so, I've seen that problem before. Look at pg_locks while
things are running and see if there are 'extend' locks that aren't being
immediately granted.
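For example, a quick query along these lines (just a sketch, but the
locktype = 'extend' filter is the important part) will show who's waiting
on relation extension while the load runs:

    -- Sketch: list relation extension locks currently held or awaited.
    -- Rows with granted = false are writers blocked on extending a relation.
    SELECT l.pid, l.relation::regclass AS rel, l.mode, l.granted
    FROM pg_locks l
    WHERE l.locktype = 'extend'
    ORDER BY l.granted, l.relation;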
Basically, there's a lock that PG has on a per-relation basis to extend
the relation (by a mere 8K..) which will block other writers. If
there's a lot of contention around that lock, you'll get poor
performance and it'll be faster to have independent threads writing
directly to the underlying tables. I doubt rewriting the trigger in C
will help if the problem is the extend lock.
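To be concrete about writing directly to the underlying tables, the idea
is something like this (hypothetical table and column names, with one
loader thread per child):

    -- Each thread targets its own child, bypassing the parent table and
    -- its routing trigger, so threads never contend to extend the same file.
    INSERT INTO events_2012_07 (id, payload, created_at)
    VALUES (42, 'example row', now());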
If you do get this working well, I'd love to hear what you did to
accomplish that. Note also that you can get bottle-necked on the WAL
data, unless you've taken steps to avoid writing WAL during the load.
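One well-known way to do that (a sketch; it assumes wal_level = minimal
with no WAL archiving, and the table and file path are hypothetical) is
to create or truncate the table in the same transaction as the COPY:

    -- When the table is created or truncated in the same transaction,
    -- PG can skip WAL for the COPY and just fsync the table at commit.
    BEGIN;
    TRUNCATE events_2012_07;
    COPY events_2012_07 FROM '/path/to/chunk.csv' WITH (FORMAT csv);
    COMMIT;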
Thanks,
Stephen