Re: Big data INSERT optimization - ExclusiveLock on extension of the table

From: pinker <pinker(at)onet(dot)eu>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: Big data INSERT optimization - ExclusiveLock on extension of the table
Date: 2016-08-18 22:26:35
Message-ID: 1471559195448-5917136.post@n5.nabble.com
Lists: pgsql-performance

> 1. rename table t01 to t02
OK...
> 2. insert into t02 1M rows in chunks for about 100k
Why not just insert into t01??

Because of CPU utilization; it speeds up when the load is divided.

> 3. from t01 (previously loaded table) insert data through stored procedure
But you renamed t01 so it no longer exists???
> to b01 - this happens parallel in over a dozen sessions
b01?

That's another table - a permanent one.

> 4. truncate t01
Huh??

The data were inserted into permanent storage, so the temporary table can be
truncated and reused.

OK, maybe the process is not so important; let's say the table is loaded,
then the data are fetched and reloaded into another table through a stored
procedure (with its logic), then the table is truncated and the process starts
again. The most important part is that ExclusiveLocks are held for ~1-5 s.
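To make the cycle concrete, here is one plausible reading of the steps above as plain SQL. This is a minimal sketch only: the real column definitions of t01/b01 and the stored procedure's logic are not given in the thread, so the (id, payload) columns and the INSERT ... SELECT standing in for the procedure are assumptions.

```sql
-- Assumed illustrative schema; the actual tables' columns are not shown
-- in the thread. t01 is the staging table, b01 the permanent target.

-- 1. rename so a fresh t01 can be created/loaded while t02 is drained
ALTER TABLE t01 RENAME TO t02;

-- 2. load ~1M rows into t02 in ~100k-row chunks (spreads the CPU load)
INSERT INTO t02 (id, payload)
SELECT g, 'row ' || g
FROM generate_series(1, 100000) AS g;
-- ...repeated once per chunk...

-- 3. in over a dozen parallel sessions, move data into the permanent
--    table b01 (the thread uses a stored procedure; a plain
--    INSERT ... SELECT stands in for it here)
INSERT INTO b01 (id, payload)
SELECT id, payload FROM t02;

-- 4. empty the staging table so it can be reused in the next cycle
TRUNCATE t02;

-- To observe the relation-extension lock contention the thread is about,
-- check pg_locks while the parallel inserts run; waiters on table
-- extension show up with locktype = 'extend':
SELECT locktype, relation::regclass, pid, granted
FROM pg_locks
WHERE locktype = 'extend';
```

Note that with many sessions inserting concurrently, each extension of the heap file takes the relation extension lock briefly, which is why contention grows with the number of parallel writers.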

--
View this message in context: http://postgresql.nabble.com/Big-data-INSERT-optimization-ExclusiveLock-on-extension-of-the-table-tp5916781p5917136.html
Sent from the PostgreSQL - performance mailing list archive at Nabble.com.
