Re: Inserting millions of record in a partitioned Table

From: Rob Sargent <robjsargent(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Inserting millions of record in a partitioned Table
Date: 2017-09-20 20:54:32
Message-ID: 4c8e2761-1c8f-56c4-a2bc-de33eef23206@gmail.com
Lists: pgsql-general


On 09/20/2017 02:46 PM, Vick Khera wrote:
> On Wed, Sep 20, 2017 at 10:10 AM, Job <Job(at)colliniconsulting(dot)it> wrote:
>
> We noticed that if we import directly into the global table it is
> really, really slow.
> Importing directly in the single partition is faster.
>
>
> Do you have a rule or trigger on the main table to redirect to the
> partitions? You should expect that to take some extra time *per row*.
> Your best bet is to just import into the proper partition and make
> sure your application produces batch files that align with your
> partitions.
>
> Either that or write a program that reads the data, determines the
> partition, and then inserts directly to it. It might be faster.
>
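A rough sketch of that kind of loader, for illustration only (assuming psycopg2 and monthly partitions named like mytable_2017_09 -- the table names, columns, file and DSN below are all made up):

import csv
from collections import defaultdict
from datetime import datetime

import psycopg2
from psycopg2 import sql
from psycopg2.extras import execute_values

def partition_for(created_at):
    # e.g. "2017-09-20 10:10:00" -> "mytable_2017_09"
    ts = datetime.strptime(created_at, "%Y-%m-%d %H:%M:%S")
    return "mytable_{:%Y_%m}".format(ts)

conn = psycopg2.connect("dbname=mydb")  # hypothetical DSN
with conn, conn.cursor() as cur, open("records.csv") as f:
    # group the incoming rows by target partition ...
    batches = defaultdict(list)
    for row in csv.reader(f):
        batches[partition_for(row[1])].append(row)
    # ... then batch-insert straight into each partition,
    # bypassing the parent table's trigger/rule
    for table, rows in batches.items():
        query = sql.SQL(
            "INSERT INTO {} (id, created_at, payload) VALUES %s"
        ).format(sql.Identifier(table)).as_string(cur)
        execute_values(cur, query, rows, page_size=1000)
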
I wonder if this is a case of hurry up and wait. A script that loads,
say, 10 records at a time and, assuming that takes much less than a
second, runs once per second (sleeping for 1000 ms minus the runtime)
would by now have loaded about a million records since the question
was asked.
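
For what it's worth, a minimal sketch of that pacing loop (again assuming psycopg2; the table and columns are hypothetical and the batching is left to the caller):

import time

def trickle_load(conn, batches, interval=1.0):
    # Insert one batch per `interval` seconds, sleeping out
    # whatever is left of the second after each batch.
    total = 0
    for batch in batches:
        start = time.monotonic()
        with conn.cursor() as cur:
            cur.executemany(
                "INSERT INTO mytable (id, created_at, payload) "
                "VALUES (%s, %s, %s)",
                batch,
            )
        conn.commit()
        total += len(batch)
        elapsed = time.monotonic() - start
        if elapsed < interval:
            time.sleep(interval - elapsed)  # the "1000 - runtime ms" wait
    return total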
