Re: Any way to speed up INSERT INTO

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: aditya desai <admad123(at)gmail(dot)com>
Cc: Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Any way to speed up INSERT INTO
Date: 2022-03-04 18:42:39
Message-ID: 3967140.1646419359@sss.pgh.pa.us
Lists: pgsql-performance

aditya desai <admad123(at)gmail(dot)com> writes:
> One of our service-layer apps is inserting millions of records into a
> table, but one row at a time. COPY is the fastest way to import a file
> into a table, but the application has a requirement to process each row
> and then insert it. Is there any way this INSERT can be tuned by
> adjusting parameters? It is taking almost 10 hours for just 2.2 million
> rows. The table does not have any indexes or triggers.

Using a prepared statement for the INSERT would help a little bit.
What would help more, if you don't expect any insertion failures,
is to group multiple inserts per transaction (ie put BEGIN ... COMMIT
around each batch of 100 or 1000 or so insertions). There's not
going to be any magic bullet that lets you get away without changing
the app, though.
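
For illustration, a minimal sketch in SQL, assuming a hypothetical table
t(id int, val text); the app would issue the equivalent statements
through its driver:

    BEGIN;
    PREPARE ins (int, text) AS
        INSERT INTO t (id, val) VALUES ($1, $2);
    EXECUTE ins(1, 'first row');
    EXECUTE ins(2, 'second row');
    -- ... repeat EXECUTE for each row in the batch (100 to 1000 rows) ...
    COMMIT;
    DEALLOCATE ins;

This pays the commit overhead (the WAL flush) once per batch instead of
once per row.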

It's quite possible that network round trip costs are a big chunk of your
problem, in which case physically grouping multiple rows into each INSERT
command (... or COPY ...) is the only way to fix it. But I'd start with
trying to reduce the transaction commit overhead.
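
For example, again against the hypothetical t(id, val), a single INSERT
command can carry many rows, so each round trip does more work:

    INSERT INTO t (id, val) VALUES
        (1, 'first row'),
        (2, 'second row'),
        (3, 'third row');

COPY t (id, val) FROM STDIN gives the same round-trip savings with even
less per-row overhead, if the app's driver supports it.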

regards, tom lane
