From: aditya desai <admad123(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject: Re: Any way to speed up INSERT INTO
Date: 2022-03-08 13:06:17
Message-ID: CAN0SRDF+zmPg54v_0JSjUxoCE7toXAPJ77AVgGg-9SKBP2Xbww@mail.gmail.com
Lists: pgsql-performance
Hi Tom,
I added BEGIN and COMMIT around the INSERTs as shown below and executed it from
pgAdmin for 100,000 rows. It ran in just one minute.
BEGIN;
INSERT INTO TABLE VALUES(....);
INSERT INTO TABLE VALUES(....);
...
COMMIT;
However, when I run the same file through psql (as shown below), it still takes
a long time. Am I doing anything wrong? How can I make it run from psql within
a minute, as it does in pgAdmin?
psql -h host -U user -p Port -d database < INSERT_FILE.sql
psql still prints a line for every statement, as below:
INSERT 0 1
INSERT 0 1
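One variation worth trying, assuming the goal is simply to run the whole file in one transaction: psql's `--single-transaction` flag wraps the file in BEGIN/COMMIT for you, and `-q` suppresses the per-statement `INSERT 0 1` chatter (though each statement is still a separate round trip). The host, user, and database values below are placeholders.

```shell
# Run the file inside a single transaction, quietly.
# host/user/database are placeholders; 5432 is the default port.
psql -q --single-transaction -h host -U user -p 5432 -d database -f INSERT_FILE.sql
```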
Regards,
Aditya.
On Sat, Mar 5, 2022 at 12:12 AM Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> aditya desai <admad123(at)gmail(dot)com> writes:
> > One of the service-layer apps is inserting millions of records into a
> > table, but one row at a time. COPY is the fastest way to import a file
> > into a table, but the application has a requirement to process each row
> > and insert it individually. Is there any way this INSERT can be tuned by
> > adjusting parameters? It is taking almost 10 hours for just 2.2 million
> > rows. The table does not have any indexes or triggers.
>
> Using a prepared statement for the INSERT would help a little bit.
> What would help more, if you don't expect any insertion failures,
> is to group multiple inserts per transaction (ie put BEGIN ... COMMIT
> around each batch of 100 or 1000 or so insertions). There's not
> going to be any magic bullet that lets you get away without changing
> the app, though.
>
> It's quite possible that network round trip costs are a big chunk of your
> problem, in which case physically grouping multiple rows into each INSERT
> command (... or COPY ...) is the only way to fix it. But I'd start with
> trying to reduce the transaction commit overhead.
>
> regards, tom lane
>
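Combining both of Tom's suggestions in plain SQL might look like the sketch below: multiple rows per INSERT to cut round trips, and one COMMIT per batch to cut commit overhead. The table and column names here are invented for illustration.

```sql
BEGIN;
-- Many rows in one VALUES list: one round trip instead of three.
INSERT INTO my_table (id, val) VALUES
    (1, 'a'),
    (2, 'b'),
    (3, 'c');
-- ... more multi-row INSERTs, up to a batch of ~1000 rows ...
COMMIT;  -- one commit per batch, not per row
```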