From:       aditya desai <admad123(at)gmail(dot)com>
To:         Bruce Momjian <bruce(at)momjian(dot)us>
Cc:         Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Pgsql Performance <pgsql-performance(at)lists(dot)postgresql(dot)org>
Subject:    Re: Any way to speed up INSERT INTO
Date:       2022-03-05 07:02:59
Message-ID: CAN0SRDHAd-MR6Ss31pxbxgzcVEGGbH1MibYSSDLuK90-9Cka0Q@mail.gmail.com
Lists:      pgsql-performance
Thanks all for your inputs. We will try to implement the inserts in a
single transaction. I feel that is the best approach.
Thanks,
AD.
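As a rough sketch of what we have in mind, something like the following could
batch rows into multi-row INSERT statements (table and column names here are
illustrative only, and a real implementation should use the driver's parameter
binding rather than string formatting):

```python
# Sketch: turn a stream of integer rows into multi-row INSERT statements,
# one statement per batch. Table/column names are made up for illustration;
# production code should bind parameters via the database driver instead
# of formatting values into the SQL text.

def _to_sql(batch):
    values = ", ".join("(%d)" % v for v in batch)
    return "INSERT INTO test (x) VALUES %s;" % values

def multi_row_inserts(rows, batch_size=1000):
    """Yield one INSERT statement per batch of up to batch_size rows."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == batch_size:
            yield _to_sql(batch)
            batch = []
    if batch:  # flush the final, possibly short, batch
        yield _to_sql(batch)

for stmt in multi_row_inserts(range(5), batch_size=2):
    print(stmt)
```

Issuing each generated statement inside one transaction (or a few large ones)
avoids paying per-row commit overhead.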
On Saturday, March 5, 2022, Bruce Momjian <bruce(at)momjian(dot)us> wrote:
> On Fri, Mar 4, 2022 at 01:42:39PM -0500, Tom Lane wrote:
> > aditya desai <admad123(at)gmail(dot)com> writes:
> > > One of the service layer apps is inserting millions of records into a
> > > table, but one row at a time. COPY would be the fastest way to import
> > > a file into a table, but the application has a requirement of
> > > processing each row and inserting it into the table. Is there any way
> > > this INSERT can be tuned by adjusting parameters? It is taking almost
> > > 10 hours for just 2.2 million rows, and the table does not have any
> > > indexes or triggers.
> >
> > Using a prepared statement for the INSERT would help a little bit.
>
> Yeah, I thought about that but it seems it would only minimally help.
>
> > What would help more, if you don't expect any insertion failures,
> > is to group multiple inserts per transaction (ie put BEGIN ... COMMIT
> > around each batch of 100 or 1000 or so insertions). There's not
> > going to be any magic bullet that lets you get away without changing
> > the app, though.
>
> Yeah, he/she could insert via multiple rows too:
>
> CREATE TABLE test (x int);
> INSERT INTO test VALUES (1), (2), (3);
>
> > It's quite possible that network round trip costs are a big chunk of your
> > problem, in which case physically grouping multiple rows into each INSERT
> > command (... or COPY ...) is the only way to fix it. But I'd start with
> > trying to reduce the transaction commit overhead.
>
> Agreed, turning off synchronous_commit for those queries would be
> my first approach.
>
> --
> Bruce Momjian <bruce(at)momjian(dot)us> https://momjian.us
> EDB https://enterprisedb.com
>
> If only the physical world exists, free will is an illusion.
>
>
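For the archives: the BEGIN ... COMMIT batching Tom suggests could be driven
by a helper along these lines. This sketch only builds the SQL text so the
batching is visible; the table name is assumed, and a real application would
issue the statements through its driver with parameter binding:

```python
# Sketch: wrap batches of single-row INSERTs in explicit transactions,
# as suggested upthread, so each COMMIT covers batch_size rows instead
# of one. Builds SQL text only; "test (x)" is an assumed table/column.

def batched_script(rows, batch_size=1000):
    """Return a SQL script committing once per batch_size rows."""
    lines = []
    for i in range(0, len(rows), batch_size):
        lines.append("BEGIN;")
        for v in rows[i:i + batch_size]:
            lines.append("INSERT INTO test (x) VALUES (%d);" % v)
        lines.append("COMMIT;")
    return "\n".join(lines)

print(batched_script([1, 2, 3], batch_size=2))
```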