Re: How to copy rows into same table efficiently

From: "David G. Johnston" <david.g.johnston@gmail.com>
To: Ron <ronljohnsonjr@gmail.com>
Cc: "pgsql-general lists.postgresql.org" <pgsql-general@lists.postgresql.org>
Subject: Re: How to copy rows into same table efficiently
Date: 2021-10-26 13:23:55
Message-ID: CAKFQuwY1yFQw=yryUyJbKhaYhFsQrX97rp46SYU0y=4Zvuc5gw@mail.gmail.com
Lists: pgsql-general

On Tue, Oct 26, 2021 at 2:06 AM Ron <ronljohnsonjr@gmail.com> wrote:

> Anyway, for millions of rows, I might use COPY instead of INSERT
> (depending
> on how many millions, how many indices, how large the rows, how fast the
> machine, etc.)
>
>
>
I don't imagine that using COPY TO to write the data out to a file and then
COPY FROM to import it is going to be an improvement over INSERT ... SELECT.
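For reference, the in-database approach being compared is a single
statement; the table and column names below are hypothetical:

    -- duplicate a subset of rows back into the same table
    INSERT INTO my_table (col1, col2)
    SELECT col1, col2
    FROM my_table
    WHERE some_condition;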

Now, if you can perform the COPY TO on a replica and then only run the COPY
FROM on the primary, that might be worth it. Avoiding the I/O for the read
on the primary would be a big win.
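A sketch of that split, again with a hypothetical table and scratch file,
using psql's \copy so the file lives on the client side; the first command
runs against the replica (COPY TO only reads, so a hot standby can serve
it), the second against the primary:

    -- on the replica:
    \copy (SELECT col1, col2 FROM my_table WHERE some_condition) TO '/tmp/rows.csv' WITH (FORMAT csv)

    -- on the primary:
    \copy my_table (col1, col2) FROM '/tmp/rows.csv' WITH (FORMAT csv)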

David J.
