Re: Bug in copy

From: me nefcanto <sn(dot)1361(at)gmail(dot)com>
To: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
Cc: Zhang Mingli <zmlpostgres(at)gmail(dot)com>, pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: Bug in copy
Date: 2025-02-09 12:33:57
Message-ID: CAEHBEOAS4X4UipH2or=5qJzYw8J+PLh03Lq7uTSEfz9Gr7031Q@mail.gmail.com
Lists: pgsql-bugs

@David, I looked at pg_bulkload. Amazing performance. But that's a
command-line tool, and I need to insert bulk data from my Node.js app,
via code.
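
For what it's worth, COPY itself can be driven from application code
rather than the command line. A minimal sketch using the pg and
pg-copy-streams npm packages (the items table, its columns, and the CSV
file are hypothetical):

    import { Client } from 'pg';
    import { from as copyFrom } from 'pg-copy-streams';
    import { pipeline } from 'node:stream/promises';
    import { createReadStream } from 'node:fs';

    async function copyCsv(csvPath: string): Promise<void> {
      const client = new Client();  // connection settings come from PG* environment variables
      await client.connect();
      try {
        // client.query(copyFrom(...)) returns a writable stream; pipe the file into it.
        const ingest = client.query(
          copyFrom('COPY items (sku, name, price) FROM STDIN WITH (FORMAT csv)')
        );
        await pipeline(createReadStream(csvPath), ingest);
      } finally {
        await client.end();
      }
    }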

On Sun, Feb 9, 2025 at 4:00 PM me nefcanto <sn(dot)1361(at)gmail(dot)com> wrote:

> @laurenz, if I use `insert into` or `merge`, would I be able to skip
> the records with errors, or would the whole statement fail there too?
> There are lots of ways a record can be rejected: unique indexes, check
> constraints, foreign key constraints, etc. What happens in those cases?
>
> And why not fix the "on_error ignore" option in the first place? Maybe
> that would be the simpler way. I don't know the internals of bulk
> insertion, but if at some point it runs a loop over the rows, it should
> be much simpler to catch errors in that loop.
>
> Regards
> Saeed
>
> On Sun, Feb 9, 2025 at 9:32 AM Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
> wrote:
>
>> On Sat, 2025-02-08 at 09:31 +0330, me nefcanto wrote:
>> > Inserting a million records not as an all-or-nothing operation is a
>> > requirement. What options do we have for that?
>>
>> Use COPY to load the data into a new (temporary?) table.
>> Then use INSERT INTO ... SELECT ... ON CONFLICT ... or MERGE to merge
>> the data from that table to the actual destination.
>>
>> COPY is not a full-fledged ETL tool.
>>
>> Yours,
>> Laurenz Albe
>>
>
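
A minimal sketch of the staging-table approach Laurenz describes above,
run from Node.js with the pg and pg-copy-streams packages; the items
table, its unique key on sku, and the column list are all hypothetical:

    import { Client } from 'pg';
    import { from as copyFrom } from 'pg-copy-streams';
    import { pipeline } from 'node:stream/promises';
    import { createReadStream } from 'node:fs';

    async function loadItems(csvPath: string): Promise<void> {
      const client = new Client();
      await client.connect();
      try {
        await client.query('BEGIN');
        // 1. Stage the raw rows in a temporary table that disappears at commit.
        await client.query(
          'CREATE TEMP TABLE items_staging (LIKE items INCLUDING DEFAULTS) ON COMMIT DROP'
        );
        // 2. COPY is still all-or-nothing, but only into the staging table.
        const ingest = client.query(
          copyFrom('COPY items_staging (sku, name, price) FROM STDIN WITH (FORMAT csv)')
        );
        await pipeline(createReadStream(csvPath), ingest);
        // 3. Merge into the real table, skipping rows that collide on the
        //    (assumed) unique key on sku; MERGE could be used here as well.
        await client.query(
          `INSERT INTO items (sku, name, price)
             SELECT sku, name, price FROM items_staging
             ON CONFLICT (sku) DO NOTHING`
        );
        await client.query('COMMIT');
      } catch (err) {
        await client.query('ROLLBACK');
        throw err;
      } finally {
        await client.end();
      }
    }

Note that ON CONFLICT only covers unique and exclusion constraint
violations; a row that breaks a check or foreign key constraint would
still make the INSERT fail.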
