| From: | me nefcanto <sn(dot)1361(at)gmail(dot)com> | 
|---|---|
| To: | Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at> | 
| Cc: | Zhang Mingli <zmlpostgres(at)gmail(dot)com>, pgsql-bugs(at)lists(dot)postgresql(dot)org | 
| Subject: | Re: Bug in copy | 
| Date: | 2025-02-09 12:30:11 | 
| Message-ID: | CAEHBEOD7QrxcsAh09qydomb4sV_HoZ4y+_C9-8pudJ8NDha=Tg@mail.gmail.com | 
| Lists: | pgsql-bugs | 
@laurenz if I use `insert into` or `merge`, would I be able to bypass
records with errors, or would it fail there too? There are many ways a
record can be rejected: unique indexes, check constraints, foreign key
constraints, etc. What happens in those cases?
And why not fix `on_error ignore` in the first place? Maybe that would be
a simpler approach. I don't know the internals of bulk insertion, but if
at some point it has a loop, then it would be much simpler to catch
errors inside that loop.
Regards
Saeed
On Sun, Feb 9, 2025 at 9:32 AM Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
wrote:
> On Sat, 2025-02-08 at 09:31 +0330, me nefcanto wrote:
> > Inserting a million records not in an all-or-fail manner is a requirement.
> > What options do we have for that?
>
> Use COPY to load the data into a new (temporary?) table.
> Then use INSERT INTO ... SELECT ... ON CONFLICT ... or MERGE to merge
> the data from that table to the actual destination.
>
> COPY is not a full-fledged ETL tool.
>
> Yours,
> Laurenz Albe
>
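A minimal sketch of that staging-table workflow, using a hypothetical `products` table with a unique constraint on `sku` (all table and column names here are illustrative only), could look like this. Note that `ON CONFLICT DO NOTHING` only skips rows that violate the named unique constraint; check and foreign key violations would still abort the statement:

```sql
-- Load the raw data into an unconstrained staging table first,
-- so COPY does not abort on duplicate or otherwise invalid rows.
CREATE TEMP TABLE staging_products (sku text, name text, price numeric);

COPY staging_products FROM '/tmp/products.csv' WITH (FORMAT csv, HEADER true);

-- Merge into the real table; rows that would violate the unique
-- constraint on sku are silently skipped instead of aborting the load.
INSERT INTO products (sku, name, price)
SELECT sku, name, price
FROM staging_products
ON CONFLICT (sku) DO NOTHING;

-- Alternatively, with MERGE (PostgreSQL 15+):
MERGE INTO products AS p
USING staging_products AS s ON p.sku = s.sku
WHEN NOT MATCHED THEN
    INSERT (sku, name, price) VALUES (s.sku, s.name, s.price);
```

MERGE gives finer control when existing rows also need to be updated rather than merely skipped.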