From: "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com>
To: me nefcanto <sn(dot)1361(at)gmail(dot)com>
Cc: Zhang Mingli <zmlpostgres(at)gmail(dot)com>, "pgsql-bugs(at)lists(dot)postgresql(dot)org" <pgsql-bugs(at)lists(dot)postgresql(dot)org>
Subject: Re: Bug in copy
Date: 2025-02-08 06:11:45
Message-ID: CAKFQuwYUeKVSWgy7JLYP+YsqeVmxZNW2QNROHRM_vZbOtc6PjQ@mail.gmail.com
Lists: pgsql-bugs

On Friday, February 7, 2025, me nefcanto <sn(dot)1361(at)gmail(dot)com> wrote:
> Hi, thank you for the response. If we analyze it semantically, it should
> have been on_type_error or something similar. But what matters is the
> problem at hand. Inserting a million records in a way that is not
> all-or-fail is a requirement. What options do we have for that?
>
In core, nothing really. As you can see, we are just now adding this kind of
thing piece by piece. INSERT has the “no duplicates” case solved but not other
failure modes, while COPY currently handles only malformed data. The name is
general so that future errors to be ignored can be added later without
introducing a separate option for every single one of them.
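For illustration, a minimal sketch of combining the two pieces that exist today (assuming PostgreSQL 17's on_error option for COPY, a staging table shaped like the target, and a unique constraint on the target; the table and file names are made up):

    -- Skip malformed input rows instead of aborting the whole load (PostgreSQL 17+).
    COPY staging_items FROM '/tmp/items.csv'
        WITH (FORMAT csv, ON_ERROR ignore);

    -- Move the staged rows into the real table, skipping duplicate-key rows.
    INSERT INTO items
    SELECT * FROM staging_items
    ON CONFLICT DO NOTHING;

Anything outside those two cases (a failing foreign key or check constraint, for example) still aborts the whole statement, which is the gap being filled piece by piece.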
You might consider searching the internet for solutions to this
long-standing need.
One project that comes to mind, though I’ve never used it myself, is pg_bulkload:
https://github.com/ossc-db/pg_bulkload
In any case, this is now a discussion better suited for the -general
mailing list.
David J.