Re: Bug in copy

From: me nefcanto <sn(dot)1361(at)gmail(dot)com>
To: Zhang Mingli <zmlpostgres(at)gmail(dot)com>
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: Bug in copy
Date: 2025-02-08 06:01:08
Message-ID: CAEHBEOB2LOaDZycmkjcYDG6JJF0_kFX3gc9H+ZrL=cPNF+WnOg@mail.gmail.com
Lists: pgsql-bugs

Hi, thank you for the response. Semantically, the option should then have
been named on_type_error or something similar. But what matters is the
problem at hand: inserting a million records without all-or-nothing
semantics is a requirement. What options do we have for that?
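
[Editor's note: a common workaround, not proposed in this thread, sketched
here assuming the "Parts" table and constraint names from the report, is to
COPY into an unconstrained staging table and then move rows across with
INSERT ... ON CONFLICT DO NOTHING, which skips duplicate-key rows one by
one instead of failing the whole load:]

```sql
-- Sketch, assuming the "Parts" table from the original report.
-- 1. Load into a staging table with no unique constraints; COPY's
--    on_error ignore still skips rows with malformed data.
CREATE TEMP TABLE "PartsStaging" (LIKE "Parts" INCLUDING DEFAULTS);

COPY "PartsStaging" ("Id", "Title")
FROM STDIN WITH (FORMAT csv, DELIMITER ',', ON_ERROR ignore);

-- 2. Move rows into the real table, silently dropping duplicates.
INSERT INTO "Parts" ("Id", "Title")
SELECT "Id", "Title" FROM "PartsStaging"
ON CONFLICT DO NOTHING;
```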

On Sat, Feb 8, 2025 at 9:22 AM Zhang Mingli <zmlpostgres(at)gmail(dot)com> wrote:

> On Feb 8, 2025 at 13:28 +0800, me nefcanto <sn(dot)1361(at)gmail(dot)com>, wrote:
>
> Hello
> I run this command:
> copy "Parts" ("Id","Title") from stdin with (format csv, delimiter ',',
> on_error ignore)
> But I receive this error:
> duplicate key value violates unique constraint "PartsUniqueLocaleTitle"
> This means that the on_error setting is not working. When I try to insert
> a million records, this becomes extremely annoying and counterproductive.
> When we specify on_error ignore, any type of error, including data type
> mismatches, check constraint violations, foreign key violations, and so
> on, should be ignored, and Postgres should move on to the next record
> instead of failing the entire bulk operation.
> Regards,
> Saeed Nemati
>
>
> Hi,
>
> As I understand it, on_error is designed to handle errors during data
> type conversion in PostgreSQL, similar to what we do in Greenplum or
> Cloudberry.
> Since these rows convert successfully, on_error does not catch the
> duplicate-key error.
>
> --
> Zhang Mingli
> HashData
>
