From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Mark Fenbers <Mark(dot)Fenbers(at)noaa(dot)gov>
Cc: pgsql-sql(at)postgresql(dot)org
Subject: Re: import ignoring duplicates
Date: 2010-05-17 06:04:17
Message-ID: AANLkTila7aZ9_VwSnnH47dBZ47MqsZmOY9OtiKSZviwS@mail.gmail.com
Lists: pgsql-sql
On Sun, May 16, 2010 at 12:38 PM, Mark Fenbers <Mark(dot)Fenbers(at)noaa(dot)gov> wrote:
> I am using psql's \copy command to add records to a database from a file.
> The file has over 100,000 lines. Occasionally, there is a duplicate, and
> the import ceases and an internal rollback is performed. In other words, no
> data is imported even if the first error occurs near the end of the file.
>
> I am looking for an option/switch to tell psql (or the \copy command) to
> skip over any duplicate key constraint violations and continue to load any
> data that doesn't violate a duplicate key constraint. Is there such an
> option?
Sounds like you want this:
http://pgfoundry.org/projects/pgloader/
Note that COPY is optimized to work in a single transaction. Breaking
those semantics WILL result in a slower load time, and there's not much
you can do about that.
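
Another workaround that stays within plain psql is to \copy into an
unconstrained staging table and then insert only the rows that don't
collide with an existing key. A sketch (here `target` and its key
column `id` are stand-ins for your actual table and primary key):

```sql
-- Staging table with the same columns as the target, but no constraints,
-- so \copy can load the whole file without aborting on duplicates.
CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS);

\copy staging FROM 'data.txt'

-- Insert only rows whose key is not already in the target;
-- DISTINCT ON also drops duplicates within the file itself.
INSERT INTO target
SELECT DISTINCT ON (id) *
FROM staging s
WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id);
```

The staging load still gets COPY's single-transaction speed; only the
final INSERT pays the cost of the duplicate checks.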