From: Steve Micallef <stevenm(at)ot(dot)com(dot)au>
To: pgsql-general(at)postgresql(dot)org
Subject: Skipping duplicate records?
Date: 2001-06-06 23:55:35
Message-ID: 20010607094751.S20209-100000@toaster.syd.ot
Lists: pgsql-general
Hi,
I've recently migrated from MySQL to PostgreSQL, and as impressed as I am
with Postgres, I've found one seemingly missing feature a little
bothersome.
'mysqlimport' can skip duplicate records when bulk-importing from
non-binary files. PostgreSQL doesn't seem to have an equivalent, which is
a problem for me: I import extremely large amounts of data into Postgres
using 'copy', and it rejects the whole file if even one record violates
the primary key.
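For example, with a hypothetical table like the one below, a single
duplicate key aborts the entire load and nothing at all gets inserted
(the exact error text may differ between versions):

    -- Hypothetical example table; any table with a unique index
    -- behaves the same way.
    CREATE TABLE events (
        id   integer PRIMARY KEY,
        note text
    );

    -- If /tmp/events.txt contains even one row whose id already
    -- exists, the whole COPY is rejected and no rows are loaded:
    COPY events FROM '/tmp/events.txt';
    -- ERROR:  Cannot insert a duplicate key into unique index events_pkey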
I have managed to get around this by hacking
src/backend/access/nbtree/nbtinsert.c to call elog() with NOTICE instead
of ERROR on a duplicate key, which skips the offending record and lets
the import continue.
Is there a way to get around this without changing the code? If not,
might a future release of Postgres offer this as an option?
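For the record, the closest I've come without patching the server is to
COPY into a constraint-free staging table and then filter out the
duplicates in SQL, roughly like this (a sketch only; the table and
column names are placeholders matching the example above):

    -- Stage the raw file in a temporary table with no constraints.
    CREATE TEMP TABLE events_load (id integer, note text);
    COPY events_load FROM '/tmp/events.txt';

    -- Copy across only the rows whose key is not already present,
    -- collapsing duplicates within the file itself as well.
    INSERT INTO events (id, note)
    SELECT DISTINCT ON (id) id, note
    FROM events_load l
    WHERE NOT EXISTS (SELECT 1 FROM events e WHERE e.id = l.id);

This works, but the extra pass over the data is costly at my volumes, so
a native skip-duplicates option for 'copy' would still be very welcome.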
Thanks in advance,
Steve Micallef