From: | "Jim Buttafuoco" <jim(at)buttafuoco(dot)net> |
---|---|
To: | Peter Eisentraut <peter_e(at)gmx(dot)net>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Lee Kindness <lkindness(at)csl(dot)co(dot)uk>, <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: Bulkloading using COPY - ignore duplicates? |
Date: | 2001-10-02 22:59:42 |
Message-ID: | 200110022259.f92MxgV26809@dual.buttafuoco.net |
Lists: | pgsql-hackers |
I have used Oracle SQL*Loader for many years now. It can write rejected,
discarded, and bad records to an output file and keep on going; maybe
this capability should be added to the COPY command.
COPY [ BINARY ] table [ WITH OIDS ]
FROM { 'filename' | stdin }
[ [USING] DELIMITERS 'delimiter' ]
[ WITH NULL AS 'null string' ]
[ DISCARDS 'filename' ]
what do you think???
> Tom Lane writes:
>
> > It occurs to me that skip-the-insert might be a useful option for
> > INSERTs that detect a unique-key conflict, not only for COPY. (Cf.
> > the regular discussions we see on whether to do INSERT first or
> > UPDATE first when the key might already exist.) Maybe a SET variable
> > that applies to all forms of insertion would be appropriate.
>
> What we need is:
>
> 1. Make errors not abort the transaction.
>
> 2. Error codes
>
> Then you can make your client deal with this in whichever way you want,
> at least for single-value inserts.
>
> However, it seems to me that COPY ignoring duplicates can easily be done
> by preprocessing the input file.
>
> --
> Peter Eisentraut peter_e(at)gmx(dot)net http://funkturm.homeip.net/~peter
>
>
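Peter's "preprocess the input file" suggestion could look something like the
following — a minimal sketch, assuming a tab-delimited dump with the unique
key in the first column (the filenames and layout are hypothetical, not from
the thread):

```shell
# Hypothetical sample: a tab-delimited dump where key 'a' appears twice.
printf 'a\t1\na\t2\nb\t3\n' > data.tsv

# Keep only the first row seen for each key (column 1), so a later
# COPY ... FROM won't trip over unique-key violations.
awk -F'\t' '!seen[$1]++' data.tsv > data.dedup.tsv
```

The de-duplicated file can then be loaded with an ordinary COPY; rows dropped
here are simply lost, which is where an Oracle-style discards file would be an
improvement.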