From: Peter Eisentraut <peter_e(at)gmx(dot)net>
To: Lee Kindness <lkindness(at)csl(dot)co(dot)uk>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Jim Buttafuoco <jim(at)buttafuoco(dot)net>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Bulkloading using COPY - ignore duplicates?
Date: 2001-12-13 18:20:59
Message-ID: Pine.LNX.4.30.0112131724100.647-100000@peter.localdomain
Lists: pgsql-hackers
Lee Kindness writes:
> Yes, in an ideal world the input to COPY should be clean and
> consistent with defined indexes. However this is only really the case
> when COPY is used for database/table backup and restore. It misses the
> point that a major use of COPY is in speed optimisation on bulk
> inserts...
I think allowing this feature would open up a world of new dangerous
ideas, such as ignoring check constraints or foreign keys, magically
massaging other tables so that the foreign keys are satisfied, or ignoring
default values, or whatever. The next step would then be allowing the
same optimizations in INSERT. I feel COPY should load the data and that's
it. If you don't like the data you have, then you have to fix it first.
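A minimal sketch of that "fix it first" workflow, as one might do it
today: COPY into an unconstrained staging table, de-duplicate there, and
only then insert into the real table. The table "target" and its key
column "id" are hypothetical, purely for illustration:

    -- Hypothetical names: "target" is the real table, "id" its key.
    -- Staging table has the same layout but no indexes or constraints,
    -- so COPY cannot fail on duplicates.
    SELECT * INTO TEMP TABLE target_staging FROM target LIMIT 0;

    COPY target_staging FROM '/tmp/bulk.dat';

    -- Keep one row per key, skipping keys already in the real table.
    -- DISTINCT ON picks an arbitrary row per id unless ORDER BY is added.
    INSERT INTO target
        SELECT DISTINCT ON (id) *
        FROM target_staging
        WHERE id NOT IN (SELECT id FROM target);

    DROP TABLE target_staging;

This keeps COPY itself dumb and fast, and makes the duplicate-handling
policy an explicit SQL step the loader controls.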
--
Peter Eisentraut peter_e(at)gmx(dot)net