From: | Lee Kindness <lkindness(at)csl(dot)co(dot)uk> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Cc: | Lee Kindness <lkindness(at)csl(dot)co(dot)uk> |
Subject: | Bulkloading using COPY - ignore duplicates? |
Date: | 2001-12-11 16:05:34 |
Message-ID: | 15382.11982.324375.978316@elsick.csl.co.uk |
Lists: | pgsql-hackers |
Gents,
I started quite a long thread about this back in September. To
summarise, I was proposing that COPY FROM would not abort the
transaction when it encountered data which would cause a uniqueness
violation on the table's index(es).
Generally I think this was seen as a 'Good Thing'(TM) for a number of
reasons:
1. Performance enhancements when doing bulk inserts - pre- or
post-processing the data to remove duplicates is very time
consuming. Likewise, the best tool should always be used for the job
at hand, and for searching/removing things that's a database (see the
sketch after this list).
2. Feature parity with other database systems. For example, Oracle's
SQL*Loader has a feature to skip duplicates and instead move
them to another file for later investigation.
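For illustration, here is a minimal sketch of the sort of workaround
needed today: load into a scratch table, then filter the duplicates
out by hand before inserting. The table and column names (survey,
shot_id) are made up, and the file path is just an example:

  BEGIN;

  -- Scratch table with the same layout but no unique index.
  CREATE TEMP TABLE survey_staging AS SELECT * FROM survey LIMIT 0;
  COPY survey_staging FROM '/tmp/survey.dat';

  -- Keep one row per key and skip anything already in the target table.
  INSERT INTO survey
  SELECT DISTINCT ON (shot_id) *
  FROM survey_staging s
  WHERE NOT EXISTS (SELECT 1 FROM survey t WHERE t.shot_id = s.shot_id);

  COMMIT;

With large input files this extra pass over the data (plus the second
write) is exactly the cost I'd like to avoid.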
Naturally the default behaviour would be the current one of assuming
valid data. The duplicate check would also add nothing to the current
COPY FROM code path, so the default case would take no longer than it
does now.
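To make the proposal concrete, a purely hypothetical spelling of the
option might look like the following - this is not syntax PostgreSQL
currently accepts, and the exact keywords would be up for discussion:

  COPY survey FROM '/tmp/survey.dat' WITH IGNORE DUPLICATES;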
I attempted to add this functionality to PostgreSQL myself but got as
far as an updated parser and a COPY FROM which resulted in a database
recovery!
So (here's the question finally) is it worthwhile adding this
enhancement to the TODO list?
Thanks, Lee.
--
Lee Kindness, Senior Software Engineer, Concept Systems Limited.
http://services.csl.co.uk/ http://www.csl.co.uk/ +44 131 5575595