From: Oliver Elphick <olly(at)lfix(dot)co(dot)uk>
To: Arup Rakshit <aruprakshit(at)rocketmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: How to skip duplicate records while copying from CSV to table in Postgresql using "COPY"
Date: 2015-05-24 12:52:47
Message-ID: 1432471967.21861.72.camel@linda
Lists: pgsql-general
On Sun, 2015-05-24 at 16:56 +0630, Arup Rakshit wrote:
> Hi,
>
> I am copying the data from a CSV file to a table using the "COPY"
> command. But one thing I got stuck on is how to skip duplicate records
> while copying from the CSV to the table. Looking at the documentation,
> it seems PostgreSQL doesn't have any built-in tool to handle this with
> the "COPY" command. Searching Google, I found the idea below of using
> a temp table:
>
> http://stackoverflow.com/questions/13947327/to-ignore-duplicate-keys-during-copy-from-in-postgresql
>
> I am also thinking: what if I let the records get inserted, and then
> delete the duplicate records from the table, as this post suggested -
> http://www.postgresql.org/message-id/37013500.DFF0A64A@manhattanproject.com.
>
> Both of these solutions look like doing double work. But I am not sure
> which is the best solution here. Can anybody suggest which approach I
> should adopt? Or if you have any better ideas for this task, please
> share.
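For concreteness, the temp-table approach from that Stack Overflow link
amounts to roughly the following sketch (assumed names only: the target
table "target", its key column "id", the database "mydb" and the file
"input.csv" are placeholders):

    # Load the CSV into a temp staging table, then insert only rows
    # whose key is not already present in the target table.
    psql mydb <<'SQL'
    CREATE TEMP TABLE staging (LIKE target INCLUDING DEFAULTS);
    \copy staging FROM 'input.csv' WITH (FORMAT csv)
    INSERT INTO target
    SELECT DISTINCT ON (id) *        -- also dedupes within the CSV itself
    FROM staging s
    WHERE NOT EXISTS (SELECT 1 FROM target t WHERE t.id = s.id)
    ORDER BY id;
    SQL

The temp table is dropped automatically at the end of the session.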
Assuming you are using Unix, or can install Unix tools, run the input
files through
sort -u
before passing them to COPY.
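For example (again a sketch; "target", "mydb" and "input.csv" are
placeholder names, and the CSV is assumed to have no header line):

    # Drop duplicate lines, then stream the result straight into COPY.
    sort -u input.csv | psql mydb -c "\copy target FROM STDIN WITH (FORMAT csv)"

Note that sort -u only removes rows that are identical in every column;
rows that share a key but differ elsewhere will still reach COPY, and a
header line would be sorted into the data, so strip it first.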
Oliver Elphick