On 07/09/2012 04:48 PM, Rich Shepard wrote:
> Source data has duplicates. I have a file that creates the table then
> INSERTS INTO the table all the rows. When I see errors flash by during the
> 'psql -d <database> -f <file.sql>' I try to scroll back in the terminal to
> see where the duplicate rows are located. Too often they are too far
> back to let me scroll to see them.
>
> There must be a better way of doing this. Can I run psql with the tee
> command to capture errors in a file I can examine? What is the proper/most
> efficient way to identify the duplicates so they can be removed?
>
> TIA,
>
> Rich
>
>
psql -d <database> -f file.sql > file.log 2>&1 would give you a logfile.
The "2>&1" part is what matters: psql writes error messages to stderr,
so redirecting stdout alone would miss them. Afterwards you can grep
the log for ERROR at your leisure.
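The redirection can be sanity-checked without a database by using a
stand-in command that writes to both streams, the way psql does (the
messages and the name file.log here are just illustrative):

```shell
# Stand-in for psql: one normal line to stdout, one error to stderr.
(echo "INSERT 0 1"; echo "ERROR: duplicate key value" >&2) > file.log 2>&1

# Both streams land in the same log; errors can be pulled out afterwards.
grep ERROR file.log
```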
sort -u file.raw > file.uniq might give you clean data, assuming the
duplicates are byte-for-byte identical lines in the raw input file.
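If you also want to see which rows were duplicated before removing
them, uniq -d on sorted input lists each repeated line once. A minimal
sketch, with made-up sample data standing in for file.raw:

```shell
# Hypothetical raw data containing one duplicated row ("a,1").
printf 'a,1\nb,2\na,1\nc,3\n' > file.raw

# Show the duplicated rows so they can be inspected first...
sort file.raw | uniq -d

# ...then write a de-duplicated copy for the INSERTs.
sort -u file.raw > file.uniq
```

Note this treats whole lines as the unit of comparison; if only a key
column must be unique, you would need sort -k / awk on that column
instead.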