From: Scott Ribe <scott_ribe(at)elevated-dev(dot)com>
To: Sachin Kumar <sachinkumaras(at)gmail(dot)com>
Cc: Pgsql-admin <pgsql-admin(at)postgresql(dot)org>, krishna(at)thewebconz(dot)com, pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: how to make duplicate finding query faster?
Date: 2020-12-30 13:28:26
Message-ID: 84F2C1B0-3510-410B-8924-0EB7377684A9@elevated-dev.com
Lists: pgsql-admin
> On Dec 30, 2020, at 6:24 AM, Sachin Kumar <sachinkumaras(at)gmail(dot)com> wrote:
>
> Yes, I am checking one by one because my goal is to fail the whole upload if there is any duplicate entry and to inform the user that they have a duplicate entry in the file.
That's not what I said, though. If you want to fail the whole upload, you don't have to check row by row: just attempt the COPY, assuming you have the correct constraints in place. A single duplicate will abort the entire load.
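A minimal sketch of that approach, with hypothetical table and column names (the original thread does not show the schema). With a unique constraint in place, COPY is all-or-nothing: the first duplicate raises an error and no rows are kept.

```sql
-- Hypothetical schema; the card_number PRIMARY KEY supplies the
-- uniqueness constraint that makes the COPY fail on duplicates.
CREATE TABLE cards (
    card_number text PRIMARY KEY,
    holder_name text
);

-- If the file contains a duplicate card_number (or one that already
-- exists in the table), this raises a unique-violation error and the
-- whole load rolls back -- no partial upload to clean up.
COPY cards (card_number, holder_name)
    FROM '/path/to/upload.csv' WITH (FORMAT csv);
```

The error message reports only the first conflicting key, which is enough to reject the file but not to list every duplicate.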
Unless you want to tell the user *which* rows are duplicates, in which case you can try a variant on my prior suggestion: copy into a temp table, then use a join to find the duplicates...
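The temp-table variant might look like the following sketch; names are illustrative, and the staging table deliberately has no constraints so the load always succeeds and the duplicates can be reported afterwards.

```sql
-- Staging table with the same columns as the target but no
-- constraints, so COPY never fails here.
CREATE TEMP TABLE staging (
    card_number text,
    holder_name text
);

COPY staging (card_number, holder_name)
    FROM '/path/to/upload.csv' WITH (FORMAT csv);

-- Duplicates within the uploaded file itself:
SELECT card_number, count(*) AS occurrences
FROM staging
GROUP BY card_number
HAVING count(*) > 1;

-- Rows that collide with data already in the main table:
SELECT s.card_number
FROM staging s
JOIN cards c USING (card_number);

-- If both queries return no rows, promote the batch:
-- INSERT INTO cards SELECT * FROM staging;
```

Both checks run as set operations, so they stay fast even for large files, unlike a per-row existence probe.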
Previous Message: Sachin Kumar, 2020-12-30 13:24:14, "Re: how to make duplicate finding query faster?"
Next Message: John Scalia, 2020-12-30 21:38:31, "Creating a materialized view causing blocks"