From: | Christof Glaser <gcg(at)gl(dot)aser(dot)de> |
---|---|
To: | Matthew Hagerty <matthew(at)brwholesale(dot)com>, pgsql-sql(at)postgresql(dot)org |
Subject: | Re: Removing duplicates |
Date: | 2002-02-26 17:26:34 |
Message-ID: | 20020226171944.7B8FF636A3@mail.gl.aser.de |
Lists: | pgsql-sql |
Matthew,
On Tuesday, 26 February 2002 16:10, Matthew Hagerty wrote:
> Greetings,
>
> I have a customer database (name, address1, address2, city, state, zip)
> and I need a query (or two) that will give me a mailing list with the
> least amount of duplicates possible. I know that precise matching is
> not possible, i.e. "P.O. Box 123" will never match "PO Box 123" without
> some data massaging, but if I can isolate even 50% of any duplicates,
> that would help greatly.
>
> Also, any suggestions on which parameters to check the duplicates for?
> My first thoughts were to make sure there were no two addresses the
> same in the same zip code. Any insight (or examples) would be greatly
> appreciated.
I wrote a (Windows) C++ program some time ago to eliminate duplicate
address records in a plain text file. Basically it works like this:
1. Choose the fields to check for duplicates. Let's call these primary
fields. I used Name, State, ZIP, City, Address1, Address2
(in that order).
2. Choose the fields used to decide which entry should be kept when
duplicates are found. These are the secondary fields (phone, contact,
email, ...).
3. Normalize the primary fields: remove any whitespace, punctuation,
special chars, make lowercase, expand "St." to "street" etc.
That makes "P.O. Box 123" equal to "PO Box 123", as both are
normalized to "pobox123".
4. Sort on the normalized primary fields. Now duplicates are adjacent.
5. Iterate through the sorted data:
- Compare (normalized) primary fields with the next record
- If equal, we found a duplicate and start another iteration:
. Check the secondary fields and decide which record to keep.
(In my case, those with a phone # would be kept.)
. Compare primary fields again with the next record, checking
for more duplicates
Translating the above into SQL is left as an exercise... :-)
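For illustration only, here is a rough Python sketch of the same procedure
(the original is a C++ program; the field names, the abbreviation handling,
and the sample records below are all hypothetical):

```python
import re

def normalize(value):
    """Step 3: lowercase, expand 'St.' to 'street', then strip
    whitespace, punctuation, and special characters."""
    v = value.lower()
    v = v.replace("st.", "street")       # expand abbreviation (illustrative)
    return re.sub(r"[^a-z0-9]", "", v)   # drop everything but letters/digits

def dedupe(records, primary, secondary):
    """Steps 4-5: sort on normalized primary fields so duplicates are
    adjacent, then keep one record per key, preferring the record whose
    secondary field (e.g. phone) is non-empty."""
    key = lambda r: tuple(normalize(r[f]) for f in primary)
    kept = []
    for r in sorted(records, key=key):
        if kept and key(kept[-1]) == key(r):
            # duplicate of the previous record: keep the "better" one
            if not kept[-1][secondary] and r[secondary]:
                kept[-1] = r
        else:
            kept.append(r)
    return kept

# "P.O. Box 123" and "PO Box 123" both normalize to "pobox123",
# so these two records count as duplicates; the one with a phone wins.
recs = [
    {"name": "Acme", "zip": "12345", "addr": "P.O. Box 123", "phone": ""},
    {"name": "Acme", "zip": "12345", "addr": "PO Box 123", "phone": "555-1234"},
]
print(dedupe(recs, ("name", "zip", "addr"), "phone"))
```

The same idea carries over to SQL: group rows on the normalized primary
columns and pick one row per group by ordering on the secondary columns.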
Hope that helps to get you started.
Christof.
--
gl.aser . software engineering . internet service
http://gl.aser.de/ . Planckstraße 7 . D-39104 Magdeburg