| From: | The Hermit Hacker <scrappy(at)hub(dot)org> |
|---|---|
| To: | Clark Evans <clark(dot)evans(at)manhattanproject(dot)com> |
| Cc: | pgsql-hackers(at)postgreSQL(dot)org |
| Subject: | Re: [HACKERS] Should the following work...? |
| Date: | 1999-03-30 19:08:33 |
| Message-ID: | Pine.BSF.4.05.9903301508120.55565-100000@thelab.hub.org |
| Lists: | pgsql-hackers |
Ya, that's what I forgot too :( It's not something I use every day, so I
never think about it :)
On Tue, 30 Mar 1999, Clark Evans wrote:
> The Hermit Hacker wrote:
> > To find duplicate records, or, at least,
> > data in a particular field, he suggests
> > just doing:
> >
> > SELECT id,count(1)
> > FROM clients
> > GROUP BY id
> > HAVING count(1) > 1;
> >
> > A nice, clean, simple solution :)
>
> Ya. That's pretty. For some
> reason I always forget about the
> 'HAVING' clause, and end up using
> a double WHERE clause.
>
> :) Clark
>
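For anyone wanting to try the query from the thread, here is a minimal, self-contained sketch using Python's sqlite3 module with a hypothetical `clients` table (the table contents are invented for illustration; only the HAVING query itself comes from the thread):

```python
import sqlite3

# Hypothetical in-memory "clients" table with some duplicate ids.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE clients (id INTEGER, name TEXT)")
conn.executemany(
    "INSERT INTO clients VALUES (?, ?)",
    [(1, "a"), (2, "b"), (2, "c"), (3, "d"), (3, "e"), (3, "f")],
)

# The GROUP BY ... HAVING approach from the thread:
# one pass over the table, returning each id that appears more than once.
dups = conn.execute(
    """
    SELECT id, count(1)
    FROM clients
    GROUP BY id
    HAVING count(1) > 1
    """
).fetchall()

print(dups)  # ids 2 and 3 each appear more than once
```

The point of HAVING is that it filters *after* aggregation, which a plain WHERE clause cannot do; that is why the "double where clause" workaround mentioned above ends up being clumsier.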
Marc G. Fournier ICQ#7615664 IRC Nick: Scrappy
Systems Administrator @ hub.org
primary: scrappy(at)hub(dot)org secondary: scrappy(at){freebsd|postgresql}.org
| From | Date | Subject | |
|---|---|---|---|
| Next Message | Michael Davis | 1999-03-30 21:23:59 | Interesting failure when selecting aggregates |
| Previous Message | Clark Evans | 1999-03-30 18:47:58 | Re: [HACKERS] Should the following work...? |