From: Andreas Kretschmer <akretschmer(at)despammed(dot)com>
To: pgsql-sql(at)postgresql(dot)org
Subject: Re: [despammed] Re: Duplicated records
Date: 2005-05-25 17:57:02
Message-ID: 20050525175702.GA3389@webserv.wug-glas.de
Lists: pgsql-sql
On 2005-05-25 at 13:58:07 -0300, lucas(at)presserv(dot)org wrote:
> Hi.
> Thanks for the article...
> But I have read it, and the query runs very slowly...
> My table has approx. 180,000 correct records, but the entire table
> contains approx. 360,000 rows (with the duplicates)...
How often is this necessary?
> Is there a way to delete those duplicated records faster? Remember,
> the table has approx. 360,000 records...
I don't know, but I think you should prevent duplicate records in the
future, and do the cleanup (delete the duplicates) now; see the sketch
below.
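One approach that tends to be fast at this size is to rebuild the table
without the duplicates instead of deleting them row by row. A minimal
sketch, assuming the duplicated rows are exact copies of each other; the
table name "mytable" is only a placeholder:

,----[ hypothetical sketch: rebuild the table without duplicates ]
| BEGIN;
| -- keep exactly one copy of each distinct row
| CREATE TABLE mytable_dedup AS SELECT DISTINCT * FROM mytable;
| DROP TABLE mytable;
| ALTER TABLE mytable_dedup RENAME TO mytable;
| -- note: CREATE TABLE AS does not copy indexes or constraints,
| -- so recreate them here before committing
| COMMIT;
`----

If the rows are not exact copies (only the key repeats), a correlated
DELETE that keeps one row per key works instead, but on ~360,000 rows
the rebuild is usually the faster job.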
By the way: you wrote that there is a primary key on the first column.
Really?
,----[ psql session; messages translated from German ]
| test_db=# create table blub (id int primary key, name varchar);
| NOTICE: CREATE TABLE / PRIMARY KEY will create implicit index "blub_pkey" for table "blub"
| CREATE TABLE
| test_db=# insert into blub values (1, 'x');
| INSERT 970706 1
| test_db=# insert into blub values (1, 'y');
| ERROR: duplicate key violates unique constraint "blub_pkey"
`----
In other words: if there is a primary key on the first column, you
cannot insert duplicate values.
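So the fix for the future is to add that constraint once the table is
clean. A minimal sketch, again with the placeholder names mytable/id:

,----[ hypothetical sketch: enforce uniqueness after the cleanup ]
| -- this fails while duplicates are still present, which is the point:
| ALTER TABLE mytable ADD PRIMARY KEY (id);
`----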
Regards, Andreas
--
Andreas Kretschmer (contact: see header)
Heynitz: 035242/47212, D1: 0160/7141639
GnuPG-ID 0x3FFF606C http://wwwkeys.de.pgp.net
=== Schollglas Unternehmensgruppe ===