From: Lee Harr <missive(at)frontiernet(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: serial columns & loads misfeature?
Date: 2002-06-29 22:45:00
Message-ID: afld9c$1eni$1@news.hub.org
Lists: pgsql-general
> After I created the DB, I inserted the data (thousands of inserts) via
> psql. All went well. Then I started testing the changed code (Perl)
> and when I went to insert, I got a "dup key" error.
>
> It took me a while to figure out what was going on, but I can recreate
> the problem with:
>
> create table test (s serial, i int);
> insert into test values (1,1);
> insert into test values (2,2);
> insert into test values (3,3);
> insert into test (i) values (4);
> ERROR: Cannot insert a duplicate key into unique index test_s_key
>
With these inserts you are bypassing the SERIAL mechanism. A SERIAL
column is just an integer column whose DEFAULT is nextval() on an
implicitly created sequence; when you supply the value yourself, the
sequence never advances, so the next defaulted insert collides with
a key you already loaded.
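
Had the loads let the DEFAULT supply the key from the start, the
sequence would have stayed in step. A minimal sketch (this assumes the
implicit sequence got the conventional name test_s_seq):

create table test (s serial, i int);
-- omit the serial column so the DEFAULT fires:
insert into test (i) values (1);
insert into test (i) values (2);
-- or fetch the next value from the sequence explicitly:
insert into test (s, i) values (nextval('test_s_seq'), 3);

Either form draws every key from the sequence, so later defaulted
inserts cannot collide.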
> I was expecting the system to realize new "keys" had been inserted, and
> so when the "nextval" that implicitly happens on a serial field is run,
> it would "know" that it was too small and return "max(s)+1". [FWIW, my
> expectations in this area were set by my experience with Informix and
> mysql, both do this; not sure if other RDBMs do.]
>
I can certainly see the advantage of having SERIAL columns set
properly by some kind of OtherDB --> Postgres conversion tool, but
I do not think a different mechanism is needed in the usual case.
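
If the data is already loaded with explicit keys, you can bring the
sequence back in step by hand instead. A sketch, again assuming the
sequence is named test_s_seq and the table is non-empty:

-- set the sequence's last value to the current maximum key:
select setval('test_s_seq', (select max(s) from test));
-- the next defaulted insert now gets max(s)+1:
insert into test (i) values (5);

This is the usual one-time step after a bulk load.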