From: Mark Felegyhazi <m_felegyhazi(at)yahoo(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Insert unique fails, still increments ID in a lookup table
Date: 2009-09-21 19:33:44
Message-ID: 813829.42582.qm@web54408.mail.yahoo.com
Lists: pgsql-general
Hi,
Could you please give me some hints on how to optimize this in my DB? I have a main table and a lookup table as follows:
create table lookup (
    item_id bigserial primary key,
    item_name text unique not null
);
create table main (
    id bigserial primary key,
    item_id bigint references lookup (item_id)
);
When a new item arrives (in a temp table), I check whether it's already in the lookup table and insert it if not. I do this in a trigger function using the following snippet:
------
begin
insert into lookup values (default,NEW.item_name) returning item_id into itid;
exception
when unique_violation then
select into itid item_id from lookup where item_name=NEW.item_name;
end;
NEW.item_id := itid;
------
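For context, the snippet above expanded into a complete, self-contained trigger function might look like the following sketch (the function and trigger names are my own invention, the table and column definitions are as above):

```sql
create or replace function main_set_item_id() returns trigger as $$
declare
    itid bigint;
begin
    begin
        -- try to insert; the bigserial default supplies item_id
        insert into lookup (item_name) values (NEW.item_name)
            returning item_id into itid;
    exception
        when unique_violation then
            -- the name already exists: look up its id instead
            select item_id into itid
              from lookup
             where item_name = NEW.item_name;
    end;
    NEW.item_id := itid;
    return NEW;
end;
$$ language plpgsql;

create trigger main_item_id
    before insert on main
    for each row execute procedure main_set_item_id();
```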
The problem is that a failed insert still consumes a sequence value: the bigserial default calls nextval() before the unique constraint is checked, and sequence values are never given back when the insert rolls back. So I burn through the bigint IDs much faster than the number of distinct items — a waste for 100m+ records...
An example result for the main table where the second item arrives at the 4th record:
id | item_id
----------------
1 | 1
2 | 1
3 | 1
4 | 4
5 | 5
...
the lookup table becomes:
item_id | item_name
----------------------------
1 | apple
4 | orange
5 | banana
...
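One way to avoid consuming a sequence value for every duplicate (a sketch of an alternative, not what the post above uses) is to select first and only insert on a miss, retrying if a concurrent transaction wins the race on the unique index:

```sql
-- Replacement body for the trigger function; itid declared as bigint.
loop
    select item_id into itid
      from lookup
     where item_name = NEW.item_name;
    if found then
        exit;  -- common case: name already present, nextval() never called
    end if;
    begin
        insert into lookup (item_name) values (NEW.item_name)
            returning item_id into itid;
        exit;
    exception
        when unique_violation then
            -- a concurrent transaction inserted the same name first;
            null;  -- loop back and pick up its item_id with the select
    end;
end loop;
NEW.item_id := itid;
```

With this pattern only genuinely new names call nextval(), so the lookup sequence stays dense; the inner begin/exception block is still needed for correctness under concurrency.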
Any thoughts?
Thanks,
Mark
PS: I'd like to keep the unique property, because it makes the insert check fast and simple.