From: Chris Travers <chris(dot)travers(at)gmail(dot)com>
To: Mark Lybarger <mlybarger(at)gmail(dot)com>
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: unique constraint with several null values
Date: 2016-07-21 06:39:56
Message-ID: CAKt_ZfvM10=5_DAGy45siUK=+7TX8UX7HNx809vPnoaDnOrg4A@mail.gmail.com
Lists: pgsql-general
On Wed, Jul 20, 2016 at 8:14 PM, Mark Lybarger <mlybarger(at)gmail(dot)com> wrote:
> I have a relation such as
> create table order_item ( id uuid not null primary key, order_id numeric
> not null, item_code text, make text, model text, reason text, size text,
> expiration_date timestamp );
>
> where the combination of the columns order_id, item_code, make, model,
> reason, size must be unique (unless there's an expiration date).
>
> I'm inclined to use a unique index:
>
> create unique index unique_index_order_item_1 on order_item (order_id,
> item_code, make, model, reason, size)
> where expiration_date is null;
>
You can also:
create unique index unique_index_order_item_1 on order_item
(coalesce(order_id, -1), coalesce(item_code, '--NULL--'),
coalesce(make, '--NULL--'), coalesce(model, '--NULL--'),
coalesce(reason, '--NULL--'), coalesce(size, '--NULL--'),
coalesce(expiration_date, '1700-01-01 00:00:00'));
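
With that index, two rows that differ only by having NULL in one of the key
columns coalesce to the same sentinel value, so the second insert is
rejected. A minimal sketch, assuming the table and index above (the uuid
literals are illustrative):

```sql
-- Both rows index NULL model as '--NULL--', so the second insert
-- violates the unique index.
insert into order_item values
  ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a11', 88, 'CODE', 'MAKE', null,
   'REASON', 'SIZE', null);  -- adds
insert into order_item values
  ('a0eebc99-9c0b-4ef8-bb6d-6bb9bd380a12', 88, 'CODE', 'MAKE', null,
   'REASON', 'SIZE', null);  -- ERROR: duplicate key value
```

One caveat: the sentinel values must never occur as real data, or
legitimate rows will collide with rows containing NULL.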
>
> This works as expected and the duplicate row is rejected:
> insert into order_item values ( 1, 88, 'CODE', 'MAKE', 'MODEL', 'REASON',
> 'SIZE', null); <- first adds
> insert into order_item values ( 2, 88, 'CODE', 'MAKE', 'MODEL', 'REASON',
> 'SIZE', null); <- rejects
>
> However, nulls are allowed for all the columns except order_id, so when I
> add a null value, the index fails to meet my expectations:
>
> insert into order_item values ( 1, 88, 'CODE', 'MAKE', null, 'REASON',
> 'SIZE', null); <- first adds
> insert into order_item values ( 2, 88, 'CODE', 'MAKE', null, 'REASON',
> 'SIZE', null); <- adds, but should reject.
>
> This leads me to think I need to create 2^5 or 32 unique constraints to
> handle the various combinations of data that I can store. Until now, this
> integrity is handled in the application code. That breaks when the
> application is multi-threaded and the rules are not applied at the database
> level.
>
> Another solution I can think of is to just use a trigger to prevent the
> duplicate rows.
>
> Any thoughts are certainly appreciated. I can't do much about the data
> model itself right now, I need to protect the integrity of the data.
>
> Thanks!
> -mark-
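
For reference, the trigger route Mark mentions could look roughly like the
sketch below (untested; the function and message names are illustrative).
NULL-safe comparison is done with IS NOT DISTINCT FROM. Note that unlike a
unique index, an EXISTS check in a trigger is not race-free under
concurrent inserts without additional locking or serializable isolation:

```sql
create or replace function order_item_no_dups() returns trigger as $$
begin
    -- Rows with an expiration date are exempt, matching the partial index.
    if new.expiration_date is not null then
        return new;
    end if;
    if exists (
        select 1 from order_item o
        where o.order_id = new.order_id
          and o.item_code is not distinct from new.item_code
          and o.make      is not distinct from new.make
          and o.model     is not distinct from new.model
          and o.reason    is not distinct from new.reason
          and o.size      is not distinct from new.size
          and o.expiration_date is null
          and o.id <> new.id
    ) then
        raise exception 'duplicate order_item for order %', new.order_id;
    end if;
    return new;
end;
$$ language plpgsql;

create trigger order_item_no_dups
    before insert or update on order_item
    for each row execute procedure order_item_no_dups();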
--
Best Wishes,
Chris Travers
Efficito: Hosted Accounting and ERP. Robust and Flexible. No vendor
lock-in.
http://www.efficito.com/learn_more