From: Sam Mason <sam(at)samason(dot)me(dot)uk>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Dynamic table
Date: 2009-06-16 09:47:41
Message-ID: 20090616094740.GF5407@samason.me.uk
Lists: pgsql-general
On Tue, Jun 16, 2009 at 09:11:20AM +0200, A B wrote:
> I have a standard table filled with customers (with a unique customer
> id, names etc.) and for each customer I need to store some integer
> values. The problem is that the number of integer values that should
> be stored WILL change over time (it can both increase and decrease).
> It will most likely grow from zero to 10-18 and then perhaps add 1 or
> remove one value per year but basically be about 10-18 or so.
>
> I must be able to add a new integer value to all customers, and remove
> an integer value from all customers. Altering the table by adding and
> deleting columns would theoretically solve it, but since columns are
> not really dropped, just hidden away, and there is a 1600 column
> limit on tables as I understand it, this would crash the application
> at some time in the future, or at least it will waste a lot of
> disk space.
> Method B)
> The EAV way.
> With a meta table
> CREATE TABLE metadata (id integer primary key, name varchar(30),
> defaultvalue integer);
>
> and then the values
> CREATE TABLE eav (customer_id integer references customer, key integer
> references metadata on delete cascade, value integer,
> unique(customer_id, key) );
The way you described the problem, the EAV solution sounds like the best
match--though I'm not sure I'd use your synthetic keys; they save a bit
of space on disk but make queries much more complicated to write.
EAV-style solutions are rarely good or easy to maintain when the problem
changes, so maybe you can take a step back from the problem and solve it
some other way.
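For example, using the attribute name itself as the key avoids a join
against the metadata table in every lookup. A sketch of what I mean
(table and column names here are illustrative, not from your schema):

```sql
-- Sketch only: natural text key instead of a synthetic integer key.
CREATE TABLE eav (
    customer_id integer REFERENCES customer,
    key         text    NOT NULL,
    value       integer,
    UNIQUE (customer_id, key)
);

-- Reading one attribute is then a single-table lookup, no join needed:
SELECT value
FROM eav
WHERE customer_id = 42 AND key = 'shoe_size';
```

With your synthetic keys the same query would need a join against
metadata (or a hard-coded id) just to find out which row means
"shoe_size".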
The examples you gave (e.g. shoe size, hair length) would fit normal
table columns much better. Maybe just use a set of tables, one for each
group of related attributes (e.g. physical attributes, personal
preferences, whatever fits your data). You'd also be able to use
appropriate data types and constraints; for example, shoe size could
have two columns: one for the measurement system (e.g. European, US,
UK...) and the other for the actual measurement.
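A sketch of what one such table could look like (the names and the
CHECK constraint are illustrative assumptions on my part):

```sql
-- Illustrative sketch: one table per group of related attributes.
CREATE TABLE physical_attributes (
    customer_id      integer PRIMARY KEY REFERENCES customer,
    hair_length_cm   numeric,
    -- measurement system kept separate from the measurement itself
    shoe_size_system text CHECK (shoe_size_system IN ('EU', 'US', 'UK')),
    shoe_size        numeric
);

-- Adding an attribute later is a plain ALTER TABLE:
ALTER TABLE physical_attributes ADD COLUMN eye_colour text;
```

Dropping a column here runs into the same "not really removed" caveat
you mentioned, but at 10-18 attributes changing by about one a year
you'd be nowhere near the 1600 column limit.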
Just had a quick flick through your previous posts, and I'd probably
stick with the multiple tables approach. It's the most natural fit to
relational databases, and until you know more about the problem (i.e.
you've experienced the data you're going to be getting and the ways it's
going to change) you can't do much better.
--
Sam http://samason.me.uk/