| From: | Stefan Schwarzer <stefan(dot)schwarzer(at)grid(dot)unep(dot)ch> |
|---|---|
| To: | pgsql-general(at)postgresql(dot)org, Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com> |
| Subject: | Re: Normalized Tables & SELECT [was: Find "smallest common year"] |
| Date: | 2007-10-01 14:28:38 |
| Message-ID: | 618997C9-A5A5-4491-9A8A-A4B018AD8A1B@grid.unep.ch |
| Lists: | pgsql-general |
>> I find it far easier to maintain normalized tables that produce
>> non-normalized ones (for things like data warehousing) than it is to
>> maintain non-normalized tables and try to produce normalized data
>> from that.
>
> Ok, I do understand that.
>
> So, instead of the earlier mentioned database design, I would have
> something like this:
>
> - one table for the country names/ids/etc. (Afghanistan, 1;
> Albania, 2....)
> - one table for the variable names/ids/etc. (GDP, 1; Population,
> 2; Fish Catch, 3;....)
> - one table for the years names/ids/etc. (1970, 1; 1971, 2;
> 1973, 3; ....)
> and
> - one table for all "statistical data" with four fields -
> id_variable, id_country, id_year, and the actual value
>
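For concreteness, the four-table design quoted above might look like this in SQL. All table and column names here are illustrative assumptions on my part, not something from the original message:

```sql
-- A minimal sketch of the normalized design described above
-- (names are illustrative, not from the original post).
CREATE TABLE country (
    id   serial PRIMARY KEY,
    name text NOT NULL          -- e.g. 'Afghanistan', 'Albania'
);

CREATE TABLE variable (
    id   serial PRIMARY KEY,
    name text NOT NULL          -- e.g. 'GDP', 'Population', 'Fish Catch'
);

CREATE TABLE year (
    id   serial PRIMARY KEY,
    year integer NOT NULL       -- e.g. 1970, 1971, 1973
);

-- One row per (variable, country, year) observation.
CREATE TABLE statistical_data (
    id_variable integer NOT NULL REFERENCES variable(id),
    id_country  integer NOT NULL REFERENCES country(id),
    id_year     integer NOT NULL REFERENCES year(id),
    value       numeric,
    PRIMARY KEY (id_variable, id_country, id_year)
);
```

The composite primary key guarantees at most one value per variable, country, and year, which is the invariant the single "statistical data" table relies on.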
Given around 500 different variables/data sets, what is your view on
using a single table for all the data versus one table per variable?
In terms of "readability" I guess the second solution would be
better. But then... I don't know.
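For what it's worth, the single-table design can still yield a readable cross-tab with one query. This is a hypothetical sketch assuming the four-field table quoted above, with made-up table and column names:

```sql
-- Assumed schema: statistical_data(id_variable, id_country, id_year, value)
-- plus lookup tables country, variable, year. One row per country and year,
-- one column per selected variable, pivoted with conditional aggregation.
SELECT c.name AS country,
       y.year,
       max(CASE WHEN v.name = 'GDP'        THEN d.value END) AS gdp,
       max(CASE WHEN v.name = 'Population' THEN d.value END) AS population
FROM statistical_data d
JOIN country  c ON c.id = d.id_country
JOIN variable v ON v.id = d.id_variable
JOIN year     y ON y.id = d.id_year
GROUP BY c.name, y.year
ORDER BY c.name, y.year;
```

With one table per variable, the equivalent report would instead join (or UNION) as many tables as there are variables involved.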
Thanks for any views....
Stef
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Richard Huxton | 2007-10-01 14:35:29 | Re: sha1 function |
| Previous Message | Stefan Schwarzer | 2007-10-01 14:17:15 | Re: Normalized Tables & SELECT [was: Find "smallest common year"] |