From: Ow Mun Heng <Ow(dot)Mun(dot)Heng(at)wdc(dot)com>
To: Michael Glaesemann <grzm(at)seespotcode(dot)net>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Column as arrays.. more efficient than columns?
Date: 2007-09-07 01:46:26
Message-ID: 1189129586.17218.19.camel@neuromancer.home.net
Lists: pgsql-general
On Thu, 2007-09-06 at 20:20 -0500, Michael Glaesemann wrote:
> On Sep 6, 2007, at 19:58 , Ow Mun Heng wrote:
>
> > Don't denormalise the table?
>
> Yes. Don't denormalize the tables.
I believe performance would be better with it denormalised (in this
case).
>
> > don't put them into arrays?
>
> Yes. Don't use arrays. Caveat: if the data is *naturally* an array
> and you will not be doing any relational operations on individual
> elements of the arrays, then it makes sense to use arrays. Treat
> arrays as you would any other opaque type.
The data is naturally an array, and will be used as an array in any
case, since there will not be queries where users select any one value
from the array; they will always select the whole array.
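To make it concrete, here is a rough sketch of the array layout I have
in mind (table and column names are illustrative, not my actual
schema):

    CREATE TABLE foo (
        code text PRIMARY KEY,
        vals integer[]   -- the v1..v4 values held as one array
    );

    INSERT INTO foo VALUES ('B', '{10,12,15,22}');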
The data will be used in this form:
    code | v1 | v2 | v3 | v4
    -----+----+----+----+----
     A   |  1 |  2 | 10 | 23
     B   | 10 | 12 | 15 | 22
     C   | 11 | 24 | 18 | 46
     D   | 21 | 22 | 20 | 41
which will be imported into statistical software/Excel for further
manipulation.
If I give it to them in the normalised form, it'll take them an
additional 30 minutes or so to get it back into the form above.
Denormalising will make the queries more efficient too.
For example, with an index on code:

    CREATE INDEX foo_code_idx ON foo (code);

    SELECT * FROM foo WHERE code = 'B';
By denormalising, I will also get the benefit of reducing the number of
rows by a factor of 20 (20 normalised rows collapse into 1 row per
code).
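For reference, rolling the normalised rows back up into arrays could be
done with something like this (raw(code, seq, value) is a hypothetical
layout for the normalised table; the ARRAY(subselect) constructor works
on the 8.x releases):

    SELECT DISTINCT code,
           ARRAY(SELECT r2.value
                 FROM raw r2
                 WHERE r2.code = r1.code
                 ORDER BY r2.seq) AS vals
    FROM raw r1;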