From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Lee Hachadoorian <lee(dot)hachadoorian(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: JOIN column maximum
Date: 2012-01-06 02:22:57
Message-ID: CAOR=d=3KwedYhmTUe24G0fTi6=fnh0KCtBpsbTXPPTSkTiEKbw@mail.gmail.com
Lists: pgsql-general
On Thu, Jan 5, 2012 at 6:10 PM, Lee Hachadoorian
<lee(dot)hachadoorian(at)gmail(dot)com> wrote:
>
> Many of the smaller geographies, e.g. census tracts, do in fact have data
> for the vast majority of the columns. I am trying to combine it all into one
> table to avoid the slowness of multiple JOINs (even though in practice I'm
> never joining all the tables at once). EAV sounds correct in terms of
> normalization, but isn't it usually better performance-wise to store
> write-once/read-many data in a denormalized (i.e. flattened) fashion? One of
> these days I'll have to try to benchmark some different approaches, but for
> now planning on using array columns, with each "sequence" (in the Census
> sense, not the Postgres sense) of 200+ variables in its own array rather
> than its own table.
Are you using arrays or hstore?
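To make the distinction behind that question concrete, here is a minimal sketch of the two storage layouts under discussion. Table names, column names, and the variable codes are hypothetical, not from the thread: one float8[] array per Census "sequence" keeps 200+ values in a single positional column, while hstore stores variable-name => value pairs and avoids positional bookkeeping at the cost of repeating key text in every row.

```sql
-- Array approach (sketch): each sequence's ~200 variables packed into one
-- array, addressed by position. Requires a separate mapping of
-- position -> Census variable name, maintained outside the table.
CREATE TABLE tract_seq0001 (
    geoid text PRIMARY KEY,
    vals  float8[]          -- vals[1] = first variable of this sequence, etc.
);

-- hstore approach (sketch): variable names as keys, values as text.
-- No positional lookup needed, but every row stores the key strings.
CREATE EXTENSION IF NOT EXISTS hstore;
CREATE TABLE tract_estimates (
    geoid text PRIMARY KEY,
    vars  hstore            -- e.g. 'B01001_003' => '4512'
);

-- Reading a single variable from each layout:
SELECT geoid, vals[3] AS est
  FROM tract_seq0001;

SELECT geoid, (vars -> 'B01001_003')::float8 AS est
  FROM tract_estimates;
```

Either layout collapses the wide multi-table JOIN into a per-row lookup; which performs better depends on access patterns, so benchmarking both, as Lee proposes, is the right next step.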