From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Lee Hachadoorian <lee(dot)hachadoorian(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: JOIN column maximum
Date: 2012-01-05 23:18:13
Message-ID: 22933.1325805493@sss.pgh.pa.us
Lists: pgsql-general
Lee Hachadoorian <lee(dot)hachadoorian(at)gmail(dot)com> writes:
> How is the number of columns in a join determined? When I combine somewhere
> around 90 tables in a JOIN, the query returns:
> ERROR: joins can have at most 32767 columns
It's the sum of the number of columns in the base tables.
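To make the arithmetic concrete (the exact counts are illustrative, since the thread only says "around 90" tables of "over 300" columns each):

    90 tables x 365 columns/table = 32,850 columns > 32,767

so a join of roughly that shape overflows the limit even though no single table comes close to it.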
> I'm sure most people will say "Why the hell are you joining 90 tables."
Not only that, but why are you working with over-300-column tables?
Seems like your schema design needs rethinking.
> I've asked this list before for advice on how to work with the
> approximately 23,000 column American Community Survey dataset,
Are there really 23000 populated values in each row? I hesitate to
suggest an EAV approach, but it kinda seems like you need to go in that
direction. You're never going to get decent performance out of a schema
that requires 100-way joins, even if you avoid bumping up against hard
limits.
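For concreteness, the EAV direction Tom suggests could be sketched like this (a minimal illustration; the table and column names, and the sample codes, are hypothetical, not from the thread):

```sql
-- One row per (geography, variable) pair instead of ~23,000 columns.
CREATE TABLE acs_variable (
    var_id   integer PRIMARY KEY,
    var_name text NOT NULL        -- e.g. an ACS variable code
);

CREATE TABLE acs_value (
    geo_id  text    NOT NULL,     -- geography identifier
    var_id  integer NOT NULL REFERENCES acs_variable,
    value   numeric,
    PRIMARY KEY (geo_id, var_id)
);

-- Pulling a handful of variables for one geography then needs
-- no wide multi-table join, just an index scan:
SELECT v.var_name, a.value
FROM acs_value a
JOIN acs_variable v USING (var_id)
WHERE a.geo_id = '36061';
```

The trade-off is the usual EAV one: queries that want many variables side by side must pivot (e.g. with crosstab or conditional aggregates), but the schema stays far away from the per-query column limit.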
regards, tom lane