Re: maximum "target list" (maximum columns)

From: Gavin Flower <GavinFlower(at)archidevsys(dot)co(dot)nz>
To: Shaun Cutts <shauncutts(at)factfiber(dot)com>, pgsql-general(at)postgresql(dot)org
Subject: Re: maximum "target list" (maximum columns)
Date: 2016-06-16 21:31:37
Message-ID: 7860af19-fe03-568c-2c66-ca02e46882da@archidevsys.co.nz
Lists: pgsql-general

On 16/06/16 16:39, Shaun Cutts wrote:
> The opinion of database developers is that using more than 1664 columns is bad design, and so the current maximum number of columns is not onerous.
>
> When I simply built applications on top of databases, I was of the same opinion.
>
> However, now my job is to perform some sort of analysis on data from elsewhere. Over the course of a couple of weeks I need to do something with data that is often
> structured into tables of many thousands of columns. Figuring out the appropriate form is part of manipulating the data — something for which I’d
> like to use postgres. My applications, such as they are, handle metadata, not data.
>
> Operations such as crosstab or json_populate_record can easily create rows with many more than 1664 fields. I also use data such as census data — with many different survey fields all applying to geographic areas. For a given application only some are relevant, but ideally I’d create a materialized view for each project with the relevant data. In the best case, the “library” version of the dataset would have just a few very wide tables. Storing the data in json is possible, but it is inconvenient, slower, and often means that type info must be stored
> separately.
>
> In short, IMHO the database shouldn’t force me to structure my data before I understand it, and being able to query it is how I come to understand it. At some scale, practicality dictates that one needs to take “special measures” to handle large volumes of data — large in breadth as well as depth. But this boundary should be on the order of millions of columns, not thousands.
>
> Is there a reason besides “it’s bad design” to disallow tables with many columns?
>
>
>
I've not had to deal with extremely fat database tables myself.

However, back when I was a COBOL programmer, one system had a very big
record type, more than 10 times bigger than anything else I'd ever dealt
with. It was an extreme pain to debug problems with such a large record
(try poring over hex dumps on lineflow!) - so I expect a very wide
database table would be similarly painful.

Could you spread the huge number of columns across a number of database
tables, with some sort of logical grouping?
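
Something along these lines, perhaps - just a rough sketch with made-up
census-style table and column names, where each group of related columns
shares the area key and a project-specific materialized view joins back
only the columns that project needs:

CREATE TABLE area (
    area_id  bigint PRIMARY KEY,
    name     text
);

CREATE TABLE area_population (
    area_id     bigint PRIMARY KEY REFERENCES area,
    total_pop   integer,
    median_age  numeric
    -- ... further population-related columns
);

CREATE TABLE area_housing (
    area_id      bigint PRIMARY KEY REFERENCES area,
    total_units  integer,
    median_rent  numeric
    -- ... further housing-related columns
);

CREATE MATERIALIZED VIEW project_x AS
SELECT a.area_id, a.name, p.total_pop, h.median_rent
FROM area a
JOIN area_population p USING (area_id)
JOIN area_housing h USING (area_id);

Each of the narrower tables stays well under the column limit, and the
view only materializes the columns a particular project cares about.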

Cheers,
Gavin
