From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Steve Lane <slane(at)fmpro(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: update db doesnt work
Date: 2002-05-27 16:28:00
Message-ID: 13805.1022516880@sss.pgh.pa.us
Lists: pgsql-general
Steve Lane <slane(at)fmpro(dot)com> writes:
> Does this limit only apply to a "defined table" (such as a table or view) or
> does it also apply to the result of any select, for example a two-table join
> which would have 1601 output columns?
It would apply to anything that forms a tuple, so yes, a join output is
restricted. The source-code comments may be illuminating:
/*
 * MaxHeapAttributeNumber limits the number of (user) columns in a table.
 * The key limit on this value is that the size of the fixed overhead for
 * a tuple, plus the size of the null-values bitmap (at 1 bit per column),
 * plus MAXALIGN alignment, must fit into t_hoff which is uint8. On most
 * machines the absolute upper limit without making t_hoff wider would be
 * about 1700. Note, however, that depending on column data types you will
 * likely also be running into the disk-block-based limit on overall tuple
 * size if you have more than a thousand or so columns. TOAST won't help.
 */
#define MaxHeapAttributeNumber    1600    /* 8 * 200 */
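
To make the arithmetic concrete, here is a minimal standalone sketch (not
PostgreSQL source) of how the "about 1700" ceiling falls out of t_hoff being
a uint8. The 31-byte fixed overhead is an assumption standing in for
offsetof(HeapTupleHeaderData, t_bits), which varies by server version, and
MAXALIGN is taken here to round up to 8 bytes:

/*
 * Sketch only: find the largest column count whose tuple header still
 * fits in a uint8 t_hoff.  The 31-byte fixed overhead is an assumed
 * stand-in for the offset of t_bits in HeapTupleHeaderData.
 */
#include <stdio.h>

#define MAXIMUM_ALIGNOF 8
#define MAXALIGN(len)    (((len) + MAXIMUM_ALIGNOF - 1) & ~(MAXIMUM_ALIGNOF - 1))
#define BITMAPLEN(natts) (((natts) + 7) / 8)   /* null bitmap: 1 bit per column */

int
main(void)
{
    const unsigned fixed_overhead = 31;   /* assumed offsetof(..., t_bits) */
    unsigned natts = 0;

    /* t_hoff is a uint8, so the MAXALIGN'd header size must stay <= 255 */
    while (MAXALIGN(fixed_overhead + BITMAPLEN(natts + 1)) <= 255)
        natts++;

    printf("largest natts that still fits in t_hoff: %u\n", natts);
    return 0;
}

With those assumed numbers the loop stops a bit above 1700, which is why
MaxHeapAttributeNumber is pinned comfortably below that at 1600.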
I am not sure that we are careful to check natts <=
MaxHeapAttributeNumber everywhere that we really should. It could be
that you would see an error (or buggy behavior:-() in the join case only
if there were actually some nulls in a created tuple. But IMHO the
system ought to reject the attempt to form the join to begin with...
regards, tom lane