From: Andrew Gierth <andrew(at)tao11(dot)riddles(dot)org(dot)uk>
To: "Nasby, Jim" <nasbyj(at)amazon(dot)com>
Cc: David Rowley <david(dot)rowley(at)2ndquadrant(dot)com>, John Naylor <jcnaylor(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: PostgreSQL Limits and lack of documentation about them.
Date: 2018-11-01 08:15:18
Message-ID: 878t2d5jgb.fsf@news-spur.riddles.org.uk
Lists: pgsql-hackers
>>>>> "Nasby," == Nasby, Jim <nasbyj(at)amazon(dot)com> writes:
>> I did try a table with 1600 text columns then inserted values of
>> several kB each. Trying with BIGINT columns the row was too large
>> for the page. I've never really gotten a chance to explore these
>> limits before, so I guess this is about the time.
Nasby> Hmm… 18 bytes doesn’t sound right, at least not for the Datum.
Nasby> Offhand I’d expect it to be the small (1 byte) varlena header +
Nasby> an OID (4 bytes). Even then I don’t understand how 1600 text
Nasby> columns would work; the data area of a tuple should be limited
Nasby> to ~2000 bytes, and 2000/5 = 400.
1600 text columns won't work unless the values are very short or null.
A toast pointer is indeed 18 bytes: a 1-byte varlena header flagging it as
a toast pointer, a 1-byte type tag, then the raw (uncompressed) size, the
saved (stored) size, the toast value oid, and the toast table oid, each of
which is 4 bytes.
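The 18-byte figure can be checked by adding up those fields (field names
here follow PostgreSQL's varatt_external struct; a sketch, not a definitive
layout):

```python
# Rough byte accounting for an on-disk TOAST pointer datum.
varlena_header = 1   # 1-byte header flagging the datum as external
toast_tag      = 1   # type tag identifying the kind of external datum
va_rawsize     = 4   # original (uncompressed) data size
va_extsize     = 4   # size of the data as stored in the toast table
va_valueid     = 4   # oid of the value within the toast table
va_toastrelid  = 4   # oid of the toast table itself

total = (varlena_header + toast_tag + va_rawsize
         + va_extsize + va_valueid + va_toastrelid)
print(total)  # 18
```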
A tuple can be almost as large as a block; the block/4 threshold (about
2000 bytes with 8k blocks) is only the point at which the toaster is run,
not a limit on tuple size.
So (with 8k blocks) the limit on the number of non-null external-toasted
columns is about 450, while you can have the full 1600 columns if they
are integers or smaller, or just over 1015 bigints. But you can have
1600 text columns if they average 4 bytes or less (excluding length
byte).
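The "about 450" figure falls out of simple arithmetic (overhead sizes below
are assumptions for an 8k block: page header, line pointer, and a
MAXALIGN'd fixed tuple header — a back-of-envelope sketch, not exact):

```python
# Approximate usable data area of one tuple filling an 8 kB block.
BLCKSZ       = 8192
page_header  = 24    # assumed page header size
line_pointer = 4     # item pointer for the tuple
tuple_header = 24    # fixed tuple header, 23 bytes MAXALIGN'd to 24

data_area = BLCKSZ - page_header - line_pointer - tuple_header  # ~8140 bytes

print(data_area // 18)   # ~452 non-null 18-byte toast pointers: "about 450"
print(1600 * (1 + 4))    # 1600 texts at 4 data bytes + 1 length byte = 8000
```

The second line shows why 1600 short text columns fit: 8000 bytes of data
is still under the usable area.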
If you push too close to the limit, it may even be possible to overflow
the tuple size by setting fields to null, since the null bitmap is only
present if at least one field is null. So you can have 1010 non-null
bigints, but if you try to do 1009 non-null bigints and one null, it
won't fit (nor will 999 non-nulls and 11 nulls, if I calculated right).
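The mechanism is that the null bitmap (one bit per attribute) is appended
to the 23-byte fixed header before alignment padding. A hedged sketch,
assuming 8-byte MAXALIGN:

```python
# Tuple header size with and without a null bitmap (approximate).
def header_size(natts, has_nulls):
    base = 23                        # fixed tuple header
    if has_nulls:
        base += (natts + 7) // 8     # one bit per attribute, rounded up
    return (base + 7) & ~7           # MAXALIGN to an 8-byte boundary

print(header_size(1010, False))  # 24: no bitmap needed
print(header_size(1010, True))   # 152: the bitmap adds ~128 bytes of header
```

So setting one of 1010 columns to null saves 8 bytes of bigint data but
costs roughly 128 bytes of header, which is how a tuple that fit before
can stop fitting.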
(Note also that dropped columns DO count against the 1600 limit, and
also that they are (for new row versions) set to null and thus force the
null bitmap to be present.)
--
Andrew (irc:RhodiumToad)