From: | "Nasby, Jim" <nasbyj(at)amazon(dot)com> |
---|---|
To: | David Rowley <david(dot)rowley(at)2ndquadrant(dot)com> |
Cc: | John Naylor <jcnaylor(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: PostgreSQL Limits and lack of documentation about them. |
Date: | 2018-10-31 22:48:53 |
Message-ID: | AB174ECE-CBE6-4025-98A6-14B5BDC88738@amazon.com |
Lists: pgsql-hackers
> On Oct 31, 2018, at 5:22 PM, David Rowley <david(dot)rowley(at)2ndquadrant(dot)com> wrote:
>
> On 1 November 2018 at 04:40, John Naylor <jcnaylor(at)gmail(dot)com> wrote:
>> Thanks for doing this. I haven't looked at the rendered output yet,
>> but I have some comments on the content.
>>
>> + <entry>Maximum Relation Size</entry>
>> + <entry>32 TB</entry>
>> + <entry>Limited by 2^32 pages per relation</entry>
>>
>> I prefer "limited to" or "limited by the max number of pages per
>> relation, ...". I think pedantically it's 2^32 - 1, since that value
>> is used for InvalidBlockNumber. More importantly, that seems to be for
>> 8kB pages. I imagine this would go up with a larger page size. Page
>> size might also be worth mentioning separately. Also max number of
>> relation file segments, if any.
>
> Thanks for looking at this.
>
> I've changed this and added a mention of BLCKSZ. I was a bit unclear
> on how much internal detail should go into this.
It’s a bit misleading to say “Can be increased by increasing BLCKSZ and recompiling”, since you’d also need to re-run initdb. Given that messing with BLCKSZ is pretty uncommon, I would simply put a note somewhere mentioning that these values assume the default BLCKSZ of 8192.
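As a sanity check on that figure, here is a one-liner (a sketch, nothing more) deriving the 32 TB ceiling from the default BLCKSZ of 8192 and the 2^32 - 1 addressable pages, with InvalidBlockNumber taking one value out of the range:

-- Max relation size = addressable pages * block size, assuming BLCKSZ = 8192
SELECT pg_size_pretty((2::numeric ^ 32 - 1) * 8192);
-- => 32 TB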
>> + <entry>Maximum Columns per Table</entry>
>> + <entry>250 - 1600</entry>
>> + <entry>Depending on column types. (More details here)</entry>
>>
>> Would this also depend on page size? Also, I'd put this entry before this one:
>>
>> + <entry>Maximum Row Size</entry>
>> + <entry>1600 GB</entry>
>> + <entry>Assuming 1600 columns, each 1 GB in size</entry>
>>
>> A toast pointer is 18 bytes, according to the docs, so I would guess
>> the number of toasted columns would actually be much less? I'll test
>> this on my machine sometime (not 1600 GB, but the max number of toasted
>> columns per tuple).
>
> I did try a table with 1600 text columns then inserted values of
> several kB each. Trying with BIGINT columns, the row was too large for
> the page. I've never really gotten a chance to explore these limits
> before, so I guess now is the time.
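For anyone who wants to repeat that experiment, a minimal psql sketch (table and column names are illustrative; 1600 is the column limit in a default build):

-- Build a table with the maximum 1600 columns, all text
SELECT format('CREATE TABLE wide_tbl (%s)',
              string_agg(format('c%s text', i), ', '))
FROM generate_series(1, 1600) AS i
\gexec

-- Insert a few kB into every column. Whether this fits depends on how many
-- values end up compressed inline versus moved out of line as 18-byte toast
-- pointers; it may still fail with "row is too big".
SELECT format('INSERT INTO wide_tbl SELECT %s',
              string_agg($$repeat('x', 4000)$$, ', '))
FROM generate_series(1, 1600) AS i
\gexec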
Hmm… 18 bytes doesn’t sound right, at least not for the Datum. Offhand I’d expect it to be the small (1 byte) varlena header + an OID (4 bytes). Even then, I don’t understand how 1600 text columns would work; the data area of a tuple should be limited to ~2000 bytes, and 2000/5 = 400.
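A quick check of that arithmetic, taking the 18-byte on-disk toast pointer John cites from the docs and assuming roughly 8160 usable bytes in an 8 kB page:

-- 18-byte external toast pointers filling a whole page, vs. the
-- 5-byte (header + OID) estimate against a ~2000-byte tuple data area
SELECT 8160 / 18 AS cols_if_all_external,   -- 453
       2000 / 5  AS cols_at_5_bytes_each;   -- 400

Either way it comes out well below 1600, which supports the suspicion that the 1600 GB row-size figure is optimistic.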