On 4/28/15 5:41 AM, José Luis Tallón wrote:
> On 04/27/2015 08:49 AM, Jim Nasby wrote:
>> On 4/25/15 1:19 PM, Bruce Momjian wrote:
>>> Note if you are storing a table with rows that exceed 2KB in size
>>> (aggregate size of each row) then the "Maximum number of rows in a
>>> table" may be limited to 4 Billion, see TOAST.
>>
>> That's not accurate though; you could be limited to far less than 4B
>> rows. If each row has 10 fields that toast, you'd be limited to just
>> 400M rows.
>
> ISTM like the solution is almost here, and could be done without too
> much (additional) work:
> * We have already discussed having a page-per-sequence with the new
> SeqAMs being introduced and how that would improve scalability.
> * We have commented on having a sequence per TOAST table
> (hence, 4B toasted values per table each up to 4B chunks in size...
> vs just 4B toasted values per cluster)
>
> I'm not sure that I can do it all by myself just yet, but I sure
> can try if there is interest.
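To put the arithmetic above in context: the on-disk toast pointer stores its value ID as a plain 32-bit OID, which is where the ~4B ceiling comes from. A standalone sketch of the math (the struct below only loosely approximates varatt_external from postgres.h; it is illustration, not backend code):

/*
 * Standalone sketch, not backend code: the struct only approximates
 * varatt_external (postgres.h, 9.x) to show why the value ID is 32 bits.
 */
#include <stdint.h>
#include <stdio.h>
#include <inttypes.h>

typedef uint32_t Oid;               /* OIDs are unsigned 32-bit integers */

typedef struct varatt_external_ish
{
    int32_t va_rawsize;             /* original size of the datum */
    int32_t va_extsize;             /* size as stored in the toast table */
    Oid     va_valueid;             /* chunk_id: unique per toast table */
    Oid     va_toastrelid;          /* OID of the toast table itself */
} varatt_external_ish;

int main(void)
{
    uint64_t value_id_space = (uint64_t) UINT32_MAX + 1;    /* ~4.29 billion */
    int      toasted_cols   = 10;                           /* the example above */

    printf("distinct toasted values per toast table: %" PRIu64 "\n",
           value_id_space);
    printf("row ceiling if every row toasts %d columns: %" PRIu64 "\n",
           toasted_cols, value_id_space / toasted_cols);
    return 0;
}
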
I don't think it would be hard at all to switch toast pointers to being
sequence-generated instead of OID-based. The only potential downside I see is
the extra space required for all the sequences... but that would only
matter on the tiniest of clusters (think embedded), which probably don't
have that many tables to begin with.
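
A very rough toy model of the idea (nothing below is actual PostgreSQL code
or the proposed SeqAM API): today every chunk_id ultimately comes out of the
one cluster-wide OID generator, whereas a sequence per toast table would give
each table its own 32-bit value-id space:

/* Toy model only; just contrasting the two allocation schemes. */
#include <stdint.h>
#include <stdio.h>

/* Current scheme: one cluster-wide counter feeds chunk_ids for every table. */
static uint32_t cluster_oid_counter = 0;

static uint32_t alloc_from_cluster(void)
{
    return ++cluster_oid_counter;
}

/* Proposed scheme: each toast table carries its own counter (the sequence),
 * so each table can use the full 32-bit value-id space by itself. */
typedef struct ToastTable
{
    const char *name;
    uint32_t    seq;
} ToastTable;

static uint32_t alloc_from_table(ToastTable *t)
{
    return ++t->seq;
}

int main(void)
{
    ToastTable a = { "pg_toast_a", 0 };
    ToastTable b = { "pg_toast_b", 0 };

    uint32_t v1 = alloc_from_cluster();   /* table A's first toasted value */
    uint32_t v2 = alloc_from_cluster();   /* table B's first toasted value */
    printf("shared counter: A got %u, B got %u\n", v1, v2);

    printf("per-table:      A got %u, B got %u\n",
           alloc_from_table(&a), alloc_from_table(&b));
    return 0;
}

The per-table counter is what would become a real sequence relation, which is
where the extra-page-per-toast-table overhead I mentioned comes from.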
--
Jim Nasby, Data Architect, Blue Treble Consulting
Data in Trouble? Get it in Treble! http://BlueTreble.com