Re: [HACKERS] Storing rows bigger than one block

From: Mattias Kregert <matti(at)algonet(dot)se>
To: Darren King <darrenk(at)insightdist(dot)com>
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Storing rows bigger than one block
Date: 1998-01-12 17:18:47
Message-ID: 34BA5077.1A880D21@algonet.se
Lists: pgsql-hackers

Darren King wrote:
> > A related question: Is it possible to store tuples over more than one
> > block? Would it be possible to split a big TEXT into multiple blocks?
> Possible, but would cut the access speed to (1 / # blocks), no?

For "big" (multi-block) rows, maybe. Consecutive blocks should be
buffered by the disk or the OS, so I don't think the difference would
be big, or even noticeable.

> There is a var in the tuple header, t_chain, 6.2.1 that has since been
> removed for 6.3. I think its original purpose was with time-travel,
> _but_, if we go with a ROWID instead of an oid in the future, this could
> be put back in the header and would be the actual address of the next
> block in the chain.
>
> Oracle has this concept of chained rows. It is how they implement all
> of their LONG* types and also handle rows of normal types that are
> larger than the block size.

Yes! I can't see why PostgreSQL should not be able to store rows bigger
than one block. I have seen people referring to this limitation every
now and then, but I don't understand why it has to be that way.
Is this something fundamental to PostgreSQL?

/* m */
