Re: [HACKERS] Arbitrary tuple size

From: Bruce Momjian <maillist(at)candle(dot)pha(dot)pa(dot)us>
To: Hannu Krosing <hannu(at)trust(dot)ee>
Cc: Vadim Mikheev <vadim(at)krs(dot)ru>, t-ishii(at)sra(dot)co(dot)jp, pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Arbitrary tuple size
Date: 1999-07-09 16:32:33
Message-ID: 199907091632.MAA00619@candle.pha.pa.us
Lists: pgsql-hackers

> > Well, now consider an update of a 2GB row!
> > I worry not about non-overwriting, but about writing
> > a 2GB log record to WAL - we surely won't be able to do that.
>
> Can't we write just some kind of diff (only the changed pages) to WAL,
> either starting at some threshold or just based on the seek/write logic
> of LOs?
>
> It will add complexity, but having some arbitrary limits seems very
> wrong.
>
> It will also make indexing LOs more complex, but as we don't currently
> index them anyway, it's not a big problem yet.

Well, we do indexing of large objects by using the OS directory code to
find a given directory entry.
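
The page-diff Hannu suggests above could look roughly like the following.
This is only a hypothetical sketch: PAGE_SIZE and log_page() are invented
names, not PostgreSQL's WAL interface, and a real implementation would also
have to record enough metadata to replay the pages on recovery.

    /*
     * Sketch: instead of logging the whole multi-gigabyte row image,
     * compare the old and new images page by page and log only the
     * pages that actually changed.
     */
    #include <stdio.h>
    #include <string.h>

    #define PAGE_SIZE 8192

    /* Stand-in for "append this page image to the log". */
    static void
    log_page(size_t page_no, const char *image, size_t len)
    {
        (void) image;           /* a real logger would copy the bytes */
        printf("logging page %lu (%lu bytes)\n",
               (unsigned long) page_no, (unsigned long) len);
    }

    static void
    log_changed_pages(const char *old_img, const char *new_img, size_t len)
    {
        size_t  off;

        for (off = 0; off < len; off += PAGE_SIZE)
        {
            size_t  chunk = (len - off < PAGE_SIZE) ? len - off : PAGE_SIZE;

            /* Emit a record only for pages whose contents differ. */
            if (memcmp(old_img + off, new_img + off, chunk) != 0)
                log_page(off / PAGE_SIZE, new_img + off, chunk);
        }
    }

    int
    main(void)
    {
        char    old_img[3 * PAGE_SIZE] = {0};
        char    new_img[3 * PAGE_SIZE] = {0};

        new_img[PAGE_SIZE + 10] = 1;    /* dirty only the second page */
        log_changed_pages(old_img, new_img, sizeof(new_img));
        return 0;
    }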

> Why not?
>
> IMHO we should allow _arbitrary_ sizes (in reality probably <= MAXINT),
> but optimize for some known size and tell users that if they exceed it,
> performance will suffer.

If their data goes over a certain size, they can decide to store it in
the file system, as many users do now.
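
As a rough illustration of that workaround (not anything shipped with
PostgreSQL), an application might write the big value to an ordinary file
and keep only its path in a table such as a hypothetical
"CREATE TABLE docs (id int4, path text)"; the directory and table here are
invented for the example.

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>

    /*
     * Write the data to a file and return the path the application would
     * then INSERT into the docs table; returns NULL on failure.
     */
    static char *
    store_blob(int id, const char *data, size_t len)
    {
        char   *path = malloc(64);
        FILE   *fp;

        if (path == NULL)
            return NULL;
        sprintf(path, "/var/lib/blobs/%d.bin", id);

        fp = fopen(path, "wb");
        if (fp == NULL || fwrite(data, 1, len, fp) != len)
        {
            if (fp != NULL)
                fclose(fp);
            free(path);
            return NULL;
        }
        fclose(fp);
        return path;    /* e.g. INSERT INTO docs VALUES (42, '<path>') */
    }

    int
    main(void)
    {
        const char *data = "pretend this is a multi-megabyte document";
        char       *path = store_blob(42, data, strlen(data));

        if (path != NULL)
        {
            printf("stored at %s\n", path);
            free(path);
        }
        return 0;
    }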

--
Bruce Momjian | http://www.op.net/~candle
maillist(at)candle(dot)pha(dot)pa(dot)us | (610) 853-3000
+ If your life is a hard drive, | 830 Blythe Avenue
+ Christ can be your backup. | Drexel Hill, Pennsylvania 19026
