From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: t-ishii(at)sra(dot)co(dot)jp
Cc: Karel Zak - Zakkr <zakkr(at)zf(dot)jcu(dot)cz>, pgsql-hackers <pgsql-hackers(at)postgreSQL(dot)org>
Subject: Re: [HACKERS] compression in LO and other fields
Date: 1999-11-12 06:14:43
Message-ID: 25684.942387283@sss.pgh.pa.us
Lists: pgsql-hackers

Tatsuo Ishii <t-ishii(at)sra(dot)co(dot)jp> writes:
>> LO is a dead end. What we really want to do is eliminate tuple-size
>> restrictions and then have large ordinary fields (probably of type
>> bytea) in regular tuples. I'd suggest working on compression in that
>> context, say as a new data type called "bytez" or something like that.
> It sounds ideal, but I remember that Vadim said inserting a 2GB record
> is not a good idea since it will be written into the log too. If that's
> a necessary limitation from the point of view of WAL, we have to accept
> it, I think.
LO won't make that any better: the data still goes into a table.
You'd have 2GB worth of WAL entries either way.

The only thing LO would do for you is divide the data into block-sized
tuples, so there would be a bunch of little WAL entries instead of one
big one. But that'd probably be easy to duplicate too. If we implement
big tuples by chaining together disk-block-sized segments, which seems
like the most likely approach, couldn't WAL log each segment as a
separate log entry? If so, there's almost no difference between LO and
an inline field for logging purposes.
regards, tom lane