From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Alvaro Herrera <alvherre(at)surnet(dot)cl>
Cc: Mark Dilger <pgsql(at)markdilger(dot)com>, pgsql-hackers(at)postgresql(dot)org, pgsql-general(at)postgresql(dot)org, Jan Wieck <JanWieck(at)Yahoo(dot)com>
Subject: Re: [HACKERS] Avoiding io penalty when updating large objects
Date: 2005-06-29 03:58:59
Message-ID: 4477.1120017539@sss.pgh.pa.us
Lists: pgsql-general pgsql-hackers
Alvaro Herrera <alvherre(at)surnet(dot)cl> writes:
> On Tue, Jun 28, 2005 at 07:38:43PM -0700, Mark Dilger wrote:
>> If, for a given row, the value of c is, say, approximately 2^30 bytes
>> large, then I would expect it to be divided up into 8K chunks in an
>> external table, and I should be able to fetch individual chunks of that
>> object (by offset) rather than having to detoast the whole thing.
> I don't think you can do this with the TOAST mechanism. The problem is
> that there's no API which allows you to operate on only certain chunks
> of data.
There is the ability to fetch chunks of a toasted value (if it was
stored out-of-line but not compressed). There is no ability at the
moment to update it by chunks. If Mark needs the latter then large
objects are probably the best bet.
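
[For illustration, a minimal sketch of the fetch side. The table name "mytable", column "c", and key "id" are hypothetical; the point is that the column has to be stored EXTERNAL (out-of-line, uncompressed) for a slice to be fetched without detoasting the whole value.]

```sql
-- Hypothetical table "mytable" with a large bytea column "c".
-- Force out-of-line, uncompressed storage so slices can be fetched
-- without reading the entire value:
ALTER TABLE mytable ALTER COLUMN c SET STORAGE EXTERNAL;

-- Fetch an 8K slice starting at byte offset 2^20 (1-based position
-- 1048577); only the TOAST chunks covering that range need be read:
SELECT substring(c FROM 1048577 FOR 8192) FROM mytable WHERE id = 42;
```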
I'm not sure what it'd take to support chunkwise update of toasted
fields. Jan, any thoughts?
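
[If the large-object route is taken instead, chunkwise reads and writes are available today through the server-side lo_* functions (or the equivalent libpq calls). A rough sketch follows; the large object OID 16457 is made up, and it assumes the descriptor returned by lo_open is 0, as it is for the first object opened in a transaction.]

```sql
BEGIN;
-- Open large object 16457 (hypothetical OID) for read/write;
-- 131072 | 262144 is INV_WRITE | INV_READ.  lo_open returns a
-- descriptor, 0 for the first object opened in this transaction.
SELECT lo_open(16457, 131072 | 262144);

-- Seek descriptor 0 to byte offset 2^20 (whence = 0, i.e. SEEK_SET)
-- and overwrite just that region, leaving the rest untouched:
SELECT lo_lseek(0, 1048576, 0);
SELECT lowrite(0, 'replacement bytes for this chunk'::bytea);

SELECT lo_close(0);
COMMIT;
```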
regards, tom lane