From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Dmitriy Igrishin <dmitigr(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Large objects.
Date: 2010-09-27 15:01:05
Message-ID: AANLkTinr5s-jKyESwAbX5qW9-Oh6WWUdZZODFNeKw0Kc@mail.gmail.com
Lists: pgsql-hackers
On Mon, Sep 27, 2010 at 10:50 AM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Robert Haas <robertmhaas(at)gmail(dot)com> writes:
>> According to the documentation, the maximum size of a large object is
>> 2 GB, which may be the reason for this behavior.
>
> In principle, since pg_largeobject stores an integer pageno, we could
> support large objects of up to LOBLKSIZE * 2^31 bytes = 4TB without any
> incompatible change in on-disk format. This'd require converting a lot
> of the internal LO access logic to track positions as int64 not int32,
> but now that we require platforms to have working int64 that's no big
> drawback. The main practical problem is that the existing lo_seek and
> lo_tell APIs use int32 positions. I'm not sure if there's any cleaner
> way to deal with that than to add "lo_seek64" and "lo_tell64" functions,
> and have the existing ones throw error if asked to deal with positions
> past 2^31.
>
> In the particular case here, I think that lo_write may actually be
> writing past the 2GB boundary, while the coding in lo_read is a bit
> different and stops at the 2GB "limit".
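Checking the arithmetic on the 4TB figure (assuming the default 8 kB block
size, since LOBLKSIZE is defined as BLCKSZ/4):

    LOBLKSIZE   = BLCKSZ / 4 = 8192 / 4 = 2048 bytes per pg_largeobject page
    max LO size = 2048 * 2^31 = 2^11 * 2^31 = 2^42 bytes = 4TB

A nonstandard BLCKSZ would scale that bound proportionally.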
Ouch. Letting people write data somewhere they can't get it back from
seems double-plus ungood.
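
To make that failure mode concrete, here is a rough, untested sketch using
only the existing libpq large-object calls (the function name and loop
bounds are mine, and it assumes conn is an open connection inside a
transaction block). lo_write's length parameter is a size_t and, per the
reading above, nothing server-side stops the write at 2GB; but lo_lseek's
offset parameter is a plain int, so a read position past 2^31-1 cannot
even be requested:

    #include <stdio.h>              /* SEEK_SET */
    #include "libpq-fe.h"
    #include "libpq/libpq-fs.h"     /* INV_READ, INV_WRITE */

    /* Hypothetical demo, not from the tree: write just past the 2GB
     * boundary, then try to seek back to read the tail. */
    static void
    overflow_demo(PGconn *conn)
    {
        char      buf[8192] = {0};
        long long written = 0;
        Oid       loid = lo_creat(conn, INV_READ | INV_WRITE);
        int       fd = lo_open(conn, loid, INV_WRITE);

        if (fd < 0)
            return;

        /* Push the object a little past 2GB; nothing in lo_write
         * appears to refuse this. */
        while (written < 2147483648LL + 8192)
        {
            if (lo_write(conn, fd, buf, sizeof(buf)) < 0)
                break;
            written += sizeof(buf);
        }
        lo_close(conn, fd);

        /* Now try to read the tail back: lo_lseek takes an int offset,
         * so 2^31 - 1 is the farthest position a client can name.  Any
         * bytes beyond it are stranded until 64-bit seek/tell exist. */
        fd = lo_open(conn, loid, INV_READ);
        lo_lseek(conn, fd, 2147483647, SEEK_SET);
        lo_close(conn, fd);
    }

The proposed lo_seek64/lo_tell64 would presumably mirror the existing
signatures with 64-bit offsets, with the 32-bit versions throwing an
error past 2^31 as described above.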
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise Postgres Company