From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Radosław Smogura <rsmogura(at)softperience(dot)eu>
Cc: Dimitri Fontaine <dimitri(at)2ndquadrant(dot)fr>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Robert Haas <robertmhaas(at)gmail(dot)com>, Peter Eisentraut <peter_e(at)gmx(dot)net>, PG Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: BLOB support
Date: 2011-06-06 08:38:39
Message-ID: BANLkTin91zOwEGjq3oDZ166-Umxn4ttKHw@mail.gmail.com
Lists: pgsql-hackers
2011/6/6 Radosław Smogura <rsmogura(at)softperience(dot)eu>:
> On Sun, 05 Jun 2011 22:16:41 +0200, Dimitri Fontaine wrote:
>>
>> Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> writes:
>>>
>>> Yes. I think the appropriate problem statement is "provide streaming
>>> access to large field values, as an alternative to just fetching/storing
>>> the entire value at once". I see no good reason to import the entire
>>> messy notion of LOBS/CLOBS. (The fact that other databases have done it
>>> is not a good reason.)
>>
>> Spent some time in the archive to confirm a certain “déjà vu”
>> impression. Couldn't find it. Had to manually search in closed commit
>> fests… but here we are, I think:
>>
>> https://commitfest.postgresql.org/action/patch_view?id=70
>> http://archives.postgresql.org/message-id/17891.1246301879@sss.pgh.pa.us
>> http://archives.postgresql.org/message-id/4A4BF87E.7010107@ak.jp.nec.com
>>
>> Regards,
>
> I thought more about this in contrast to the cited references, but I still
> have in mind constructs like
> Blob myWeddingDvd = conn.createBlob(myWeddingStream, size); // a bit outdated,
> we have Blu-ray now
> conn.prepareStatement("INSERT INTO someonetubevideos VALUES (?)")
> where the 1st parameter is myWeddingDvd,
> or, if someone doesn't like Java, they may wish to pass a C++ istream or a C
> FILE instead.
>
> I think (with respect to the considerations below) this implicitly requires
> that LOBs be stored in one centralized place, no matter whether that is the
> file system, a special table, or something else. When the statement is
> processed, there is no way to know which table the LOB will be associated
> with, so if we want to TOAST it, where do we TOAST it? And what happens if
> the insertion is done by an SQL function that chooses the table depending on
> the BLOB content?
>
> A quite interesting idea from the cited patch was the string identifying a
> LOB, but given the above it closes the road for JDBC to create a LOB. I also
> think that constructs which first insert a small LOB into a table in order to
> obtain some driver-dependent API are a little bit old-fashioned.
>
> Possible solutions, if we don't want centralized storage, may be:
> 1. Keep the BLOB in memory, but this may, depending on the implementation,
> limit the size of the initial BLOB.
> 2. Temporarily back up the BLOB in a file, then when the values are stored
> copy the file to the TOAST table; but some changes are still required to
> support LOBs for complex types and arrays.
#1 is useless for multiuser applications. This is a problem of the
current implementation for large TOAST values: you can hold around
"work_mem" bytes in memory, but any larger content should be forwarded
to a file.
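The spill-past-threshold idea can be sketched as follows (a minimal,
client-side illustration only, not PostgreSQL internals; the `SpillBuffer`
class and its threshold parameter are hypothetical stand-ins for the
"work_mem" limit):

```java
import java.io.ByteArrayOutputStream;
import java.io.File;
import java.io.FileOutputStream;
import java.io.IOException;
import java.io.OutputStream;

// Hypothetical sketch: buffer small values in memory and spill larger
// ones to a temp file, analogous to holding ~work_mem bytes in RAM.
public class SpillBuffer extends OutputStream {
    private final int threshold;                 // in-memory limit ("work_mem" analog)
    private ByteArrayOutputStream memory = new ByteArrayOutputStream();
    private OutputStream fileOut;                // non-null once spilled
    private File spillFile;

    public SpillBuffer(int threshold) {
        this.threshold = threshold;
    }

    @Override
    public void write(int b) throws IOException {
        // Spill to disk the moment the next byte would exceed the threshold.
        if (fileOut == null && memory.size() + 1 > threshold) {
            spill();
        }
        (fileOut != null ? fileOut : memory).write(b);
    }

    private void spill() throws IOException {
        spillFile = File.createTempFile("lob", ".tmp");
        spillFile.deleteOnExit();
        fileOut = new FileOutputStream(spillFile);
        memory.writeTo(fileOut);                 // flush the in-memory prefix to disk
        memory = null;
    }

    public boolean isSpilled() {
        return fileOut != null;
    }

    public static void main(String[] args) throws IOException {
        SpillBuffer buf = new SpillBuffer(4);
        buf.write(new byte[]{1, 2, 3});
        System.out.println(buf.isSpilled());     // false: still fits in memory
        buf.write(new byte[]{4, 5});
        System.out.println(buf.isSpilled());     // true: crossed the threshold
        buf.close();
    }
}
```

A server-side version would additionally have to clean up the spill file on
transaction abort or backend crash, which is part of what makes this
non-trivial.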
Pavel
>
> So please give some ideas on how to resolve this, or maybe it has low
> priority?
>
> Regards,
> Radek
>
> --
> Sent via pgsql-hackers mailing list (pgsql-hackers(at)postgresql(dot)org)
> To make changes to your subscription:
> http://www.postgresql.org/mailpref/pgsql-hackers
>