From: Dave Tenny <jeffrey(dot)tenny(at)comcast(dot)net>
To: Fernando Nasser <fnasser(at)redhat(dot)com>
Cc: Nicolas Modrzyk <nicolas(dot)modrzyk(at)inrialpes(dot)fr>, Andreas Prohaska <ap(at)apeiron(dot)de>, "'pgsql-jdbc(at)postgresql(dot)org'" <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: Streaming binary data into db, difference between Blob
Date: 2003-09-10 14:27:46
Message-ID: 3F5F34E2.6060204@comcast.net
Lists: pgsql-jdbc
Fernando Nasser wrote:
> Dave Tenny wrote:
>
>> You could always implement your own logical blob manager that
>> implements blob IDs
>> and breaks blobs into BYTEA records of a particular (manageable)
>> maximum size and associates
>> multiple BYTEA chunks with the blob id.
>> More work, but a least common denominator approach that should be
>> portable to other systems as well.
>>
>
> However, bytea is _not_ streamed on 7.3 backends (unless the patch is
> used, which actually uses PostgreSQL Large Objects as a staging area).
>
> That would be fine for 7.4 where bytea values will be streamed though.
I know nothing of how the backend works, but assuming it doesn't keep
ALL new BYTEA records in memory, this approach gives you some of the
effect of streaming a chunk at a time, so you can control your
upper-bound buffer size.
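A minimal sketch of the chunked-bytea idea suggested earlier in the thread. The `blob_chunks` table, its columns, and the class/method names here are all hypothetical illustrations, not anything from the PostgreSQL driver; the point is that only one bounded chunk is ever buffered on the client between INSERTs.

```java
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;

/*
 * Hypothetical schema assumed by this sketch:
 *   CREATE TABLE blob_chunks (
 *       blob_id  INTEGER NOT NULL,
 *       chunk_no INTEGER NOT NULL,
 *       data     BYTEA   NOT NULL,
 *       PRIMARY KEY (blob_id, chunk_no)
 *   );
 */
public class ChunkedBlobWriter {

    /** Callback receiving one chunk at a time; only one chunk is buffered. */
    interface ChunkSink {
        void accept(byte[] chunk) throws Exception;
    }

    /**
     * Read the stream in bounded-size chunks, handing each to the sink.
     * Returns the number of chunks produced; chunkSize is the upper
     * bound on client-side buffering.
     */
    static int copyToChunks(InputStream in, int chunkSize, ChunkSink sink)
            throws Exception {
        byte[] buf = new byte[chunkSize];
        int chunks = 0;
        int n;
        while ((n = in.read(buf)) > 0) {
            byte[] chunk = new byte[n];          // copy only the bytes actually read
            System.arraycopy(buf, 0, chunk, 0, n);
            sink.accept(chunk);
            chunks++;
        }
        return chunks;
    }

    /** Stream a logical blob into multiple bytea rows keyed by (blob_id, chunk_no). */
    static void store(Connection conn, int blobId, InputStream in, int chunkSize)
            throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(
                "INSERT INTO blob_chunks (blob_id, chunk_no, data) VALUES (?, ?, ?)")) {
            final int[] chunkNo = {0};
            copyToChunks(in, chunkSize, chunk -> {
                ps.setInt(1, blobId);
                ps.setInt(2, chunkNo[0]++);      // chunk ordering preserved by chunk_no
                ps.setBytes(3, chunk);           // one bounded chunk per INSERT
                ps.executeUpdate();
            });
        }
    }
}
```

Reading the blob back is the mirror image: SELECT the chunks ordered by chunk_no and concatenate, which keeps the approach portable to any database with a binary column type.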