Re: long transfer time for binary data

From: Johannes <jotpe(at)posteo(dot)de>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: long transfer time for binary data
Date: 2016-01-23 23:13:27
Message-ID: 56A40917.1070705@posteo.de
Lists: pgsql-general

Am 23.01.2016 um 23:38 schrieb John R Pierce:
> On 1/23/2016 2:19 PM, Johannes wrote:
>> I save my images as large object, which afaik is in practise not
>> readable with a binary cursor (we should use the lo_* functions). And of
>> course I already use the LargeObjectManager of the postgresql jdbc
>> library.
>
>
> afaik, Large Objects are completely independent of the other mode
> stuff. they are stored and transmitted in binary.

Depends on the client. It can be transferred as text or binary. The data
is sliced into bytea segments [1], and afaik it is stored as a binary
string.
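
For what it's worth, the read path through the LargeObjectManager looks
roughly like this (a minimal sketch; the table images(id, img_oid), the
connection settings and the buffer size are just placeholders, not my
actual setup):

import java.io.ByteArrayOutputStream;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LoRead {
    public static void main(String[] args) throws Exception {
        try (Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/mydb", "user", "secret")) {
            // Large object calls must run inside a transaction.
            con.setAutoCommit(false);

            LargeObjectManager lom =
                con.unwrap(PGConnection.class).getLargeObjectAPI();

            try (PreparedStatement ps = con.prepareStatement(
                    "SELECT img_oid FROM images WHERE id = ?")) {
                ps.setInt(1, 42);
                try (ResultSet rs = ps.executeQuery()) {
                    if (rs.next()) {
                        LargeObject lo = lom.open(rs.getLong(1),
                                                  LargeObjectManager.READ);
                        try {
                            // Read in large chunks so an 11 MB image is not
                            // fetched in many tiny round trips.
                            ByteArrayOutputStream out = new ByteArrayOutputStream();
                            byte[] buf = new byte[256 * 1024];
                            int n;
                            while ((n = lo.read(buf, 0, buf.length)) > 0) {
                                out.write(buf, 0, n);
                            }
                            System.out.println("read " + out.size() + " bytes");
                        } finally {
                            lo.close();
                        }
                    }
                }
            }
            con.commit();
        }
    }
}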

> I haven't read this whole ongoing thread, just glanced at messages as
> they passed by over the past week or whatever, but I have to say, I
> would NOT be storing 11MB images directly in SQL, rather, I would store
> it on a file server, and access it with nfs or https or whatever is most
> appropriate for the nature of the application. I would store the
> location and metadata in SQL.

The 11 MB file is the biggest image; the rest are of normal size. I know
the arguments, but there are advantages I want to use in production
(transactions, integrity). If it fails (amount of space? slow import?),
I may exclude the image data.

[1] http://www.postgresql.org/docs/9.5/static/catalog-pg-largeobject.html
