Re: large object write performance

From: "Graeme B(dot) Bell" <graeme(dot)bell(at)nibio(dot)no>
To: Bram Van Steenlandt <bram(at)diomedia(dot)be>, "pgsql-performance(at)postgresql(dot)org list" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: large object write performance
Date: 2015-10-08 12:10:38
Message-ID: 68E43830-5272-4AE7-AAF5-3887E2EB3C01@skogoglandskap.no
Lists: pgsql-performance


> On 08 Oct 2015, at 13:50, Bram Van Steenlandt <bram(at)diomedia(dot)be> wrote:
>>> 1. The part is "fobj = lobject(db.db,0,"r",0,fpath)", I don't think there is anything there

Re: lobject

http://initd.org/psycopg/docs/usage.html#large-objects

"Psycopg large object support *efficient* import/export with file system files using the lo_import() and lo_export() libpq functions.”

Note the word emphasised (*) above.

lobject seems to default to string handling in Python.
That's going to be slow.
Try using lo_import / lo_export instead?
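
As a rough psycopg2 sketch of the difference (the connection string "dbname=test" and the file paths are placeholders, adjust them to your setup):

    import psycopg2

    conn = psycopg2.connect("dbname=test")   # placeholder connection string

    # Slow path: the file contents pass through Python as a string/bytes
    # object before being written into the large object.
    with open("/tmp/bigfile.bin", "rb") as f:
        lo = conn.lobject(0, "wb")            # create a new large object
        lo.write(f.read())                    # data handled by Python
        lo.close()

    # Fast path: passing new_file= makes psycopg use the libpq lo_import()
    # function, so the file is streamed by C code instead of Python.
    lo = conn.lobject(0, "wb", 0, "/tmp/bigfile.bin")
    oid = lo.oid
    lo.close()

    # Exporting goes through lo_export() via the export() method.
    conn.lobject(oid, "rb").export("/tmp/copy.bin")

    conn.commit()
    conn.close()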

Graeme Bell
