From: Alexander Steinert <stony8(at)gmx(dot)de>
To: pgsql-sql(at)postgresql(dot)org
Subject: Re: Large Objects
Date: 2002-02-28 22:56:17
Message-ID: 20020228235617.A1403@tyche.svt.tu-harburg.de
Lists: pgsql-sql
> Don't use them. They were needed when Postgres only supported 8k
> per row. Now you can just use the 'text' datatype for text data
> and the 'bytea' datatype for binary data. You have a limit of a
> few gigs per row with them.
The problem with the text and bytea types is that inserting large amounts
of data incurs a big performance loss, because everything must go through
the SQL parser. I would be glad if someone corrected me on this.
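To illustrate the overhead: with the bytea "escape" format, every
non-printable byte is spelled out as an octal sequence in the statement
text, which the parser then has to chew through. The sketch below is a
simplified illustration of that encoding, not the real libpq escaping
routine, and the `blobs` table is made up:

```python
def escape_bytea(data: bytes) -> str:
    """Simplified sketch of PostgreSQL's bytea escape format."""
    out = []
    for b in data:
        if b == 0x5C:             # a backslash becomes two backslashes
            out.append("\\\\")
        elif 0x20 <= b <= 0x7E:   # printable ASCII passes through unchanged
            out.append(chr(b))
        else:                     # every other byte becomes an octal escape like \377
            out.append("\\%03o" % b)
    return "".join(out)

payload = bytes(range(256)) * 4   # 1 KiB of arbitrary binary data
escaped = escape_bytea(payload)

# Embedding this in a statement inflates it further (inside a SQL string
# literal each backslash doubles again):
sql = "INSERT INTO blobs (data) VALUES ('%s');" % escaped

# the escaped text is roughly 3x the size of the raw payload here
print(len(payload), len(escaped))
```

So a megabyte of binary data turns into several megabytes of statement
text that must be lexed and parsed before a single byte reaches the
table.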
So far I have found no way to guarantee integrity / use PG's transactions
for large objects with satisfactory performance. lo_import/lo_export for
bytea would be a very nice interface for transferring data directly
between the database and files readable/writable by the client process.
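For reference, this is what the existing large-object interface looks
like (the file paths and the OID 16385 are hypothetical). Note that the
server-side functions read and write files on the *server* machine,
which is exactly why a client-side equivalent for bytea would be
attractive:

```
-- Server-side: the path is resolved on the database server.
SELECT lo_import('/tmp/photo.jpg');             -- returns the new large object's OID
SELECT lo_export(16385, '/tmp/photo_copy.jpg'); -- writes the object back to a file

-- Client-side, from psql, against a file on the client machine:
-- \lo_import '/home/me/photo.jpg'
```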
Suggestions are welcome.
Stony