From: Oliver Jowett <oliver(at)opencloud(dot)com>
To: Anton Komarevtsev <avkx(at)rt(dot)mipt(dot)ru>
Cc: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: binary data in `bytea' column
Date: 2004-04-19 22:08:39
Message-ID: 40844DE7.9070802@opencloud.com
Lists: pgsql-jdbc
Anton Komarevtsev wrote:
> Hello.
>
> It's necessary for me to store binary data (images, sounds, texts as
> bytes) in the database. What should I use for better scalability:
> generic `PreparedStatement.setBinaryStream' and
> `ResultSet.getBinaryStream', or Large Objects?
How big is each piece of binary data? How much memory on the JVM side
can you throw at the problem?
If each chunk of data is bigger than around 10% of available memory,
you'll want to use LOs. Otherwise bytea + setBinaryStream should work
and is simpler to use.
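
For the bytea route, a minimal sketch of what that looks like (the table
name "images" and its columns are just illustrative, not from your schema):

import java.io.*;
import java.sql.*;

public class ByteaExample {
    // Write a file into a bytea column by streaming it as a parameter.
    public static void store(Connection conn, int id, File file) throws Exception {
        FileInputStream in = new FileInputStream(file);
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO images (id, data) VALUES (?, ?)");
        ps.setInt(1, id);
        ps.setBinaryStream(2, in, (int) file.length()); // length-prefixed overload
        ps.executeUpdate();
        ps.close();
        in.close();
    }

    // Read the bytea column back as a stream.
    public static byte[] load(Connection conn, int id) throws Exception {
        PreparedStatement ps = conn.prepareStatement(
            "SELECT data FROM images WHERE id = ?");
        ps.setInt(1, id);
        ResultSet rs = ps.executeQuery();
        byte[] result = null;
        if (rs.next()) {
            InputStream in = rs.getBinaryStream(1);
            ByteArrayOutputStream out = new ByteArrayOutputStream();
            byte[] buf = new byte[8192];
            int n;
            while ((n = in.read(buf)) != -1) {
                out.write(buf, 0, n);
            }
            result = out.toByteArray();
            in.close();
        }
        rs.close();
        ps.close();
        return result;
    }
}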
The current driver will require 6-7 times the raw data size in
additional memory when binary data is used as a query parameter, and
roughly the same (I think) when binary data is received as a query
result. I am working on some changes that reduce the parameter overhead
to essentially zero (just waiting on some testing results); however the
query result overhead is going to remain for a while since we really
need the v3 protocol to fix it.
> I use Postgres 7.4, and autocommit=on is hardcoded in it.
7.4 doesn't support autocommit=off on the server side, as I understand
it -- is this what you mean?
> But I've
> heard that Large Objects may only work in a single transaction. So,
> if I should use Large Objects (if so), how should I use them?
You can still change the JDBC-level autocommit setting safely (via
Connection.setAutoCommit) regardless of the server's autocommit
capabilities.
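
If you do go the LO route, a rough sketch using the driver's
org.postgresql.largeobject API looks something like this (it assumes a
table "images(id int, data oid)", which is purely illustrative; the
important part is wrapping the LO calls in a transaction):

import java.io.FileInputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;
import org.postgresql.PGConnection;
import org.postgresql.largeobject.LargeObject;
import org.postgresql.largeobject.LargeObjectManager;

public class LargeObjectExample {
    // Store a file as a large object and record its OID in a table.
    public static void store(Connection conn, int id, String path) throws Exception {
        conn.setAutoCommit(false); // LO calls must run inside a transaction

        LargeObjectManager lom = ((PGConnection) conn).getLargeObjectAPI();
        int oid = lom.create(LargeObjectManager.READ | LargeObjectManager.WRITE);

        LargeObject obj = lom.open(oid, LargeObjectManager.WRITE);
        FileInputStream in = new FileInputStream(path);
        byte[] buf = new byte[8192];
        int n;
        while ((n = in.read(buf)) > 0) {
            obj.write(buf, 0, n);
        }
        in.close();
        obj.close();

        // Remember the OID so the object can be found and opened again later.
        PreparedStatement ps = conn.prepareStatement(
            "INSERT INTO images (id, data) VALUES (?, ?)");
        ps.setInt(1, id);
        ps.setInt(2, oid);
        ps.executeUpdate();
        ps.close();

        conn.commit();
    }
}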
-O