Re: Slow performance updating CLOB data

From: Nicola Zanaga <NZanaga(at)efsw(dot)it>
To: Thomas Kellerer <spam_eater(at)gmx(dot)net>, "pgsql-jdbc(at)postgresql(dot)org" <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: Slow performance updating CLOB data
Date: 2016-07-19 09:34:30
Message-ID: 47856758BAE4794A9EC4FCA2E63FC85E3705963F@exchange.intranet.efsw.it
Lists: pgsql-jdbc

-----Original Message-----
From: pgsql-jdbc-owner(at)postgresql(dot)org [mailto:pgsql-jdbc-owner(at)postgresql(dot)org] On behalf of Thomas Kellerer
Sent: Monday, July 18, 2016, 15:01
To: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: [JDBC] Slow performance updating CLOB data

Nicola Zanaga wrote on 18.07.2016 at 14:28:
>
> I can change strategy for postgres, but I don't think it is good to issue
> a query like "UPDATE table SET clob = 'value' WHERE key = x" if the value is more than 10MB.
>
You should use a PreparedStatement, not string literals.
But apart from that, it won't be any different from the SQL that the driver uses.

Why do you think that would be a problem?
The client needs to send 10MB of data, regardless of _how_ it sends it.

Thomas

I solved my problem by switching to a prepared statement.
Now the performance is in line with the other drivers.
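For reference, a minimal sketch of the kind of statement that fixed it for me; the table and column names (documents, body, id) are invented for illustration:

```java
import java.sql.Connection;
import java.sql.PreparedStatement;

public class ClobUpdate {
    // Hypothetical table/column names for illustration.
    static final String SQL = "UPDATE documents SET body = ? WHERE id = ?";

    static void updateClob(Connection conn, long id, String value) throws Exception {
        try (PreparedStatement ps = conn.prepareStatement(SQL)) {
            // The large value travels as a bind parameter, not as part of
            // the SQL text, so it is never escaped and re-parsed as a literal.
            ps.setString(1, value);
            ps.setLong(2, id);
            ps.executeUpdate();
        }
    }
}
```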

However, in general, sending the full SQL query is not the same thing as using 'setCharacterStream' or 'setBinaryStream' (for a prepared statement) or 'updateCharacterStream' or 'updateBinaryStream' (for an updatable result set).
With streams, a driver can optimize by sending the data to the server in small packets.
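The streaming variant could be sketched like this (again with invented table/column names; the file path is just an example source of a large value):

```java
import java.io.FileInputStream;
import java.io.InputStream;
import java.sql.Connection;
import java.sql.PreparedStatement;

public class BlobStreamUpdate {
    // Hypothetical table/column names for illustration.
    static final String SQL = "UPDATE documents SET payload = ? WHERE id = ?";

    static void updateBlob(Connection conn, long id, String path) throws Exception {
        try (InputStream in = new FileInputStream(path);
             PreparedStatement ps = conn.prepareStatement(SQL)) {
            // Hand the driver a stream instead of a fully materialized
            // byte[]; a driver is then free to read and transmit the
            // data in chunks rather than buffering it all at once.
            ps.setBinaryStream(1, in);
            ps.setLong(2, id);
            ps.executeUpdate();
        }
    }
}
```

Whether the driver actually chunks the transfer is up to its implementation; the stream API merely makes that optimization possible.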

Thanks
