From: Mikko Tiihonen <Mikko(dot)Tiihonen(at)nitorcreations(dot)com>
To: Joshua Engel <postgres(at)purgo(dot)net>, "pgsql-jdbc(at)postgresql(dot)org" <pgsql-jdbc(at)postgresql(dot)org>
Subject: Re: getFastLong gets longs slowly
Date: 2014-05-29 08:46:12
Message-ID: 1401353163152.48455@nitorcreations.com
Lists: pgsql-jdbc
If you use prepared statements, the long is transferred in its raw 8-byte binary representation, not as numeric ASCII that needs to be formatted and parsed. By default, the n-th (3-5, depending on the JDBC driver version) execution of the prepared statement switches to the binary representation.
If you do not reuse prepared statements, you can set prepareThreshold to -1 to force binary transfers from the start (at the cost of an extra initial round-trip to determine the exact in/out parameter types).
The above setting works in the latest git code. In the latest released driver you have to use -Dorg.postgresql.forcebinary=true instead.
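For illustration, the two options described above might be wired up like this (the host, database, and application names are placeholders, not from the message):

```
# Latest git code: force binary transfer via the connection parameter
jdbc:postgresql://localhost:5432/mydb?prepareThreshold=-1

# Latest released driver: use the JVM system property instead
java -Dorg.postgresql.forcebinary=true -jar myapp.jar
```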
-Mikko
________________________________
From: pgsql-jdbc-owner(at)postgresql(dot)org <pgsql-jdbc-owner(at)postgresql(dot)org> on behalf of Joshua Engel <postgres(at)purgo(dot)net>
Sent: 29 May 2014 00:54
To: pgsql-jdbc(at)postgresql(dot)org
Subject: [JDBC] getFastLong gets longs slowly
I tracked down a performance problem to the JDBC driver's getFastLong function.
For very large long values (in particular, Long.MAX_VALUE and Long.MIN_VALUE), a special case in getFastLong throws an exception (FAST_NUMBER_FAILED) because the value has 19 digits rather than at most 18. The exception is then caught and the value is reprocessed by the regular long-parsing code (mostly deferring to Long.getLong).
In the JVMs I'm using (JDK 6 and 7), the delay introduced by that exception is substantial. It's not large on a per-instance basis, but I'm sorting through a great many records, many of which are MAX_VALUE. Several different performance-measurement tools all put the blame right there.
I compiled the code myself with larger values for that cutoff, and get a huge performance increase.
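The pattern being described, a fast digit-by-digit path with an exception-driven fallback for values past a digit-count cutoff, can be sketched roughly as follows. The class and constant names here are hypothetical, not the driver's actual identifiers; an 18-digit cutoff is safe because any 18-digit value fits in a signed long without overflow, while 19-digit values such as Long.MAX_VALUE get pushed to the slow path:

```java
public class FastLongSketch {
    // Maximum digit count handled by the fast path. A signed long can hold
    // up to 19 digits, so a cutoff of 18 sends Long.MAX_VALUE (19 digits)
    // to the slow path on every call.
    static final int FAST_PATH_MAX_DIGITS = 18;

    static long parseLong(String s) {
        int start = 0;
        boolean negative = false;
        if (!s.isEmpty() && s.charAt(0) == '-') {
            negative = true;
            start = 1;
        }
        int digits = s.length() - start;
        if (digits == 0 || digits > FAST_PATH_MAX_DIGITS) {
            // Slow path. In the driver this is reached via a thrown-and-caught
            // exception, which is what makes 19-digit values expensive.
            return Long.parseLong(s);
        }
        // Fast path: no overflow checks needed for at most 18 digits.
        long result = 0;
        for (int i = start; i < s.length(); i++) {
            char c = s.charAt(i);
            if (c < '0' || c > '9') {
                return Long.parseLong(s); // fall back on unexpected input
            }
            result = result * 10 + (c - '0');
        }
        return negative ? -result : result;
    }
}
```

Raising the cutoff to 19 (with an overflow check on the last digit) keeps MAX_VALUE-heavy workloads on the fast path, which matches the speedup reported above.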
So... is there a way I can check that change in?
And incidentally, while I'm at it... is there any reason it's storing my long values as binary coded decimal? Can I get it to stop doing that? The whole point of using a long rather than a string value was presumably to have it work faster.
Thanks,
Joshua Engel
Next message: 夏 | 2014-05-29 09:31:09 | One question about connection parameter stringtype
Previous message: Joshua Engel | 2014-05-28 21:54:16 | getFastLong gets longs slowly