From: Oliver Jowett <oliver(at)opencloud(dot)com>
To: PostgreSQL - JDBC <pgsql-jdbc(at)postgresql(dot)org>
Subject: Implementing setQueryTimeout()
Date: 2008-02-18 01:40:16
Message-ID: 47B8E200.2020307@opencloud.com
Lists: pgsql-jdbc
The discussion around TCP keepalives reminded me that I have wanted to
implement setQueryTimeout() support for a while. Here are my thoughts on
it so far.
My motivation here is that we have an environment where having threads
hang for extended periods when doing DB queries is not good. We must
have them fail the query within a relatively short time, or the entire
system can grind to a halt due to a lack of threads.
This is the same regardless of the cause of the delay - slow query, or a
resultset that's taking a long time to transfer, or the server just died
and TCP hasn't killed the connection yet, or there is a slow query ahead
of us on the same connection.
In essence the requirement is: calling Statement.execute() should either
return a result or throw an exception within some relatively short
period of time. In my case the timeout is on the order of 15 seconds or
less.
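
For reference, the caller side is just the standard JDBC contract;
something like this (timeout value arbitrary) must come back one way or
the other within roughly the requested time:

import java.sql.*;

class CallerSideExample {
    static void run(Connection conn) throws SQLException {
        Statement stmt = conn.createStatement();
        stmt.setQueryTimeout(15);   // seconds - the setting this proposal would make effective
        try {
            // With a working implementation this either returns a result or
            // throws an SQLException within roughly 15 seconds, even if the
            // server is slow or the connection has silently died.
            ResultSet rs = stmt.executeQuery("SELECT pg_sleep(60)");
            rs.close();
        } finally {
            stmt.close();
        }
    }
}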
...
There are two main approaches I see:
(1) Map query timeout to server-side statement_timeout. Add a timeout
parameter to QueryExecutor methods. The protocol layer remembers the
current setting of statement_timeout and issues an appropriate "SET
statement_timeout" as necessary before submitting each query.
Advantages: Gives nice failure characteristics (query is cancelled,
connection remains usable)
Disadvantages: Doesn't help with anything but slow queries on the server
side, relies on server-side query cancellation due to timeout happening
reasonably promptly. Client code that sets statement_timeout itself can
confuse it.
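
As a rough per-statement sketch of what (1) amounts to (illustrative
only: the real change would live in the QueryExecutor/protocol layer
rather than issuing extra Statements, and the method name is made up):

import java.sql.*;

class ServerSideTimeoutSketch {
    // Conceptually what the driver would send around a query whose
    // statement has a timeout set; statement_timeout is in milliseconds.
    static void executeWithServerTimeout(Connection conn, String sql,
                                         int timeoutSeconds) throws SQLException {
        Statement s = conn.createStatement();
        try {
            s.execute("SET statement_timeout = " + (timeoutSeconds * 1000L));
            s.execute(sql);   // server cancels the query if it exceeds the timeout
        } finally {
            // Sketch assumes autocommit; a real implementation would restore
            // the previously remembered statement_timeout value, not 0.
            s.execute("SET statement_timeout = 0");
            s.close();
        }
    }
}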
(2) Run a separate timer thread. Start a timer in Statement.execute()
before submitting the query to the protocol layer. If the timer expires,
close the low-level DB connection (from the timer thread) which should
cause an IOException in the guts of the protocol layer where the query
executing thread is blocked on network I/O, eventually propagating up as
a fatal SQLException to the caller.
The assumption here is that if a thread is blocked inside
Statement.execute(), the eventual cause of that is that something is
blocked on network I/O. I think that's generally true?
Advantages: Should catch all cases regardless of cause.
Disadvantages: Draconian failure mode - a query timeout means the
connection is dead.
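
A minimal sketch of the watchdog mechanism in (2) (again illustrative:
"socket" stands in for the driver's low-level connection, and "runQuery"
for the protocol-layer call that blocks on network I/O):

import java.io.IOException;
import java.net.Socket;
import java.util.Timer;
import java.util.TimerTask;
import java.util.concurrent.Callable;

class WatchdogSketch {
    static <T> T runWithWatchdog(final Socket socket, long timeoutMillis,
                                 Callable<T> runQuery) throws Exception {
        Timer timer = new Timer(true);   // daemon timer thread (a real driver would likely share one)
        TimerTask killer = new TimerTask() {
            public void run() {
                try {
                    // Closing the socket makes the blocked read/write throw an
                    // IOException, which the protocol layer turns into a fatal
                    // SQLException for the caller.
                    socket.close();
                } catch (IOException ignored) {
                }
            }
        };
        timer.schedule(killer, timeoutMillis);
        try {
            return runQuery.call();
        } finally {
            killer.cancel();   // finished in time: disarm the watchdog
            timer.cancel();    // and keep the connection alive
        }
    }
}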
...
I would like to implement (2) but I can see that killing the connection
on timeout may not be desirable in all cases.
Independent of approach, I'd also like to implement a connection
parameter that provides a default query timeout for all statements that
don't override it (this would include internal driver queries, e.g. OID
lookups and metadata queries).
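
For example (the property name below is made up; whatever we settle on):

import java.sql.*;
import java.util.Properties;

class ConnectionDefaultTimeoutSketch {
    static Connection open() throws SQLException {
        Properties props = new Properties();
        props.setProperty("user", "test");
        props.setProperty("password", "test");
        // Hypothetical property: a default timeout (seconds) applied to every
        // statement on this connection - including the driver's own internal
        // queries - unless Statement.setQueryTimeout() overrides it.
        props.setProperty("defaultQueryTimeout", "15");
        return DriverManager.getConnection("jdbc:postgresql://localhost/test", props);
    }
}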
Any thoughts on this?
-O