From: | "Alessandro Depase" <alessandro(dot)depase(at)libero(dot)it> |
---|---|
To: | "dinakar" <din_akar(at)yahoo(dot)com>, <pgsql-jdbc(at)postgresql(dot)org> |
Subject: | Re: problem in handling transactions + jdbc |
Date: | 2004-01-20 12:22:34 |
Message-ID: | 005b01c3df50$15e08800$0200a8c0@dedalus1 |
Lists: pgsql-jdbc
> Hi Depase,
>
> sorry, it was due to the spell check.. the word 'idle'
> changed to 'ideal'..
>
> let me explain my problem once again..
>
> currently I am using Tomcat 4.x, PostgreSQL 7.3.x,
> Linux OS, JSP and Java for building a web application.
>
> I received a mail from my client stating that they
> are facing some problems with the postgres database server:
> whenever the database grows in size, postgres tries to
> use the vacuum database option to shrink it.
>
> while doing so, the vacuum process is unable to
> free up dead rows, because they'd still be accessible
> to some idle transactions...
>
> whenever my application throws any exception the
> transactions are left unhandled, so some
> transactions stay idle even after some 5 to 6 days...
>
> now the problem moves to the server memory: it is
> occupying a lot of memory..
>
> --- previously I was using the code below...
>
> preStmt = con.prepareStatement("BEGIN; SELECT
> fn_list_patientsaudit('" + pstrPatientName + "'," +
> intNoRecords + "," + intStart + ",'cursor_name');");
>
> resultSet = preStmt.executeQuery();
>
> String strCn = "cursor_name" ;
>
> preStmt = con.prepareStatement("FETCH ALL IN \"" +
> strCn + "\"; END;");
>
> resultSet = preStmt.executeQuery();
>
> while (resultSet.next()) {
> ---------
> ------
> }
>
>
> instead of getting the cursor name from postgres,
> I am giving my own name to the cursor.
>
>
> --- the client reported the problem... now I have changed my
> code to the following...
>
> con.setAutoCommit(false);
> preStmt = con.prepareStatement("BEGIN;SELECT
> fn_list_allpatients('cursor_name');");
>
> resultSet = preStmt.executeQuery();
>
> String strCn = "cursor_name";
>
> preStmt = con.prepareStatement("FETCH ALL IN \"" +
> strCn + "\";END;");
>
> resultSet = preStmt.executeQuery();
> while (resultSet.next())
> {
> -----
> -----
> }
>
> con.setAutoCommit(true);
> I am using a finally block to close my connection and
> connection-related objects...
>
> will this work without any problem? I mean, will this
> work for 500 simultaneous users without any problem...
Let's see if I understood you correctly (I have the problem that I don't yet
know what your function is doing, so my answer will not be the final answer
you need and, above all, when you have 500 simultaneous users there could be
a lot of other problems overlapping with this one...):
- you have a function which does a lot of things, the last of which is a
select whose result you store in a cursor. Should your function only perform
a select, it could be better to prepare and execute that select directly
rather than calling a function (this is to be evaluated, but you are doing 2
prepare/execute round trips which could be reduced to 1, and they force you
to turn autocommit off, with the problems you are telling us about). If the
function only performs a select, but you are using it as a wrapper which
lets you modify the SQL code without recompiling the Java class, please
consider that there are several other ways to achieve this (from just
putting the select in a property file to creating a singleton which holds
the queries and hands them out by logical name, etc. etc.)
- you are using the double executeQuery as a workaround to get a ResultSet
back from the function (maybe it would help to look at the new feature of
the 7.4 release, which should let you do such a thing in a more standard
way; see the sketch right after this list).
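Just to give an idea of what I mean by 'more standard way', here is a
minimal sketch. It assumes a 7.4-era server and driver and that
fn_list_allpatients is (or can be) declared to return a refcursor; I only
know the function name from your mail, everything else here is my own
assumption, so please treat it as an illustration and not as working code
taken from your project:

import java.sql.*;

public class RefCursorSketch {
    // Hypothetical helper: calls a function declared as RETURNS refcursor
    // and reads the rows it returns.
    public static void listAllPatients(Connection con) throws SQLException {
        // a refcursor only lives inside a transaction, so autocommit must be off
        con.setAutoCommit(false);
        CallableStatement stmt = con.prepareCall("{ ? = call fn_list_allpatients() }");
        try {
            stmt.registerOutParameter(1, Types.OTHER);
            stmt.execute();
            ResultSet rs = (ResultSet) stmt.getObject(1); // the open cursor, seen as a ResultSet
            while (rs.next()) {
                // ... process the row ...
            }
            rs.close();
            con.commit(); // ends the transaction and closes the cursor
        } finally {
            stmt.close();
            con.setAutoCommit(true);
        }
    }
}

This way there is a single statement to execute, no BEGIN/END glued into the
SQL string, and the transaction is explicitly ended whatever happens.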
If the above is true (but there could be other explanations, I suppose: I'm
just trying to understand the unusual code I see), I think that a finally
clause which closes all the objects can help, but, just to be a little more
sure, not knowing the real internals of PostgreSQL (I'm just a designer like
you, involved in other projects, not one of the PostgreSQL team...), I would
also add a setAutoCommit(true) in the finally block before closing, along
the lines of the sketch below.
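Something like this is what I have in mind (again only a sketch: the class
and method names are invented, the variable names just mirror the ones in
your mail, and the rollback() call is my own addition to make sure any
transaction left open by an exception is really ended, not something your
current code does):

import java.sql.*;

public class CleanupSketch {
    public static void runReport(Connection con) {
        PreparedStatement preStmt = null;
        ResultSet resultSet = null;
        try {
            // ... execute the queries and read the results, as in your code ...
        } finally {
            // close everything, swallowing secondary errors so the connection is always released
            if (resultSet != null) try { resultSet.close(); } catch (SQLException ignore) { }
            if (preStmt != null)   try { preStmt.close();   } catch (SQLException ignore) { }
            if (con != null) {
                try { con.rollback();          } catch (SQLException ignore) { } // end any open transaction
                try { con.setAutoCommit(true); } catch (SQLException ignore) { }
                try { con.close();             } catch (SQLException ignore) { }
            }
        }
    }
}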
But you can face other problems. For example, you could get an exception
while closing the connection (say, because of a network problem). Will the
connection stay up in this case, or can PostgreSQL detect the failure?
Maybe on a network problem it can detect the failure (this is just a
supposition, but if anyone who knows the answer is reading this, it would be
interesting to have an 'official' answer), but what about other possible
problems?
Well, a connection pooler can sometimes help here (you are using one, aren't
you? It can really help when you have a lot of 'simultaneous' users - I
can't believe there are 500 simultaneous accesses to the database, i.e. in
the same fraction of a second: that would mean at least 18 million users in
10 working hours... 500 simultaneous web sessions is much more believable
with the architecture you described), because you are not really closing the
connection, just releasing it, and the pooler, depending on which one you
use and how it is configured, can try to close (roll back, usually) any open
transaction and, on error, invalidate the connection. Sometimes the pool
also performs some checks before handing you a connection, so if there is a
temporary problem it can resolve it. A short sketch of what using a pool
looks like from the application side follows below. There could be a lot of
other scenarios, so it is very difficult to answer your simple question
('will this work without any problem?').
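For what it's worth, from the application side using a pool usually just
means getting the connection from a DataSource instead of DriverManager and
'closing' it to give it back. A minimal sketch, assuming the pool is exposed
by Tomcat via JNDI under the name jdbc/mydb (the resource name and the pool
configuration are my own invention, yours will differ):

import java.sql.Connection;
import java.sql.SQLException;
import javax.naming.Context;
import javax.naming.InitialContext;
import javax.naming.NamingException;
import javax.sql.DataSource;

public class PooledAccessSketch {
    public static void doWork() throws NamingException, SQLException {
        Context ctx = new InitialContext();
        // hypothetical JNDI name: the pool is assumed to be configured in Tomcat as "jdbc/mydb"
        DataSource ds = (DataSource) ctx.lookup("java:comp/env/jdbc/mydb");

        Connection con = ds.getConnection(); // borrowed from the pool, not a new physical connection
        try {
            // ... run the queries exactly as before ...
        } finally {
            con.close(); // hands the connection back to the pool instead of really closing it
        }
    }
}

Whether the pooler rolls back open transactions or validates connections on
return is, as I said, a matter of which pooler you choose and how you
configure it.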
But all of this (my whole answer, really) is a design consideration, not
related to the workings of the JDBC driver and so, I suppose, not really to
this mailing list. However, I hope my answer can help you...
Bye
Alessandro Depase
> please advise...
>
> thanks,
> dinakar