From: Guido Fiala <guido(dot)fiala(at)dka-gmbh(dot)de>
To: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: JDBC and processing large numbers of rows
Date: 2004-05-12 06:37:42
Message-ID: 200405120837.42865.guido.fiala@dka-gmbh.de
Lists: pgsql-jdbc
Reading all this, I'd like to know whether all this isn't just a trade-off between
_where_ the memory is consumed?
If your JDBC client holds everything in memory, it gets an OutOfMemoryError.
If your backend uses cursors, it caches the whole result set and probably
starts swapping and gets slow (it needs memory for every user's result set).
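
For what it's worth, with the PostgreSQL JDBC driver the cursor behaviour is
opt-in: when autocommit is off and a fetch size is set, the driver pulls rows
in batches through a backend portal instead of loading the whole result set on
the client. A minimal sketch (connection URL, credentials and table name are
placeholders):

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.SQLException;
import java.sql.Statement;

public class CursorFetch {
    public static void main(String[] args) throws SQLException {
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "pass");
        con.setAutoCommit(false);   // cursor-based fetching needs a transaction
        Statement st = con.createStatement();
        st.setFetchSize(100);       // fetch 100 rows per round trip, not all at once
        ResultSet rs = st.executeQuery("SELECT * FROM bigtable");
        while (rs.next()) {
            // process one row; only the current batch is held in client memory
        }
        rs.close();
        st.close();
        con.commit();
        con.close();
    }
}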
If you use LIMIT and OFFSET, the database has to do more work to find the
data snippet, and in the worst case (the last few records) it may still need
the whole result set temporarily? (not sure here)
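
(As far as I know, the backend does have to generate and discard all the
OFFSET rows, so later pages get progressively more expensive, though it does
not have to materialize the whole result set at once.) A sketch of such a
paging loop, again with placeholder table and column names:

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;
import java.sql.SQLException;

public class OffsetPaging {
    public static void main(String[] args) throws SQLException {
        Connection con = DriverManager.getConnection(
                "jdbc:postgresql://localhost/test", "user", "pass");
        // ORDER BY is required; without it LIMIT/OFFSET pages are not stable
        PreparedStatement ps = con.prepareStatement(
                "SELECT * FROM bigtable ORDER BY id LIMIT ? OFFSET ?");
        int pageSize = 100;
        for (int offset = 0; ; offset += pageSize) {
            ps.setInt(1, pageSize);
            ps.setInt(2, offset);
            ResultSet rs = ps.executeQuery();
            int rows = 0;
            while (rs.next()) {
                rows++;
                // process one row
            }
            rs.close();
            if (rows < pageSize) break;   // short page means we hit the end
        }
        ps.close();
        con.close();
    }
}

Note that each page is a separate query, so the backend re-runs the scan up
to the requested offset every time.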
Is that just a "choose your poison"? At least in the first case the client's
memory gets used too, rather than putting all the load on the backend; on the
other hand, most of the time the user does not actually read all the data, so
it puts unnecessary load on all the hardware.
I'd really like to know what the best way to go is then...
Guido