From: Oliver Jowett <oliver(at)opencloud(dot)com>
To: Guido Fiala <guido(dot)fiala(at)dka-gmbh(dot)de>
Cc: pgsql-jdbc(at)postgresql(dot)org
Subject: Re: JDBC and processing large numbers of rows
Date: 2004-05-12 13:23:16
Message-ID: 40A22544.8000206@opencloud.com
Lists: pgsql-jdbc

Guido Fiala wrote:
> On Wednesday, 12 May 2004 12:00, Kris Jurka wrote:
>
>>The backend spools to a file when a materialized cursor uses more than
>>sort_mem amount of memory. This is not quite the same as swapping as it
>>will consume disk bandwidth, but it won't hog memory from other
>>applications.
>
>
> Well, that's good on one side, but from the user's point of view it's worse:
>
> He will see a large drop in performance (a factor of 1000) as soon as the
> database starts using the disk for such things. Ok - once the database is
> too large to be held in memory it is disk-bandwidth-limited anyway...
What about the kernel cache? I doubt you'll see a *sudden* drop in
performance... it'll just degrade gradually towards disk speed as your
result set gets larger.
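
For what it's worth, the usual way to keep the client side out of trouble
entirely is to let the driver fetch through a cursor: turn autocommit off
and set a fetch size, and the driver pulls rows in batches instead of
materializing the whole result set. A minimal sketch - the connection URL,
credentials, batch size and table name below are placeholders, not anything
from this thread:

import java.sql.*;

public class CursorFetch {
    public static void main(String[] args) throws SQLException {
        // Placeholder URL and credentials.
        Connection conn = DriverManager.getConnection(
            "jdbc:postgresql://localhost/testdb", "user", "password");
        try {
            // Cursor-based fetching requires autocommit to be off.
            conn.setAutoCommit(false);
            Statement st = conn.createStatement();
            // Fetch 1000 rows at a time instead of the whole result set.
            st.setFetchSize(1000);
            ResultSet rs = st.executeQuery("SELECT * FROM big_table");
            while (rs.next()) {
                // Process one row at a time; only the current batch
                // of rows is held in client memory.
            }
            rs.close();
            st.close();
            conn.commit();
        } finally {
            conn.close();
        }
    }
}

With that in place the memory question moves to the server side, which is
where sort_mem and the spool-to-disk behaviour Kris described come in.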
-O