From: Kris Jurka <books(at)ejurka(dot)com>
To: Stephen Crowley <stephen(dot)crowley(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Large # of rows in query extremely slow, not using index
Date: 2004-09-23 23:22:15
Message-ID: Pine.BSO.4.56.0409231816450.18935@leary.csoft.net
Lists: pgsql-performance
On Tue, 14 Sep 2004, Stephen Crowley wrote:
> Problem solved.. I set the fetchSize to a reasonable value instead of
> the default of unlimited in the PreparedStatement and now the query
> is fast. After some searching it seems this is a common problem; would
> it make sense to change the default value to something other than 0 in
> the JDBC driver?
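The fix Stephen describes looks roughly like the sketch below. The connection URL, table, and column names are placeholders, and the note about autocommit is an assumption about the driver's cursor-based fetching behavior rather than something stated in this thread.

import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.PreparedStatement;
import java.sql.ResultSet;

public class FetchSizeExample {
    public static void main(String[] args) throws Exception {
        // Older drivers need to be loaded explicitly.
        Class.forName("org.postgresql.Driver");

        // Connection details are placeholders.
        Connection conn = DriverManager.getConnection(
                "jdbc:postgresql://localhost/testdb", "user", "password");

        // The driver is expected to honor the fetch size (via a cursor)
        // only when autocommit is off; otherwise the whole result set is
        // buffered in memory on the client.
        conn.setAutoCommit(false);

        PreparedStatement ps = conn.prepareStatement(
                "SELECT id, payload FROM big_table WHERE created > ?");
        ps.setDate(1, java.sql.Date.valueOf("2004-01-01"));

        // Non-zero fetch size: rows are pulled from the backend in chunks
        // of 1000 instead of all at once (the default of 0).
        ps.setFetchSize(1000);

        ResultSet rs = ps.executeQuery();
        while (rs.next()) {
            // process each row without holding the full result in memory
        }
        rs.close();
        ps.close();
        conn.close();
    }
}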
In the JDBC driver, setting the fetch size to a non-zero value means that
the query will be run using what the frontend/backend protocol calls a
named statement. What this means on the backend is that the planner
cannot use the actual values of the query parameters to generate the
optimum plan; it must treat them as generic placeholders and create a
generic plan. For this reason we have decided not to default to a
non-zero fetch size. The default could be made settable by a URL
parameter if you think that is really required.
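The URL-parameter idea might look something like the following sketch. The parameter name used here is purely illustrative; nothing in this thread says the driver accepts it, so treat it as a hypothetical spelling of the suggestion.

import java.sql.Connection;
import java.sql.DriverManager;

public class UrlDefaultFetchSize {
    public static void main(String[] args) throws Exception {
        Class.forName("org.postgresql.Driver");

        // Hypothetical URL parameter: a driver-wide default fetch size chosen
        // at connection time, so statements would start out with a non-zero
        // fetch size instead of the all-at-once default of 0.
        String url = "jdbc:postgresql://localhost/testdb"
                   + "?defaultRowFetchSize=500";

        Connection conn = DriverManager.getConnection(url, "user", "password");
        // Statements created here would inherit the chunked-fetch behaviour,
        // accepting generic query plans in exchange for bounded memory use.
        conn.close();
    }
}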
Kris Jurka