From: Denis Vlasenko <vda(at)ilport(dot)com(dot)ua>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, John R Pierce <pierce(at)hogranch(dot)com>, Neil Conway <neilc(at)samurai(dot)com>, pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #1756: PQexec eats huge amounts of memory
Date: 2005-07-13 14:56:47
Message-ID: 200507131756.47065.vda@ilport.com.ua
Lists: pgsql-bugs
On Wednesday 13 July 2005 17:43, Tom Lane wrote:
> Denis Vlasenko <vda(at)ilport(dot)com(dot)ua> writes:
> > Consider my posts in this thread as a user wish for:
> > * libpq and the network protocol to be changed to allow incremental reads
> >   of executed queries and multiple outstanding result sets,
> > or, if the above looks insurmountable at the moment,
> > * a libpq-only change to allow incremental reads of a single outstanding
> >   result set. An attempt to use pg_numrows, etc., or to execute another
> >   query would force libpq to read and store all remaining rows in the
> >   client's memory (i.e. the current behaviour).
>
> This isn't going to happen because it would be a fundamental change in
> libpq's behavior and would undoubtedly break a lot of applications.
> The reason it cannot be done transparently is that you would lose the
> guarantee that a query either succeeds or fails: it would be entirely
> possible to return some rows to the application and only later get a
> failure.
>
> You can have this behavior today, though, as long as you are willing to
> work a little harder at it --- just declare some cursors and then FETCH
> in convenient chunks from the cursors.
Thanks, I already tried that. It works.
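
[For the archive: the cursor-based workaround Tom describes can be sketched in libpq C roughly as below. Instead of one PQexec() that buffers the entire result set in client memory, the query is wrapped in a cursor and fetched in fixed-size chunks, so only one chunk is ever resident. The connection string, table name (`big_table`), and chunk size of 1000 are illustrative assumptions, not details from this thread.]

```c
/* Sketch: read a large result set in chunks via a cursor, so libpq
 * only ever buffers one FETCH's worth of rows at a time.
 * Connection parameters and the table name are placeholders. */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder conninfo */
    if (PQstatus(conn) != CONNECTION_OK) {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Cursors only exist inside a transaction block. */
    PGresult *res = PQexec(conn, "BEGIN");
    PQclear(res);
    res = PQexec(conn, "DECLARE big_cur CURSOR FOR SELECT * FROM big_table");
    PQclear(res);

    for (;;) {
        /* Each FETCH materializes only this chunk on the client. */
        res = PQexec(conn, "FETCH 1000 FROM big_cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK) {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }
        int nrows = PQntuples(res);
        if (nrows == 0) {            /* cursor exhausted */
            PQclear(res);
            break;
        }
        for (int i = 0; i < nrows; i++)
            printf("%s\n", PQgetvalue(res, i, 0));
        PQclear(res);                /* free this chunk before the next */
    }

    res = PQexec(conn, "CLOSE big_cur");
    PQclear(res);
    res = PQexec(conn, "COMMIT");
    PQclear(res);
    PQfinish(conn);
    return 0;
}
```

Note the trade-off Tom mentions: because rows are consumed before the query finishes, a later FETCH can still fail after earlier chunks have already been processed, so the all-or-nothing guarantee of a single PQexec() is lost.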
--
vda