From: "Timothy H. Keitt" <Timothy.Keitt@StonyBrook.Edu>
To: alexey@price.ru
Cc: pgsql-interfaces@postgresql.org
Subject: Re: unbuffered results from libpq?
Date: 2001-05-21 15:05:41
Message-ID: 3B092EC5.9060906@StonyBrook.Edu
Lists: pgsql-interfaces
I realized after this post that one solution is to use a cursor. You
can then fetch in small chunks and write to a local buffer. The
problem for a general interface application such as mine is nested
transactions --- cursors can only be used within a transaction, which
will fail if the user has independently initiated a transaction
(unless this has been fixed).
Tim
Alexey Nalbat wrote:
> Hello.
>
> I have exactly the same question. Could anybody answer it?
>
> My impression is that it can't be done. Even using the functions PQsetnonblocking,
> PQsendQuery, PQgetResult, PQconsumeInput, ... doesn't help. Is that so?
>
> Thanks.
>
>
>>My impression is that libpq buffers all results of a query. Is there a
>>way to get a "result stream" from the backend instead of having libpq
>>read everything into a local buffer first? Currently, my application,
>>after a query, simply copies the entire buffer from the libpq side into
>>storage on the application side. This seems expensive for large
>>queries. I'd rather read results directly into storage on the
>>application side, i.e., "while input get next value from backend". (I
>>know about libpqeasy, but this only iterates over the libpq buffer.)
>>
>
--
Timothy H. Keitt
Department of Ecology and Evolution
State University of New York at Stony Brook
Stony Brook, New York 11794 USA
Phone: 631-632-1101, FAX: 631-632-7626
http://life.bio.sunysb.edu/ee/keitt/