| From: | Tom Lane <tgl@sss.pgh.pa.us> |
|---|---|
| To: | "Stephen R. van den Berg" <srb@cuci.nl> |
| Cc: | pgsql-hackers@postgresql.org |
| Subject: | Re: Protocol 3, Execute, maxrows to return, impact? |
| Date: | 2008-07-10 14:22:28 |
| Message-ID: | 1116.1215699748@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
"Stephen R. van den Berg" <srb@cuci.nl> writes:
> Then, from a client perspective, there is no use at all, because the
> client can actually pause reading the results at any time it wants,
> when it wants to avoid storing all of the result rows. The network
> will perform the cursor/fetch facility for it.
[ shrug... ] In principle you could write a client library that would
act that way, but I think you'll find that none of the extant ones
will hand back an incomplete query result to the application.
A possibly more convincing argument is that with that approach, the
connection is completely tied up --- you cannot issue additional
database commands based on what you just read, nor pull rows from
multiple portals in an interleaved fashion.
regards, tom lane