From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: Shay Rojansky <roji(at)roji(dot)org>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Entities created in one query not available in another in extended protocol
Date: 2015-06-12 18:06:19
Message-ID: 26872.1434132379@sss.pgh.pa.us
Lists: pgsql-hackers

Simon Riggs <simon(at)2ndquadrant(dot)com> writes:
> On 11 June 2015 at 22:12, Shay Rojansky <roji(at)roji(dot)org> wrote:
>> Just in case it's interesting to you... The reason we implemented things
>> this way is in order to avoid a deadlock situation - if we send two queries
>> as P1/D1/B1/E1/P2/D2/B2/E2, and the first query has a large resultset,
>> PostgreSQL may block writing the resultset, since Npgsql isn't reading it
>> at that point. Npgsql on its part may get stuck writing the second query
>> (if it's big enough) since PostgreSQL isn't reading on its end (thanks to
>> Emil Lenngren for pointing this out originally).
> That part does sound like a problem that we have no good answer to. Sounds
> worth starting a new thread on that.
I do not accept that the backend needs to deal with that; it's the
responsibility of the client side to manage buffering properly if it is
trying to overlap sending the next query with receipt of data from a
previous one. See commit 2a3f6e368 for a related issue in libpq.

            regards, tom lane
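
A minimal sketch of the client-side discipline Tom describes, using libpq's
asynchronous API in non-blocking mode: while the client still has query bytes
to send, it also drains whatever the server has already written, so neither
side ends up stalled on a full buffer. The connection string, table name, and
single-query flow are illustrative assumptions; this is not the Npgsql code
nor the libpq change from commit 2a3f6e368.

/*
 * Sketch: send a (possibly large) query without ever ignoring incoming data.
 * Placeholders: conninfo "dbname=test" and query "SELECT * FROM big_table".
 */
#include <stdio.h>
#include <sys/select.h>
#include <libpq-fe.h>

int
main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");      /* placeholder conninfo */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Non-blocking mode: PQflush may return 1, meaning "not all sent yet". */
    if (PQsetnonblocking(conn, 1) != 0)
    {
        fprintf(stderr, "could not set nonblocking: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Queue the query; placeholder SQL. */
    if (!PQsendQueryParams(conn, "SELECT * FROM big_table",
                           0, NULL, NULL, NULL, NULL, 0))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    int sock = PQsocket(conn);
    int flush_status = PQflush(conn);

    /*
     * Keep pushing outgoing bytes, but whenever the socket is readable,
     * absorb incoming data too, so unread server output never piles up
     * while we are still trying to write.
     */
    while (flush_status == 1)
    {
        fd_set rfds, wfds;

        FD_ZERO(&rfds);
        FD_ZERO(&wfds);
        FD_SET(sock, &rfds);
        FD_SET(sock, &wfds);

        if (select(sock + 1, &rfds, &wfds, NULL, NULL) < 0)
        {
            perror("select");
            PQfinish(conn);
            return 1;
        }

        if (FD_ISSET(sock, &rfds) && !PQconsumeInput(conn))
        {
            fprintf(stderr, "consume failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return 1;
        }

        flush_status = PQflush(conn);   /* 0 = done, 1 = keep going, -1 = error */
    }

    if (flush_status < 0)
    {
        fprintf(stderr, "flush failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Now read results; PQgetResult returns NULL when the query is done. */
    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_TUPLES_OK)
            printf("rows: %d\n", PQntuples(res));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

The same idea extends to pipelining several queries: the sender must treat
"socket readable" as a reason to consume and buffer results even while its own
write is incomplete, which is what avoids the mutual-blocking scenario Shay
describes.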