| From: | Igor Korot <ikorot01(at)gmail(dot)com> |
|---|---|
| To: | Michael Paquier <michael(dot)paquier(at)gmail(dot)com> |
| Cc: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-general <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: Retrieving query results |
| Date: | 2017-08-24 23:05:22 |
| Message-ID: | CA+FnnTwY0mZ1cO8hhBW2sOOGjXaVqfhECUKL6S_cZ6z2LUfcQw@mail.gmail.com |
| Lists: | pgsql-general |
Michael et al,
On Thu, Aug 24, 2017 at 6:57 PM, Michael Paquier
<michael(dot)paquier(at)gmail(dot)com> wrote:
> On Thu, Aug 24, 2017 at 11:56 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I haven't tried it, but it sure looks like it would, if you don't hit
>> OOM first. pqAddTuple() isn't doing anything to guard against integer
>> overflow. The lack of reports implies that no one has ever tried to
>> retrieve even 1G rows, let alone more ...
>
> Yeah, looking at the code we would just need to check if ntups gets
> negative (well, equal to INT_MIN) after being incremented.
So there is no way to retrieve an arbitrary number of rows from the query?
That sucks...
Thank you.
> --
> Michael
| | From | Date | Subject |
|---|---|---|---|
| Next Message | Tom Lane | 2017-08-24 23:10:04 | Re: Retrieving query results |
| Previous Message | Michael Paquier | 2017-08-24 22:57:50 | Re: Retrieving query results |