Re: Retrieving query results

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Igor Korot <ikorot01(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Retrieving query results
Date: 2017-08-24 23:10:04
Message-ID: 17247.1503616204@sss.pgh.pa.us
Lists: pgsql-general

Michael Paquier <michael(dot)paquier(at)gmail(dot)com> writes:
> On Thu, Aug 24, 2017 at 11:56 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
>> I haven't tried it, but it sure looks like it would, if you don't hit
>> OOM first. pqAddTuple() isn't doing anything to guard against integer
>> overflow. The lack of reports implies that no one has ever tried to
>> retrieve even 1G rows, let alone more ...

> Yeah, looking at the code, we would just need to check whether ntups
> goes negative (well, equal to INT_MIN) after being incremented.

I think the real problem occurs where we realloc the array to a larger
size. tupArrSize needs to be kept to no more than INT_MAX --- and, ideally,
it should be able to reach that value rather than dying on the iteration
after it reaches 2^30 (so that we support result sets as large as we possibly
can). Without a range check, it's not very clear what realloc will think
it's being asked for. Also, on 32-bit machines, we could overflow size_t
before tupArrSize even gets that big, so a test against
SIZE_MAX/sizeof(pointer) may be needed as well.

As long as we constrain tupArrSize to be within bounds, we don't
have to worry about overflow of ntups per se.
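
For illustration only, here's a minimal sketch of the kind of bounds-checked
growth being described --- this is not libpq's actual pqAddTuple() code, and
the helper name and void** element type are hypothetical; only the clamp to
INT_MAX and the SIZE_MAX/sizeof(pointer) guard reflect the points above:

    #include <limits.h>
    #include <stdint.h>
    #include <stdlib.h>

    /* Grow a tuple-pointer array, guarding against int and size_t overflow. */
    static int
    grow_tuple_array(void ***tuples, int *tupArrSize, int ntups)
    {
        int     newSize;
        void  **newTuples;

        if (ntups < *tupArrSize)
            return 1;           /* still room, nothing to do */

        if (*tupArrSize >= INT_MAX)
            return 0;           /* cannot grow any further */

        /* double the array, but clamp to INT_MAX instead of overflowing */
        if (*tupArrSize > INT_MAX / 2)
            newSize = INT_MAX;
        else
            newSize = *tupArrSize ? *tupArrSize * 2 : 128;

        /* on 32-bit platforms the byte count can overflow size_t first */
        if ((size_t) newSize > SIZE_MAX / sizeof(void *))
            return 0;

        newTuples = realloc(*tuples, (size_t) newSize * sizeof(void *));
        if (newTuples == NULL)
            return 0;           /* out of memory */

        *tuples = newTuples;
        *tupArrSize = newSize;
        return 1;
    }

With the array size range-checked like that, ntups itself can never be
incremented past INT_MAX, which is the point of the last paragraph.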

regards, tom lane
