Re: Retrieving query results

From: Igor Korot <ikorot01(at)gmail(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Retrieving query results
Date: 2017-08-25 00:00:53
Message-ID: CA+FnnTwyUpQG4z06mhYnES65Wt+3+L6Ke8S=jbPGRZS85=tOLQ@mail.gmail.com
Lists: pgsql-general

Hi,

On Thu, Aug 24, 2017 at 7:18 PM, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> wrote:
> Igor Korot <ikorot01(at)gmail(dot)com> writes:
>> So there is no way to retrieve an arbitrary number of rows from the query?
>> That sucks...
>
> The restriction is on the number of rows in one PGresult, not the total
> size of the query result. You could use single-row mode, or use a cursor
> and fetch some reasonable number of rows at a time. If you try to inhale
> all of a many-gigarow result at once, you're going to have OOM problems
> anyway, even if you had the patience to wait for it. So I don't think the
> existence of a limit is a problem. Failure to check it *is* a problem,
> certainly.

Is there a sample of using single-row mode?
How do I turn it on and off?
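
From the libpq docs, my best guess at the single-row flow is the untested
sketch below (the connection string, table and columns are just
placeholders) - is this the right idea? As far as I can tell the mode only
applies to the query it was set for, so there is nothing to explicitly turn
off afterwards.

/*
 * Untested sketch of single-row mode, pieced together from the libpq docs.
 * The connection string, table and columns are placeholders.
 */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn   *conn = PQconnectdb("dbname=mydb");    /* placeholder conninfo */
    PGresult *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /* Single-row mode needs the asynchronous send functions, not PQexec(). */
    if (!PQsendQuery(conn, "SELECT id, name FROM big_table"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    /*
     * Turn it on: call immediately after PQsendQuery*, before the first
     * PQgetResult().  It only lasts for this query, so it turns itself off.
     */
    if (!PQsetSingleRowMode(conn))
        fprintf(stderr, "could not enter single-row mode\n");

    while ((res = PQgetResult(conn)) != NULL)
    {
        ExecStatusType st = PQresultStatus(res);

        if (st == PGRES_SINGLE_TUPLE)
        {
            /* Exactly one row per PGresult in this mode. */
            printf("%s | %s\n", PQgetvalue(res, 0, 0), PQgetvalue(res, 0, 1));
        }
        else if (st != PGRES_TUPLES_OK)   /* the final, zero-row result */
        {
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        }
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}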

Is there a cursor example that uses prepared statements?
The one in the documentation doesn't use them - it uses PQexec().
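
For the cursor route, the closest I could come up with is the untested
sketch below, which issues the DECLARE through PQexecParams() so the
parameter is bound server-side. I could not tell from the docs whether a $1
inside DECLARE CURSOR is actually accepted, and the names and the batch
size are only placeholders:

/*
 * Untested guess at a cursor loop with an out-of-line parameter.  I am not
 * sure the $1 inside DECLARE is accepted; names and the batch size of 1000
 * are placeholders.
 */
#include <stdio.h>
#include <stdlib.h>
#include <libpq-fe.h>

/* Run a command that should just return PGRES_COMMAND_OK. */
static void run_or_die(PGconn *conn, const char *sql)
{
    PGresult *res = PQexec(conn, sql);

    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "%s failed: %s", sql, PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        exit(1);
    }
    PQclear(res);
}

int main(void)
{
    PGconn     *conn = PQconnectdb("dbname=mydb");   /* placeholder conninfo */
    const char *params[1] = {"42"};                  /* placeholder parameter */
    PGresult   *res;

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        PQfinish(conn);
        return 1;
    }

    run_or_die(conn, "BEGIN");       /* a cursor has to live in a transaction */

    /* Bind the parameter when the cursor is declared (unsure this is allowed). */
    res = PQexecParams(conn,
                       "DECLARE cur CURSOR FOR"
                       " SELECT id, name FROM big_table WHERE id > $1",
                       1, NULL, params, NULL, NULL, 0);
    if (PQresultStatus(res) != PGRES_COMMAND_OK)
    {
        fprintf(stderr, "DECLARE failed: %s", PQerrorMessage(conn));
        PQclear(res);
        PQfinish(conn);
        return 1;
    }
    PQclear(res);

    /* Pull a reasonable number of rows at a time, per Tom's suggestion. */
    for (;;)
    {
        int nrows;

        res = PQexec(conn, "FETCH 1000 FROM cur");
        if (PQresultStatus(res) != PGRES_TUPLES_OK)
        {
            fprintf(stderr, "FETCH failed: %s", PQerrorMessage(conn));
            PQclear(res);
            break;
        }

        nrows = PQntuples(res);
        for (int i = 0; i < nrows; i++)
            printf("%s | %s\n", PQgetvalue(res, i, 0), PQgetvalue(res, i, 1));
        PQclear(res);

        if (nrows == 0)              /* no more rows */
            break;
    }

    run_or_die(conn, "CLOSE cur");
    run_or_die(conn, "COMMIT");
    PQfinish(conn);
    return 0;
}

If the parameterized DECLARE is not allowed, would the right alternative be
to prepare the SELECT itself and use single-row mode instead of a cursor?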

Thank you.

>
> regards, tom lane
