From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: chris(at)bitmead(dot)com
Cc: pgsql-hackers(at)postgreSQL(dot)org
Subject: Re: [HACKERS] Another nasty cache problem
Date: 2000-02-04 06:33:45
Message-ID: 1812.949646025@sss.pgh.pa.us
Lists: pgsql-hackers

Chris Bitmead <chrisb(at)nimrod(dot)itg(dot)telstra(dot)com(dot)au> writes:
>> No ... portals are a backend concept ...
> Since when?
> According to the old doco you do...
> select portal XX * from table_name where ...;
> fetch 20 into XX.
That still works if you spell it in the SQL-approved way,
DECLARE CURSOR followed by FETCH.
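
(For illustration, the standard spelling of the old portal syntax quoted above might look like this; the cursor and table names are placeholders, and the WHERE clause is elided as in the original:)

```sql
BEGIN;
DECLARE xx CURSOR FOR SELECT * FROM table_name;
FETCH 20 FROM xx;   -- next 20 rows of the result
CLOSE xx;
COMMIT;
```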
> If the PQexec() is called with "fetch 20" at a time
> wouldn't this mean that you wouldn't exhaust front-end
> memory with a big query?
Sure, and that's how you work around the problem. Nonetheless
this requires the user to structure his queries to avoid sucking
up a lot of data in a single query. If the user doesn't have any
particular reason to need random access into a query result, it'd
be nicer to be able to read the result in a streaming fashion
without buffering it anywhere *or* making arbitrary divisions in it.
In any case, psql doesn't (and IMHO shouldn't) convert a SELECT
into a series of FETCHes for you.
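
(A sketch of that workaround, with hypothetical names; each statement would be one PQexec() call, and the client repeats the FETCH until it returns zero rows, so at most 20 rows are buffered client-side at a time:)

```sql
BEGIN;
DECLARE big_cur CURSOR FOR SELECT * FROM big_table;
FETCH 20 FROM big_cur;   -- repeat this statement until no rows come back
CLOSE big_cur;
COMMIT;
```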
regards, tom lane