Re: Correct use of cursors for very large result sets in Postgres

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Mike Beaton <mjsbeaton(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Correct use of cursors for very large result sets in Postgres
Date: 2017-02-18 17:43:49
Message-ID: 26739.1487439829@sss.pgh.pa.us
Lists: pgsql-performance

Mike Beaton <mjsbeaton(at)gmail(dot)com> writes:
> One outstanding question I have. Based on a lot of helpful responses given
> to the SO question I can now test and see what disk buffers are generated
> (by setting `log_temp_files` to `0` and then `tail -f log`), as well as how
> long it takes for results to start arriving.

> With a large (10,000,000 row) test table, if I do `SELECT * FROM table` on
> psql it starts to return results immediately with no disk buffer. If I do
> `FETCH ALL FROM cursortotable` on psql it takes about 7.5 seconds to start
> returning results, and generates a 14MB buffer. If I do `SELECT * FROM
> table` on a correctly coded streaming client, it also starts to return
> results immediately with no disk buffer. But if I do `FETCH ALL FROM
> cursortotable` from my streaming client, it takes about 1.5 seconds for
results to start coming... but again with no disk buffer, as hoped.

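[Editor's note: the test described above can be reproduced in a psql session along the following lines. The table name `bigtable` is a placeholder, and the cursor definition is assumed to match the post's `SELECT * FROM table`; setting `log_temp_files` to `0` makes the server log every temporary file it creates, whatever its size.]

```sql
-- Log every temporary file this backend spills to disk
SET log_temp_files = 0;

BEGIN;
-- Hypothetical cursor over the full table, as in the post
DECLARE cursortotable CURSOR FOR SELECT * FROM bigtable;
-- Fetch everything through the cursor while watching the server log
-- (e.g. tail -f on the current logfile) for "temporary file" entries
FETCH ALL FROM cursortotable;
COMMIT;
```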
Seems odd. Is your cursor just on "SELECT * FROM table", or is there
some processing in there you're not mentioning? Maybe it's a cursor
WITH HOLD and you're exiting the source transaction?

regards, tom lane
