From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Mike Beaton <mjsbeaton(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Correct use of cursors for very large result sets in Postgres
Date: 2017-02-21 13:32:09
Message-ID: 17679.1487683929@sss.pgh.pa.us
Lists: pgsql-performance
Mike Beaton <mjsbeaton(at)gmail(dot)com> writes:
> New TL;DR (I'm afraid): PostgreSQL is always generating a huge buffer file
> on `FETCH ALL FROM CursorToHuge`.
I poked into this and determined that it's happening because pquery.c
executes FETCH statements the same as it does with any other
tuple-returning utility statement, ie "run it to completion and put
the results in a tuplestore, then send the tuplestore contents to the
client". I think the main reason nobody worried about that being
non-optimal was that we weren't expecting people to FETCH very large
amounts of data in one go --- if you want the whole query result at
once, why are you bothering with a cursor?
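[Editor's note: the practical upshot of the above is to fetch in bounded chunks rather than FETCH ALL, so the server only ever materializes one chunk in the tuplestore at a time. A minimal sketch, with illustrative names (`curs_to_huge`, `huge_table`, and the batch size of 10000 are all hypothetical):]

```sql
BEGIN;

-- Cursor over a hypothetical large table.
DECLARE curs_to_huge NO SCROLL CURSOR FOR
    SELECT * FROM huge_table;

-- Each FETCH materializes at most 10000 rows server-side,
-- instead of buffering the entire result as FETCH ALL does.
FETCH FORWARD 10000 FROM curs_to_huge;
FETCH FORWARD 10000 FROM curs_to_huge;
-- ... repeat until a FETCH returns fewer rows than requested ...

CLOSE curs_to_huge;
COMMIT;
```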
This could probably be improved, but it would (I think) require inventing
an additional PortalStrategy specifically for FETCH, and writing
associated code paths in pquery.c. Don't know when/if someone might get
excited enough about it to do that.
regards, tom lane
Next Message: Mike Beaton, 2017-02-21 13:49:09, Re: Correct use of cursors for very large result sets in Postgres
Previous Message: Mike Beaton, 2017-02-21 12:36:36, Re: Correct use of cursors for very large result sets in Postgres