Re: Lazier alternative to row_to_json ?

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Florents Tselai <florents(dot)tselai(at)gmail(dot)com>
Cc: pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Lazier alternative to row_to_json ?
Date: 2021-11-01 14:05:38
Message-ID: 1131853.1635775538@sss.pgh.pa.us
Lists: pgsql-general

Florents Tselai <florents(dot)tselai(at)gmail(dot)com> writes:
> I have the following simple query
> select row_to_json(d) from documents d
> The output of this goes to a script that expects a newline-delimited stream of JSON objects.
> But as-is, it looks like the server’s memory fills up before it starts emitting results.

Usually, when we hear about query result size problems, they're on the
client side not the server side ... so are you sure about your diagnosis
above? The server flushes its results to the client after each row,
but common client libraries (particularly libpq) try to accumulate the
whole query result before returning it to the application.

If the problem is indeed inside libpq, you could alleviate it by using
single-row mode [1], assuming you're using reasonably late-model
libpq. Another solution could be to use COPY, which again provides
more stream-based access to the output; though you'd have to provide
a layer of dequoting and line-separation logic.
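For illustration, here is a minimal sketch of single-row mode (assuming a
"documents" table and connection parameters taken from the environment;
error handling trimmed for brevity):

/* build (typical): cc stream_json.c -lpq */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("");     /* settings come from the environment */

    if (PQstatus(conn) != CONNECTION_OK)
    {
        fprintf(stderr, "connection failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* Send the query without waiting for the whole result ... */
    if (!PQsendQuery(conn, "select row_to_json(d) from documents d"))
    {
        fprintf(stderr, "send failed: %s", PQerrorMessage(conn));
        return 1;
    }

    /* ... and ask libpq to hand rows back one at a time. Must be called
       right after PQsendQuery, before the first PQgetResult. */
    if (!PQsetSingleRowMode(conn))
        fprintf(stderr, "could not enter single-row mode\n");

    PGresult *res;
    while ((res = PQgetResult(conn)) != NULL)
    {
        if (PQresultStatus(res) == PGRES_SINGLE_TUPLE)
            printf("%s\n", PQgetvalue(res, 0, 0));   /* one JSON object per line */
        else if (PQresultStatus(res) != PGRES_TUPLES_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
    }

    PQfinish(conn);
    return 0;
}

Each PGRES_SINGLE_TUPLE result holds just one row, so the client's memory
use stays bounded no matter how large the table is.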

regards, tom lane

[1] https://www.postgresql.org/docs/current/libpq-single-row-mode.html
