From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Ilya Knyazev <knuazev(at)gmail(dot)com>
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: BUG #18775: PQgetCopyData always has an out-of-memory error if the table field stores bytea ~700 MB
Date: 2025-01-16 19:18:06
Message-ID: 1849376.1737055086@sss.pgh.pa.us
Lists: pgsql-bugs
Ilya Knyazev <knuazev(at)gmail(dot)com> writes:
> But I know that there may not be enough memory, so I use the "copy" keyword
> in the query and the PQgetCopyData function. I thought that this function
> was designed for portioned work. By analogy with the PQputCopyData
> function, which works fine.
Its documentation is fairly clear, I thought:
Attempts to obtain another row of data from the server during a
<command>COPY</command>. Data is always returned one data row at
a time; if only a partial row is available, it is not returned.
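For illustration, here is a minimal C sketch of a COPY OUT loop (table name and error handling are placeholders, not from this thread). It shows why that wording matters: each successful PQgetCopyData call hands back one complete row, so a row carrying a ~700 MB bytea has to be held in a single allocation.

    /* Minimal sketch of a COPY OUT loop; "big_table" is a placeholder and
     * the connection "conn" is assumed to be open already. */
    #include <stdio.h>
    #include <libpq-fe.h>

    static int
    copy_out(PGconn *conn)
    {
        PGresult *res = PQexec(conn, "COPY big_table TO STDOUT (FORMAT binary)");
        if (PQresultStatus(res) != PGRES_COPY_OUT)
        {
            fprintf(stderr, "COPY failed: %s", PQerrorMessage(conn));
            PQclear(res);
            return -1;
        }
        PQclear(res);

        char *buf;
        int   len;

        /* len > 0: one whole row; -1: COPY finished; -2: error
         * (this is where an over-large row surfaces as out of memory) */
        while ((len = PQgetCopyData(conn, &buf, /* async = */ 0)) > 0)
        {
            /* process len bytes of the row here ... */
            PQfreemem(buf);
        }
        if (len == -2)
        {
            fprintf(stderr, "PQgetCopyData failed: %s", PQerrorMessage(conn));
            return -1;
        }

        res = PQgetResult(conn);        /* collect the final COPY status */
        int ok = (PQresultStatus(res) == PGRES_COMMAND_OK) ? 0 : -1;
        PQclear(res);
        return ok;
    }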
If you need to work with data values that are large enough to risk
memory problems, I think "large objects" are the best answer. Their
interface is a bit clunky, but it's at least designed to let you
both read and write by chunks.
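For example, a rough sketch of chunked reading with the libpq large-object functions might look like this (the connection "conn" and the object OID "loid" are assumed to exist already; the 256 kB chunk size is arbitrary):

    /* Read a large object a chunk at a time; memory use stays bounded by
     * CHUNK_SIZE no matter how big the object is. */
    #include <stdio.h>
    #include <libpq-fe.h>
    #include <libpq/libpq-fs.h>     /* INV_READ */

    #define CHUNK_SIZE (256 * 1024)

    static int
    read_large_object(PGconn *conn, Oid loid)
    {
        /* Large-object access has to happen inside a transaction. */
        PQclear(PQexec(conn, "BEGIN"));

        int fd = lo_open(conn, loid, INV_READ);
        if (fd < 0)
        {
            fprintf(stderr, "lo_open failed: %s", PQerrorMessage(conn));
            PQclear(PQexec(conn, "ROLLBACK"));
            return -1;
        }

        char buf[CHUNK_SIZE];
        int  nread;

        while ((nread = lo_read(conn, fd, buf, sizeof(buf))) > 0)
        {
            /* consume nread bytes here (write to a file, hash, etc.) */
        }

        lo_close(conn, fd);
        PQclear(PQexec(conn, "COMMIT"));
        return (nread < 0) ? -1 : 0;
    }

Writing works the same way with lo_creat/lo_write, which is why the large-object interface avoids the single-row memory spike that COPY runs into here.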
regards, tom lane