From: PG Bug reporting form <noreply(at)postgresql(dot)org>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Cc: knuazev(at)gmail(dot)com
Subject: BUG #18775: PQgetCopyData always has an out-of-memory error if the table field stores bytea ~700 MB
Date: 2025-01-16 09:29:17
Message-ID: 18775-e6af405ca08393cc@postgresql.org
Lists: pgsql-bugs
The following bug has been logged on the website:
Bug reference: 18775
Logged by: Ilya
Email address: knuazev(at)gmail(dot)com
PostgreSQL version: 16.0
Operating system: Windows 10
Description:
Hello, I need your help. The problem is that I am trying to retrieve a bytea
field from a table row. The data in the field is about 700 MB, and in
response I receive an out-of-memory error.
More details:
1) I execute the query:
COPY (SELECT %s FROM %s.%s WHERE %s=%s) TO STDOUT BINARY
2) I call the PQgetCopyData function.
This function works with the pg_conn.inBuffer buffer. I did not go into
detail on how memory is allocated for it, but in my case about 300 MB is
allocated, and the data from the server accumulates successfully in this
buffer.
When the buffer is almost full, the function
int pqGetCopyData3(PGconn *conn, char **buffer, int async)
tries to allocate a second buffer almost as large as pg_conn.inBuffer in
order to hand it to the caller:
*buffer = (char *) malloc(msgLength + 1); (fe-protocol3.c, line 1767)
In my case this malloc consistently fails: it returns NULL, and the library
reports an "out of memory" error.
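For reference, the read loop that hits this failure can be sketched as
follows. The connection string, schema, table, and column names are
placeholders, not taken from the report:

```c
/* Minimal sketch of the COPY ... TO STDOUT BINARY read loop that triggers
 * the failure.  DSN and identifiers are placeholders; compile against
 * libpq (-lpq). */
#include <stdio.h>
#include <libpq-fe.h>

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    PGresult *res = PQexec(conn,
        "COPY (SELECT payload FROM public.blobs WHERE id = 1) "
        "TO STDOUT BINARY");                     /* placeholder identifiers */
    if (PQresultStatus(res) != PGRES_COPY_OUT)
        return 1;
    PQclear(res);

    char *buf;
    int   n;
    /* For a single ~700 MB bytea value the whole CopyData message first
     * accumulates in conn->inBuffer; PQgetCopyData then attempts
     * malloc(msgLength + 1) for the copy returned here, and that second
     * allocation is what fails. */
    while ((n = PQgetCopyData(conn, &buf, 0)) > 0)
    {
        fwrite(buf, 1, (size_t) n, stdout);
        PQfreemem(buf);                          /* free each returned row */
    }
    if (n == -2)
        fprintf(stderr, "%s", PQerrorMessage(conn));  /* "out of memory" */

    PQclear(PQgetResult(conn));                  /* consume final result */
    PQfinish(conn);
    return 0;
}
```

The loop itself is the documented usage pattern; the failure is in the
internal allocation, not in the caller's code.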
P.S.
It seems to me that the only way to make PQgetCopyData work correctly
without changing the library interface is to somehow keep the
pg_conn.inBuffer buffer small.
Ideally, it would be good to add a PQgetCopyData2 function that accepts a
caller-supplied buffer, fills it with data, and returns the number of bytes
written.
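Such an interface might look like the sketch below. This is only my
suggestion of what the signature could be; PQgetCopyData2 does not exist in
libpq, and the chunk size and return conventions are assumptions:

```c
/* Hypothetical API sketch -- NOT part of libpq.  The caller supplies the
 * buffer, so the library never has to malloc a second copy of a huge
 * CopyData message; it drains conn->inBuffer into the caller's memory in
 * bufsize-sized pieces.
 *
 * Suggested return values (mirroring PQgetCopyData): number of bytes
 * written (> 0); 0 if no data is available yet in async mode; -1 when the
 * COPY is finished; -2 on error. */
int PQgetCopyData2(PGconn *conn, char *buffer, int bufsize, int async);

/* Sketch of how a caller would use it, with a fixed 64 KB chunk: */
void drain_copy(PGconn *conn, FILE *out)
{
    char chunk[65536];
    int  n;
    while ((n = PQgetCopyData2(conn, chunk, (int) sizeof chunk, 0)) > 0)
        fwrite(chunk, 1, (size_t) n, out);  /* constant memory use */
}
```

With a caller-supplied buffer the peak allocation is bounded by bufsize
rather than by the size of the largest row.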
I also tried to work around the problem with the deprecated PQgetline
function, but it does not handle binary data. And if I issue the query COPY
(SELECT %s FROM %s.%s WHERE %s=%s) TO STDOUT (text format), an error occurs
while executing it, perhaps because the library tries to read everything at
once into pg_conn.inBuffer, which is too small for that.
And yes, I know that for storing large data you have the large object
facility. But my method should also work for files up to 1 GB, which is
within the bytea size limit.
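For comparison, the large object facility already supports chunked reads
today. A minimal sketch, assuming a placeholder connection string and a
placeholder large-object OID:

```c
/* Streaming a large object in fixed-size chunks with libpq's large object
 * API.  Unlike the COPY path above, lo_read never needs a buffer sized to
 * the whole value.  DSN and OID are placeholders; compile against libpq
 * (-lpq). */
#include <stdio.h>
#include <libpq-fe.h>
#include <libpq/libpq-fs.h>   /* INV_READ */

int main(void)
{
    PGconn *conn = PQconnectdb("dbname=test");   /* placeholder DSN */
    if (PQstatus(conn) != CONNECTION_OK)
        return 1;

    /* Large object operations must run inside a transaction. */
    PQclear(PQexec(conn, "BEGIN"));

    int fd = lo_open(conn, 16385 /* placeholder OID */, INV_READ);
    if (fd >= 0)
    {
        char buf[65536];
        int  n;
        while ((n = lo_read(conn, fd, buf, sizeof buf)) > 0)
            fwrite(buf, 1, (size_t) n, stdout);  /* 64 KB at a time */
        lo_close(conn, fd);
    }

    PQclear(PQexec(conn, "COMMIT"));
    PQfinish(conn);
    return 0;
}
```

This works in constant memory, but of course requires the data to be stored
as a large object rather than in a bytea column.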