From: | "Andrea Urbani" <matfanjol(at)mail(dot)com> |
---|---|
To: | "Robert Haas" <robertmhaas(at)gmail(dot)com> |
Cc: | "Stephen Frost" <sfrost(at)snowman(dot)net>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: [ patch ] pg_dump: new --custom-fetch-table and --custom-fetch-value parameters |
Date: | 2017-02-11 14:56:19 |
Message-ID: | trinity-b16e6d2c-09bf-4512-b5c8-9b4fd81268cb-1486824979231@3capp-mailcom-lxa05 |
Lists: | pgsql-hackers |
I'm a beginner here... anyway, I'll try to share my ideas.
My situation has gotten worse: I am no longer able to run pg_dump, either with my custom fetch value (I tried "1" as the value, i.e. one row at a time) or without "--column-inserts":
pg_dump: Dumping the contents of table "tDocumentsFiles" failed: PQgetResult() failed.
pg_dump: Error message from server: ERROR: out of memory
DETAIL: Failed on request of size 1073741823.
pg_dump: The command was: COPY public."tDocumentsFiles" ("ID_Document", "ID_File", "Name", "FileName", "Link", "Note", "Picture", "Content", "FileSize", "FileDateTime", "DrugBox", "DrugPicture", "DrugInstructions") TO stdout;
I don't know whether Kyotaro Horiguchi's patch will solve this because, again, I'm not able to fetch even a single row.
I have a similar problem trying to read and write the blob (bytea) fields from my own program.
At the moment I work in pieces:
Read
r1) I get the length of the blob field
r2) I check the available free memory (on the client PC)
r3) I read pieces of the blob field, sized according to the free memory, appending them to a file on disk
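The read loop above can be sketched in Python. This is a minimal illustration, not the poster's actual program: the table and column names are taken from the COPY error message, while the chunk-size helper, document/file IDs, and psycopg2 usage are assumptions. The core idea is that SQL `substring()` on bytea lets the client pull slices, so the full value never has to fit in memory at once:

```python
def chunk_ranges(total_len, chunk_len):
    """Yield 1-based (offset, length) pairs covering a value of total_len bytes.

    SQL substring(bytea FROM off FOR len) uses 1-based offsets, so we start at 1.
    """
    offset = 1
    while offset <= total_len:
        length = min(chunk_len, total_len - offset + 1)
        yield offset, length
        offset += length

# Hypothetical client loop (psycopg2 assumed; free_client_memory_bytes()
# is a stand-in for whatever free-RAM check the client performs):
#
#   cur.execute('SELECT octet_length("Content") FROM public."tDocumentsFiles" '
#               'WHERE "ID_Document" = %s AND "ID_File" = %s', (doc_id, file_id))
#   total = cur.fetchone()[0]                  # step r1: length of the blob
#   chunk = free_client_memory_bytes() // 2    # step r2: chunk sized from free RAM
#   with open(out_path, 'ab') as f:            # step r3: append pieces to a file
#       for off, length in chunk_ranges(total, chunk):
#           cur.execute('SELECT substring("Content" FROM %s FOR %s) '
#                       'FROM public."tDocumentsFiles" '
#                       'WHERE "ID_Document" = %s AND "ID_File" = %s',
#                       (off, length, doc_id, file_id))
#           f.write(cur.fetchone()[0])
```

Halving the free memory for the chunk size leaves headroom for the driver's own copy of each result row.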
Write
w1) I check the length of the file to be saved inside the blob
w2) I check the available free memory (on the client PC)
w3) I create a temporary table on the server
w4) I add rows to this temporary table, writing pieces of the file sized according to the free memory
w5) I ask the server to write, into the final blob field, the concatenation of the rows of the temporary table
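The write side can be sketched the same way. Again this is only an illustration of the steps above, assuming psycopg2 on the client; the staging-table name and key columns are made up. The Python part splits the file into pieces (w4), and the commented SQL shows one way the server could reassemble them in order with `string_agg` over bytea (w5):

```python
def split_bytes(data, piece_len):
    """Split data into consecutive pieces of at most piece_len bytes (step w4)."""
    if piece_len <= 0:
        raise ValueError("piece_len must be positive")
    return [data[i:i + piece_len] for i in range(0, len(data), piece_len)]

# Hypothetical server-side flow (identifiers assumed):
#
#   -- w3: temporary staging table
#   CREATE TEMP TABLE blob_pieces (seq int PRIMARY KEY, piece bytea);
#
#   -- w4: one INSERT per piece from the client loop:
#   --     INSERT INTO blob_pieces VALUES (%s, %s)
#
#   -- w5: concatenate the pieces, in order, into the final column
#   UPDATE public."tDocumentsFiles"
#      SET "Content" = (SELECT string_agg(piece, ''::bytea ORDER BY seq)
#                         FROM blob_pieces)
#    WHERE "ID_Document" = %s AND "ID_File" = %s;
```

Note that step w5 still materializes the full value on the server, so this avoids the client-side memory problem but not a server-side one.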
Both reading and writing work now.
The free memory check should probably be done on both sides (client and server; does a function or view exposing the available free memory exist?), taking the smaller of the two values.
What do you think about using a similar approach in pg_dump?
a) go through the table, getting the size of each row / field
b) when the size of the row or of the field is bigger than the limit (provided or stored somewhere), read the field in pieces until the end
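The a/b proposal above amounts to a small planning step before fetching. A minimal sketch, with an assumed threshold parameter playing the role of the "provided or stored" limit: step a collects per-field sizes (e.g. via `octet_length`), and this function decides for each one whether a whole-value fetch is safe or chunked reads are needed (step b):

```python
def fetch_plan(field_sizes, threshold):
    """Decide, per field, between a whole-value fetch and a chunked fetch.

    field_sizes: sizes in bytes, as step (a) would collect them,
                 e.g. via: SELECT octet_length("Content"), octet_length("Picture"), ...
                             FROM public."tDocumentsFiles";
    threshold:   largest value safe to fetch in one piece (an assumed parameter,
                 akin to the proposed --custom-fetch-value).
    """
    plan = []
    for size in field_sizes:
        if size > threshold:
            plan.append(('chunked', size))  # step b: read this field in pieces
        else:
            plan.append(('whole', size))    # small enough to fetch in one go
    return plan
```

pg_dump itself is written in C, so this is only a shape-of-the-logic sketch, not a patch proposal.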
PS: I have seen that there are "large objects", which can be accessed via streams. My files are currently not bigger than 1 GB, but maybe in the future I will use those instead of blobs.
Thank you
Andrea