From: | "David G(dot) Johnston" <david(dot)g(dot)johnston(at)gmail(dot)com> |
---|---|
To: | Saladin <jiaoshuntian(at)highgo(dot)com> |
Cc: | pgsql-hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: postgres_fdw has insufficient support for large object |
Date: | 2022-05-23 05:41:34 |
Message-ID: | CAKFQuwZjG2d65Aw6bNJ9yEhdTx_y5qazxbiAW+_qNy3NFO-4jA@mail.gmail.com |
Lists: | pgsql-hackers |
On Sunday, May 22, 2022, Saladin <jiaoshuntian(at)highgo(dot)com> wrote:
>
> The output I expected:
> pg_largeobject_metadata and pg_largeobject in both database A and database
> B should have rows, not only in database A. Then I could use the large
> object functions to operate on large objects in the remote table or
> foreign table.
>
This is an off-topic email for the -hackers mailing list. -general is the
appropriate list.
Your expectation is simply unsupported by anything in the documentation.
If you want to do what you describe, you will need to use dblink (and the
file needs to be accessible to the remote server directly) and execute the
entire query on the remote server; the FDW infrastructure simply does not
work in the way you are expecting.
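As a rough sketch of the dblink approach (the connection string, database
name, and file path below are placeholders, not taken from your setup):

-- Run from database A; lo_import() executes on the server hosting B,
-- so the file must exist on that server's filesystem.
CREATE EXTENSION IF NOT EXISTS dblink;

SELECT remote_loid
FROM dblink('host=server_b dbname=B user=postgres',
            $$SELECT lo_import('/tmp/data.bin')$$)
     AS t(remote_loid oid);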
Or just use bytea.
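Something like this (sketch only; the foreign server, table, and column
names are made up), since postgres_fdw handles bytea columns like any
other column value:

CREATE FOREIGN TABLE remote_docs (
    id   integer,
    data bytea
) SERVER server_b OPTIONS (schema_name 'public', table_name 'docs');

-- The file is read on the local server and its bytes are shipped
-- to the remote table through postgres_fdw.
INSERT INTO remote_docs (id, data)
VALUES (1, pg_read_binary_file('/tmp/data.bin'));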
David J.