From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Steven Lembark <lembark(at)wrkhors(dot)com>, "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Move vs. copy table between databases that share a tablespace?
Date: 2019-04-03 15:12:46
Message-ID: 20190403151246.GA10916@alvherre.pgsql
Lists: pgsql-general
On 2019-Apr-03, Tom Lane wrote:
> I wrote:
> > Steven Lembark <lembark(at)wrkhors(dot)com> writes:
> >> Given that the two databases live in the same cluster and have
> >> the owner & the tablespace in common, is there any way to move
> >> the contents without a dump & reload?
>
> > In principle you could do that; it's more or less the same thing that
> > pg_upgrade --link does.
>
> Actually, thinking about that a bit harder: there's one aspect of
> what pg_upgrade does that's really hard to control from userspace,
> and that's forcing tables to have the same OIDs as before. In this
> context, that means you're probably out of luck if the table has a
> TOAST table, unless the TOAST table is empty. There wouldn't be
> any good way to keep TOAST pointers valid across the move.
Hmm, couldn't you use the binary-upgrade support functions just prior to
creating the table in the target database? If the OID is not already in
use in the target database, it should be possible to reserve it just as
pg_upgrade does. This is a one-shot for a single table, so there's no
need to automate it or build anything too sophisticated.
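Roughly, something like this (untested sketch, with placeholder OIDs and
a placeholder table definition; IIRC these pg_catalog functions can only
be called while the server is started in binary-upgrade mode, and the
exact call sequence for a given table can be cribbed from
pg_dump --binary-upgrade --schema-only output):

  -- In the source database, note the OIDs that must survive the move
  -- ('my_table' is a placeholder name):
  SELECT c.oid AS heap_oid, c.reltoastrelid AS toast_oid
    FROM pg_class c
   WHERE c.relname = 'my_table';

  -- In the target database (server running in binary-upgrade mode),
  -- pin the OIDs before recreating the table with the same definition,
  -- the way pg_dump --binary-upgrade output does.  The pg_type and
  -- TOAST-index OIDs need their own binary_upgrade_set_next_*_oid
  -- calls as well; the values below are placeholders taken from the
  -- query above.
  SELECT pg_catalog.binary_upgrade_set_next_heap_pg_class_oid('16384'::pg_catalog.oid);
  SELECT pg_catalog.binary_upgrade_set_next_toast_pg_class_oid('16387'::pg_catalog.oid);

  CREATE TABLE my_table (id integer, payload text);  -- placeholder definition

With the OIDs matching, the TOAST pointers in the moved data files would
still point at the right toast relation.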
--
Álvaro Herrera https://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services