From: | Francisco Reyes <lists(at)stringsutils(dot)com> |
---|---|
To: | PostgreSQL general <pgsql-general(at)postgresql(dot)org> |
Subject: | pg_dump vs schemas |
Date: | 2007-07-14 00:10:33 |
Message-ID: | cone.1184371833.324427.28091.5001@35st.simplicato.com |
Lists: | pgsql-general |
pg_dump by default puts at the top
SET search_path = public,pg_catalog;
That is with a plain-vanilla setup where no schemas other than public
have been created.
However, I noticed that pg_dump also does this:
ALTER TABLE public.mytable OWNER TO pgsql;
Shouldn't the "public." be left out?
I verified that even if multiple tables with the same name exist in
different schemas, only the table in the first schema listed in the search
path will be dropped.
By the same token shouldn't all references to schemas be left out?
If there are reasons why the schema must be referenced, perhaps pg_dump
could gain a parameter to omit it.
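Lacking such a flag, one workaround I can imagine is post-processing the
dump text. A minimal sketch, assuming the literal string "public." does
not also appear inside data rows (a naive substitution like this would
rewrite those too):

```shell
# Sample line as pg_dump emits it (taken from the message above)
printf 'ALTER TABLE public.mytable OWNER TO pgsql;\n' > dump_sample.sql

# Strip the schema qualifier so the restore resolves objects via
# search_path instead of an explicit schema
sed 's/public\.//g' dump_sample.sql
# -> ALTER TABLE mytable OWNER TO pgsql;
```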
The rationale is to be able to easily relocate schemas on the target
restore, especially when dumping an entire database.
Alternatively, is there any easy way to take all the data in one schema
and load it into a different schema in a target DB?
The default output produced by pg_dump would be a problem because of the
"schema." references.
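For what it's worth, here is a sketch of the kind of text rewrite I mean,
retargeting the dump to a hypothetical schema named "app" (the name is
invented for illustration; the same data-row caveat as above applies):

```shell
# Two representative lines from a pg_dump output file
printf '%s\n' \
  'SET search_path = public, pg_catalog;' \
  'ALTER TABLE public.mytable OWNER TO pgsql;' > dump_sample.sql

# First fix the search_path line, then rewrite qualified references
sed -e 's/= public/= app/' -e 's/public\./app./g' dump_sample.sql
# -> SET search_path = app, pg_catalog;
# -> ALTER TABLE app.mytable OWNER TO pgsql;
```

If the target database has no conflicting objects, it may also be possible
to skip the text munging entirely: restore into public as-is, then rename
the schema on the target with ALTER SCHEMA public RENAME TO app (and
recreate an empty public schema afterwards if anything still expects it).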
As for why I am doing this schema move: from what I can tell it may be
best to have tsearch in its own schema, so I either move tsearch out of
public or move my data out of public. Since public is the schema that
tsearch and similar utilities target, I figure it may be easier to move
the data out.
I am currently trying a small data set to see how this works and whether
it is better to move the data or tsearch out of public.