From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Edson Richter <edsonrichter(at)hotmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Moving several databases into one database with several schemas
Date: 2012-09-07 15:41:26
Message-ID: CAHyXU0zM7tgRrX8xh+R2rzGD9mQgFWtsOLxju-SZ7U3O50HX5A@mail.gmail.com
Lists: pgsql-general
On Thu, Sep 6, 2012 at 10:12 PM, Edson Richter <edsonrichter(at)hotmail(dot)com> wrote:
> Em 06/09/2012 15:40, John R Pierce escreveu:
>
>> On 09/06/12 5:30 AM, Edson Richter wrote:
>>>>
>>>> You could change the default setting for the user with
>>>>
>>>> ALTER ROLE someuser SET search_path=...
>>>
>>> That is perfect! I can have separate users for each application, and then
>>> they will have the correct search path.
>>> You saved my day,
>>
>>
>> the default search_path is $USER,public, so by naming your schema's to the
>> usernames, you don't even need to alter role...
>>
> Wonderful, this would have the effect I expect that the connection defines
> the path. Then I'll use user to select the specific schema, and the "public"
> schema as the main schema.
>
> Thanks to you all, I think I have everything needed to put my migration
> project in practice.
I do this exact thing frequently. I route everything through dumps.
Here's a roughed-out bash script for ya. The basic MO is to hack the
restore script with sed so it restores into a scratch schema; that way
the drop/reload of the client's private schema can be deferred until
the data is already loaded.
function load_client {
    client=$1
    database=master_db

    echo "[`date`] Loading $client"

    psql -c "update client set load_started = now(), LoadedPO = NULL
        where name = '$client';" $database

    <get backup database and place into $client.current.gz>

    psql -c "drop schema if exists ${client}_work cascade" $database 2>&1 | grep ERROR
    psql -c "create schema ${client}_work" $database 2>&1 | grep ERROR

    # rewrite the dump's search_path so everything loads into the scratch schema
    gzip -cd $client.current.gz \
        | sed "s/^SET search_path = public/SET search_path = ${client}_work/" \
        | psql -XAq $database 2>&1 | grep ERROR | grep -v "plpgsql"

    # atomic swap: drop the live schema and rename the scratch schema into place
    psql -c "begin;
        drop schema if exists ${client} cascade;
        alter schema ${client}_work rename to $client;
        commit;" $database

    psql -c "update client set load_finished = now() where name = '$client';" $database

    rm -f $client.current.gz
}
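For the curious, here's what that sed step does to a plain-format pg_dump
file. A minimal illustration; the sample input line follows the search_path
header pg_dump emits, and `acme_work` stands in for `${client}_work`:

```shell
# pg_dump's plain-format output sets the search path once near the top;
# rewriting that one line redirects every restored object into the
# scratch schema instead of public.
echo 'SET search_path = public, pg_catalog;' \
    | sed 's/^SET search_path = public/SET search_path = acme_work/'
# -> SET search_path = acme_work, pg_catalog;
```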
To cut restore time down I run them in parallel:

NUM_FORKS=4
function do_parallel {
    while [ `jobs | wc -l` -ge $NUM_FORKS ]
    do
        sleep 1
    done
    "$@" &
}
Then it's just a matter of:
<get $clients somehow>
for client in $clients
do
    do_parallel load_client "$client"
done
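One thing I'd add (not in the original script): the loop only throttles job
*starts*, so a `wait` after it is needed to block until the last backgrounded
loads actually finish. A self-contained sketch of the whole pattern, with
`sleep` standing in for load_client:

```shell
#!/bin/bash
NUM_FORKS=2
function do_parallel {
    # block until a job slot frees up, then launch in the background
    while [ `jobs | wc -l` -ge $NUM_FORKS ]
    do
        sleep 1
    done
    "$@" &
}

for i in 1 2 3 4
do
    do_parallel sleep 1
done
wait    # without this the script could exit while jobs are still running
echo "all jobs finished"
```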
merlin