From: Philip Warner <pjw(at)rhyme(dot)com(dot)au>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)postgresql(dot)org, dyp(at)perchine(dot)com
Subject: Re: Re: Problem with disabling triggers in pg_dump
Date: 2000-07-26 00:43:43
Message-ID: 3.0.5.32.20000726104343.025f2a40@mail.rhyme.com.au
Lists: pgsql-hackers
At 12:23 25/07/00 -0400, Tom Lane wrote:
>Philip Warner <pjw(at)rhyme(dot)com(dot)au> writes:
>>> (Compare the behavior of "tar x" when run as superuser or not.)
>>> This has not worked in the past (because those \connect commands can't
>>> be ignored),
>
>> I don't follow...do you mean that psql barfs, or do you mean that there has
>> been no way to prevent them being output?
>
>No, I mean that if they're in the script there is no way to have them
>be ignored. There is a way to prevent them from being output (-z switch
>to pg_dump, or some such) but *that puts control at the wrong end of the
>process*. When you make a tarfile you don't have to specify whether it
>will be restored with the same file ownerships or not; you determine
>that when you do the restore. pg_dump scripts need the same flexibility.
OK, so adding '--no-owner, -O(?)' on pg_restore is probably worth doing.
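For illustration, the ownership problem comes down to the '\connect - <user>'
lines that current dump scripts carry. The fragment below is only a rough
sketch (the user and table names are invented), but it is the sort of thing a
restore-side --no-owner would skip, giving the "tar x" behaviour:

    \connect - alice
    CREATE TABLE "customers" ("id" int4, "name" text);
    \connect - bob
    CREATE TABLE "orders" ("id" int4, "customer_id" int4);

With --no-owner at restore time, the \connect lines are simply ignored and
everything is created as whoever runs the restore.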
>An additional reason for getting rid of the \connect's is performance;
>each one causes a fresh backend startup. You might also care to
>contemplate the implications for per-session variables like the "disable
>triggers" switch you were suggesting.
Yes, good point; a per-session setting is probably better than updating
reltriggers, but I'd argue that a restore operation (even a partial one)
should not be done while the DB is online anyway.
>>> The thing that really bothers me about this reltriggers hack is that
>>> it doesn't work if the script is being run as non-superuser. I don't
>>> see why it's necessary anyway; shouldn't the order of operations be
>>> create table;
>>> load data;
>>> create triggers and indexes;
>
>> Yes, for a full restore that is true. But for a partial restore (where I
>> have assumed triggers may exist), it's nice to disable the triggers...
>
>Why is that nice? If you are loading more data into an existing table
>structure, seems to me the *last* thing you'd want is to disable your
>consistency checks. Too risky --- certainly not something I want
>pg_dump doing automatically without my knowledge or consent.
But, but, but...that's the default behaviour in 7.0.2. I can probably
safely change the default behaviour in pg_restore, but are you also
suggesting I do it for pg_dump as well?
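For reference, the 7.0.2 behaviour I mean is roughly the following, wrapped
around each table's data (this is from memory, so treat it as a sketch rather
than the exact output; the table name is invented):

    UPDATE "pg_class" SET "reltriggers" = 0 WHERE "relname" = 'customers';
    COPY "customers" FROM stdin;
    \.
    UPDATE "pg_class" SET "reltriggers" = (
        SELECT count(*) FROM "pg_trigger" WHERE "pg_class"."oid" = "tgrelid"
    ) WHERE "relname" = 'customers';

Both UPDATEs touch a system catalog, which is exactly why the script falls
over when run by a non-superuser.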
>I've forgotten the exact details of the scenario that forced us to put
>in the reltriggers hack in the first place, but it may well be that
>the problem has a better solution now that pg_dump is smart enough to
>reorder its output. I can see from the CVS logs that Jan put the
>trigger change into pg_dump on 7-Feb-2000, but I didn't have much luck
>finding any related discussion in the mail archives. Jan, do you recall
>the reasoning for it?
From Jan's subsequent email, this is because of RI triggers.
How does this sound as a plan:
1. Add --no-owner to pg_restore & pg_dump (both versions)
This will use the current uid to build the entire database, and will affect
pg_restore as well as plain text output from pg_dump.
2. Add --superuser[=<name>], -S to pg_restore & pg_dump (both versions)
If restoring directly to a db, pg_restore will check whether the current
uid has superuser rights, and if so, will run as though --superuser=<current-uid>
were specified (unless another name was already specified).
Without --superuser it will NOT DISABLE TRIGGERS, and will issue a warning
to that effect (since it can't do so anyway).
With --superuser specified (or defaulted), it will issue '\connect -
<superuser>' before disabling triggers, then reconnect as the previous user
afterwards (see the sketch below).
If and when 'SET SESSION CONSTRAINTS DISABLED' & 'SET SESSION TRIGGERS
DISABLED' come along, the code for disabling triggers can be modified
appropriately.
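In script terms, point 2 amounts to something like the sketch below. The
names 'postgres', 'alice' and 'customers' are purely illustrative, and the
trigger disabling shown is today's reltriggers hack, to be swapped for the
SET SESSION form if that ever arrives:

    \connect - postgres
    -- disable triggers on the target table
    UPDATE "pg_class" SET "reltriggers" = 0 WHERE "relname" = 'customers';
    \connect - alice
    COPY "customers" FROM stdin;
    \.
    \connect - postgres
    -- restore the trigger count
    UPDATE "pg_class" SET "reltriggers" = (
        SELECT count(*) FROM "pg_trigger" WHERE "pg_class"."oid" = "tgrelid"
    ) WHERE "relname" = 'customers';
    \connect - alice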
I'm not too worried about the cost of reconnecting; it is nowhere near as
great as the cost of actually restoring the data for any big DB, I think.
Does this sound like a reasonable approach?
----------------------------------------------------------------
Philip Warner | __---_____
Albatross Consulting Pty. Ltd. |----/ - \
(A.C.N. 008 659 498) | /(@) ______---_
Tel: (+61) 0500 83 82 81 | _________ \
Fax: (+61) 0500 83 82 82 | ___________ |
Http://www.rhyme.com.au | / \|
| --________--
PGP key available upon request, | /
and from pgp5.ai.mit.edu:11371 |/