From: | Joel Jacobson <joel(at)trustly(dot)com> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: Schema version management |
Date: | 2012-05-23 05:49:44 |
Message-ID: | CAASwCXdE+Oe_eTUaGUKm1OBYxZ_cwkFqeBrUaLrSC4Zh_KYLrQ@mail.gmail.com |
Lists: | pgsql-hackers |
On the topic of fixing pg_dump to dump in a predictable order, can
someone please update me on the current state of the problem?
I've read through pg_dump_sort.c and note that objects are first
sorted in a type/name-based ordering, then topologically sorted in a
way that "minimizes unnecessary rearrangement".
Why doesn't this always generate a predictable order? Any ideas on
how to fix the problem? If someone gives me a hint, I might make an
effort to implement the idea.
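To make the idea concrete, here is a minimal sketch (an illustration in Python, not pg_dump_sort.c itself, and the function name is my own) of how a fully predictable order can be obtained: sort the objects by (type, name) first, then run Kahn's topological sort with a min-heap keyed by that pre-sorted position, so that among all objects whose dependencies are already emitted, the earliest one in the type/name ordering always comes next. For a given dependency graph this yields one unique output order.

```python
import heapq

def predictable_order(objects, deps):
    """Return a deterministic, dependency-respecting ordering.

    objects: list of (type, name) tuples.
    deps: dict mapping an object to the set of objects it depends on.
    (Hypothetical helper for illustration only.)
    """
    rank = sorted(objects)                       # type/name-based ordering
    index = {obj: i for i, obj in enumerate(rank)}
    remaining = {obj: len(deps.get(obj, ())) for obj in objects}
    dependents = {obj: [] for obj in objects}
    for obj, ds in deps.items():
        for d in ds:
            dependents[d].append(obj)
    # Seed the heap with objects that have no unmet dependencies.
    heap = [index[o] for o, n in remaining.items() if n == 0]
    heapq.heapify(heap)
    result = []
    while heap:
        obj = rank[heapq.heappop(heap)]          # smallest eligible object
        result.append(obj)
        for dep in dependents[obj]:
            remaining[dep] -= 1
            if remaining[dep] == 0:
                heapq.heappush(heap, index[dep])
    return result
```

The key point is the heap: a plain FIFO queue in Kahn's algorithm leaves the relative order of simultaneously-ready objects unspecified, which is one way an otherwise-sorted dump can still come out in different orders.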
If pg_dump dumped in a predictable order, it would make sense to dump
all overloaded versions of functions sharing the same name in the same
file. It would then be _guaranteed_ that two different databases
committing their schemas to a shared VCS produce exactly the same
files whenever the schemas are the same, which cannot be guaranteed
unless the dump order is predictable.
Having thought about it, I agree the idea of putting arguments in
filenames is probably workable, but suboptimal. It would be much
better to write all overloaded functions to the same file and fix the
predictable dump order problem.