From: François Beausoleil <francois(at)teksol(dot)info>
To: David Salisbury <salisbury(at)globe(dot)gov>
Cc: Postgres List <pgsql-general(at)postgresql(dot)org>
Subject: Re: pg_dump -Fd must create directory
Date: 2012-09-14 03:09:07
Message-ID: FE0988F2-9D8A-42E1-A25B-624887EC3A0C@teksol.info
Lists: pgsql-general
On 2012-09-13, at 16:51, David Salisbury wrote:
>
> It looks to me like you're misusing git.
>
> You should only run git init once, and always use that directory.
> Then run pg_dump, which should create one file per database
> with the file name you've specified.
> Not sure of the flags, but I'd recommend plain text format.
>
> I'm also unsure what you mean by network traffic, as you don't
> mention a remote repository, but there are nice visual tools
> for you to see the changes to files between your committed
> objects. git init will more than likely lose all changes
> to files.
I was just running a test: looking at a way to transfer large amounts of data for backup purposes with a tool that's especially suited for deltas. I know about rsync, but this was a thought experiment. I was only surprised by pg_dump's restriction that the directory format must create a new directory every time; I was looking for the rationale.
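To make the restriction concrete, something like this (database name and path are made up):

    pg_dump -Fd -f /backups/mydb.dir mydb   # first run: pg_dump creates the directory
    pg_dump -Fd -f /backups/mydb.dir mydb   # second run: fails, the directory already exists

The second invocation refuses to run because the directory-format target must not exist beforehand; pg_dump insists on creating it itself.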
Also, git init is a safe operation: within an existing repository, git init reports that it reinitialized, but it does not lose files. I haven't tried with local changes or a dirty index, though.
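A quick sanity check in a throwaway repository (names arbitrary):

    git init demo && cd demo
    echo hello > file.txt
    git add file.txt && git commit -m "first"
    git init           # reports it reinitialized the existing repository
    git log --oneline  # history is intact
    git status         # working tree is untouched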
Finally, when NOT using the plain text format, pg_restore can restore more than one table at a time, using the --jobs flag. On a multi-core, multi-spindle machine, this can cut down the restore time tremendously.
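For example, restoring a directory-format dump with four parallel workers (paths, database name, and job count are illustrative):

    pg_dump -Fd -f /backups/mydb.dir mydb
    pg_restore --jobs=4 -d mydb_copy /backups/mydb.dir

The --jobs option only works with the custom and directory archive formats, which is one more reason to avoid plain text for large databases.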
Bye,
François