From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Bruce Momjian <bruce(at)momjian(dot)us>, Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: pg_restore -j <nothing>
Date: 2009-04-23 01:27:38
Message-ID: 20090423012738.GJ8123@tamriel.snowman.net
Lists: pgsql-hackers
* Tom Lane (tgl(at)sss(dot)pgh(dot)pa(dot)us) wrote:
> Yeah. Even if Make has a sane way to estimate how many jobs it should
> use, I'm not sure that pg_restore does. (The most obvious heuristic
> for Make is to try to find out how many CPUs there are --- but at
> least it's running on the same machine it's going to be eating CPU
> on. pg_restore can't assume that.)
I'm not sure I'd consider it 'sane', but make basically uses the
dependency information: if a job can be run based on its dependency
requirements, it's started, with no cap on how many run at once. For
small projects this isn't necessarily terrible, but it's not something
I would generally recommend.
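To make that concrete, here's a rough sketch of that scheduling rule
in C (a hypothetical job table, not make's actual code): each pass
starts every job whose prerequisites have finished, with no limit on
how many start together.

    #include <stdio.h>
    #include <stdbool.h>

    /* Hypothetical job table: each job lists which jobs must finish first. */
    typedef struct
    {
        const char *name;
        int deps[4];    /* indexes of prerequisite jobs, -1 terminated */
        bool done;
    } Job;

    static Job jobs[] = {
        {"parse.o", {-1}, false},
        {"scan.o",  {-1}, false},
        {"link",    {0, 1, -1}, false},
    };
    #define NJOBS ((int) (sizeof(jobs) / sizeof(jobs[0])))

    static bool
    runnable(const Job *j)
    {
        for (int i = 0; j->deps[i] != -1; i++)
            if (!jobs[j->deps[i]].done)
                return false;
        return !j->done;
    }

    int
    main(void)
    {
        int wave = 0;
        int remaining = NJOBS;

        while (remaining > 0)
        {
            bool started[NJOBS] = {false};

            printf("wave %d:", ++wave);
            /* 'make -j' with no argument: start *every* runnable job at once */
            for (int i = 0; i < NJOBS; i++)
                if (runnable(&jobs[i]))
                {
                    started[i] = true;
                    printf(" %s", jobs[i].name);
                }
            printf("\n");
            /* mark finished only after the pass, so jobs started in the
             * same wave don't unlock their dependents mid-wave */
            for (int i = 0; i < NJOBS; i++)
                if (started[i])
                {
                    jobs[i].done = true;
                    remaining--;
                }
        }
        return 0;
    }

With the toy table above, parse.o and scan.o start together in the
first wave and link in the second; with hundreds of independent
targets, that first wave is hundreds of simultaneous jobs, which is
why I wouldn't recommend it.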
I don't see any reasonable implementation of, or justification for,
something like that in pg_restore.
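(For what it's worth, the "count the local CPUs" heuristic Tom alludes
to is easy enough on the machine you're running on; a rough sketch,
assuming the common glibc/BSD sysconf() extension:

    #include <stdio.h>
    #include <unistd.h>

    int
    main(void)
    {
        /* _SC_NPROCESSORS_ONLN is a widespread glibc/BSD extension,
         * not strictly POSIX -- hence "assuming". */
        long ncpu = sysconf(_SC_NPROCESSORS_ONLN);

        if (ncpu < 1)
            ncpu = 1;       /* fall back to serial on error */
        printf("make could default to -j %ld\n", ncpu);
        return 0;
    }

The catch is exactly the one Tom raises: pg_restore would be counting
the client's CPUs, while the server is the one doing the work, so even
this simple default doesn't carry over.)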
Thanks,
Stephen