From: Peter Eisentraut <peter_e(at)gmx(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Alvaro Herrera <alvherre(at)commandprompt(dot)com>, Kevin Grittner <kevin(dot)grittner(at)wicourts(dot)gov>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: make world fails
Date: 2011-05-01 10:26:21
Message-ID: 1304245581.25776.8.camel@vanquo.pezone.net
Lists: pgsql-hackers
On tor, 2011-04-28 at 00:03 -0400, Tom Lane wrote:
> Peter Eisentraut <peter_e(at)gmx(dot)net> writes:
> > On Wed, 2011-04-27 at 17:54 -0300, Alvaro Herrera wrote:
> >> I take it that if I have a manpages/docbook.xsl in that path, it uses
> >> that instead of trying to fetch it from sourceforge.
>
> > Exactly.
>
> > If you don't want to depend on net access, you can do something like
> > make whatever XSLTPROCFLAGS=--nonet
>
> Is there a way to say "fetch all the documents I need for this build
> into my local cache"? Then you could do that when your network was up,
> and not have to worry about failures in future. The set of URIs we
> reference doesn't change much.
No, not without some external program to do the caching.
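
For illustration, a sketch of what that external caching step could look like: copy the DocBook XSL stylesheets somewhere local while the network is up, then register that copy in the XML catalog so xsltproc resolves the sourceforge URI without going online. The stylesheet path below and the catalog location are assumptions (most distributions' docbook-xsl packages register an equivalent entry for you when installed):

    # Map the canonical DocBook XSL URI to a local copy of the stylesheets.
    # /usr/local/share/xsl/docbook-xsl is only an example path; /etc/xml/catalog
    # is the usual system catalog on Linux.
    xmlcatalog --noout --add rewriteURI \
        "http://docbook.sourceforge.net/release/xsl/current/" \
        "file:///usr/local/share/xsl/docbook-xsl/" \
        /etc/xml/catalog

    # With the catalog entry in place, the build no longer needs the network:
    make man XSLTPROCFLAGS=--nonet    # or whatever doc target you want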