| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Andrew Dunstan <andrew(at)dunslane(dot)net> |
| Cc: | pgsql-hackers(at)postgresql(dot)org, Rocco Altier <RoccoA(at)Routescape(dot)com> |
| Subject: | Re: Cygwin - make check broken |
| Date: | 2005-08-07 16:02:24 |
| Message-ID: | 2159.1123430544@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
Andrew Dunstan <andrew(at)dunslane(dot)net> writes:
> ... The second part should not be
> applied - I simply include it to illustrate the hack (taken from a
> recent clue on the Cygwin mailing list) that I found necessary to get
> around brokenness on the latest release of Cygwin. The good news is
> that they do seem to be trying to find out what broke and fix it.
You mean this?
> *** src/backend/storage/file/fd.c 4 Jul 2005 04:51:48 -0000 1.118
> --- src/backend/storage/file/fd.c 7 Aug 2005 13:22:00 -0000
> ***************
> *** 327,332 ****
> --- 327,334 ----
> elog(WARNING, "dup(0) failed after %d successes: %m", used);
> break;
> }
> + if (used >= 250)
> + break;
>
> if (used >= size)
> {
Looking at that code, I wonder why we don't make the loop stop at
max_files_per_process opened files --- the useful result will be
bounded by that anyhow. Actively running the system out of FDs,
even momentarily, doesn't seem like a friendly thing to do.
This wouldn't directly solve your problem unless you reduced the
default value of max_files_per_process, but at least that would
be something reasonable to do instead of hacking the code.
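For illustration only, here is a minimal standalone sketch (not the actual fd.c code; the function and parameter names here are made up) of the idea being discussed: probe available descriptors by dup()'ing an already-open fd, but stop once a caller-supplied cap such as max_files_per_process is reached, rather than exhausting the system's FDs.

```c
#include <stdio.h>
#include <stdlib.h>
#include <unistd.h>

/*
 * Probe how many file descriptors we can open by repeatedly dup()'ing
 * stdin, stopping at max_to_probe (e.g. max_files_per_process) so we
 * never run the system completely out of descriptors.
 */
static int
probe_usable_fds(int max_to_probe)
{
	int	   *fds;
	int		used = 0;
	int		i;

	fds = malloc(max_to_probe * sizeof(int));
	if (fds == NULL)
		return 0;

	while (used < max_to_probe)
	{
		int		fd = dup(0);

		if (fd < 0)
			break;				/* out of descriptors: stop probing */
		fds[used++] = fd;
	}

	/* release the probe descriptors again */
	for (i = 0; i < used; i++)
		close(fds[i]);
	free(fds);

	return used;
}

int
main(void)
{
	printf("usable fds (capped at 1000): %d\n", probe_usable_fds(1000));
	return 0;
}
```

With the cap in place, lowering max_files_per_process would also keep the probe under whatever limit a platform like Cygwin can tolerate, which is the point Tom raises above.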
regards, tom lane