From: | Craig Ringer <craig(at)postnewspapers(dot)com(dot)au> |
---|---|
To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | Valentin Hocher <valentin(dot)hocher(at)kabelbw(dot)de>, pgsql-general(at)postgresql(dot)org |
Subject: | Re: initdb fails on Centos 5.4 x64 |
Date: | 2010-05-11 07:04:31 |
Message-ID: | 4BE9017F.7040408@postnewspapers.com.au |
Lists: | pgsql-general |
On 11/05/10 08:04, Tom Lane wrote:
> valentin(dot)hocher(at)kabelbw(dot)de (Valentin Hocher) writes:
>> [ cPanel's "Shell Fork Bomb Protection" actually does this: ]
>> ulimit -n 100 -u 20 -m 200000 -d 200000 -s 8192 -c 200000 -v 200000 2>/dev/null
>
> Just to annotate that: some experimentation I did confirms that on
> RHEL5 x86_64, PG 8.4.3 falls over with the mentioned error when run
> under ulimit -v in the vicinity of 200000 (ie 200MB). It's kind of
> surprising that initdb eats that much virtual memory space, although
> certainly loading all the encoding translation libraries simultaneously
> is a bit of a stress test. But the actual memory footprint is surely a
> lot less than that. Apparently there is a good deal of inefficiency in
> address-space consumption when loading a bunch of .so's on this
> platform. I'd be interested to know if people can reproduce similar
> problems on other Linux variants.
I wouldn't be surprised if prelinking turned out to be the culprit
there. If it's loading shared libraries at pre-selected address ranges
that're globally unique across the system, it might chew up some
significant address space.
... though come to think of it, each should be an individual mapping
(right?) so it shouldn't really matter where in virtual memory they're
located, just their size. Scratch that idea?
--
Craig Ringer
Tech-related writing: http://soapyfrogs.blogspot.com/