From: Andres Freund <andres(at)2ndquadrant(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Abhijit Menon-Sen <ams(at)2ndQuadrant(dot)com>, Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: [PATCH] Use MAP_HUGETLB where supported (v3)
Date: 2013-11-21 21:14:35
Message-ID: 20131121211435.GA14939@alap2.anarazel.de
Lists: pgsql-hackers
On 2013-11-21 18:09:38 -0300, Alvaro Herrera wrote:
> Abhijit Menon-Sen wrote:
> > At 2013-11-15 15:17:32 +0200, hlinnakangas(at)vmware(dot)com wrote:
>
> > > But I'm not wedded to the idea if someone objects; a log message might
> > > also be reasonable: "LOG: huge TLB pages are not supported on this
> > > platform, but huge_tlb_pages was 'on'"
> >
> > Put that way, I have to wonder if the right thing to do is just to have
> > a "try_huge_pages=on|off" setting, and log a warning if the attempt did
> > not succeed. It would be easier to document, and I don't think there's
> > much point in making it an error if the allocation fails.
>
> What about
> huge_tlb_pages={off,try}
>
> Or maybe
> huge_tlb_pages={off,try,require}
I'd certainly want a setting that errors out if it cannot get the memory
using huge pages. If you rely on the reduction in memory usage (which can
be significant with large s_b and large max_connections), it's rather
annoying not to know whether huge pages were actually used.
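To make the distinction concrete, here is a minimal sketch of how "try"
vs. "require" semantics could behave around mmap(). This is not code from
the patch; the mode constants and the function name are made up for
illustration:

/*
 * Hypothetical sketch of "try" vs. "require" huge page allocation.
 * HUGE_PAGES_TRY / HUGE_PAGES_REQUIRE and map_shared_memory() are
 * illustrative names, not the patch's actual identifiers.
 */
#define _GNU_SOURCE
#include <stdio.h>
#include <stdlib.h>
#include <sys/mman.h>

#define HUGE_PAGES_TRY     1
#define HUGE_PAGES_REQUIRE 2

static void *
map_shared_memory(size_t size, int huge_pages_mode)
{
    void *ptr = MAP_FAILED;

#ifdef MAP_HUGETLB
    /* size would need to be rounded up to a multiple of the huge page size */
    ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
               MAP_SHARED | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
#endif

    if (ptr == MAP_FAILED)
    {
        if (huge_pages_mode == HUGE_PAGES_REQUIRE)
        {
            /* "require": refuse to start rather than silently fall back */
            fprintf(stderr, "FATAL: could not map %zu bytes using huge pages\n",
                    size);
            exit(1);
        }

        /* "try": log the fallback so the admin knows what actually happened */
        fprintf(stderr, "LOG: huge pages not available, using regular pages\n");
        ptr = mmap(NULL, size, PROT_READ | PROT_WRITE,
                   MAP_SHARED | MAP_ANONYMOUS, -1, 0);
    }

    return ptr;
}

The point of the "require" branch is exactly the visibility argued for
above: with only a "try" mode, a silent fallback leaves you guessing.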
Greetings,
Andres Freund
--
Andres Freund http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services