From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, pgsql-hackers(at)postgresql(dot)org, Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Subject: Re: profiling connection overhead
Date: 2010-11-24 21:05:13
Message-ID: 20931.1290632713@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Wed, Nov 24, 2010 at 3:53 PM, Andres Freund <andres(at)anarazel(dot)de> wrote:
>>> The idea I had was to go the other way and say, hey, if these hash
>>> tables can't be expanded anyway, let's put them on the BSS instead of
>>> heap-allocating them.
>> Won't this just cause loads of additional pagefaults after fork() when those
>> pages are used the first time and then a second time when first written to (to
>> copy it)?
> Aren't we incurring those page faults anyway, for whatever memory
> palloc is handing out? The heap is no different from bss; we just
> move the pointer with sbrk().
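(For concreteness, a minimal C sketch of the two layouts under
discussion --- the table size and names are made up for illustration,
not taken from the actual code. Either way the kernel hands out zero
pages that are only faulted in on first touch:)

    /* Illustrative only: a fixed-size "hash table" in BSS versus one
     * heap-allocated at startup.  Both start as untouched zero pages;
     * no physical memory is allocated until first use. */
    #include <stdlib.h>
    #include <string.h>

    #define NELEM (1024 * 1024)

    static long table_bss[NELEM];   /* BSS: zero pages, demand-faulted */

    int main(void)
    {
        long *table_heap = malloc(NELEM * sizeof(long));

        if (table_heap == NULL)
            return 1;

        /* A malloc() this large is typically served by mmap()'d zero
         * pages, so this memset() is what actually faults them in ---
         * the same first-touch cost the BSS array pays below. */
        memset(table_heap, 0, NELEM * sizeof(long));

        table_bss[0] = 1;           /* first touch: page fault here */

        free(table_heap);
        return 0;
    }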
I think you're missing the real point, which is that the cost you're
measuring here probably isn't so much memset() as faulting in large
chunks of address space. Avoiding the explicit memset() likely will
save little in real runtime --- it'll just make sure the initial-touch
costs are more distributed and harder to measure. But in any case I
think this idea is a nonstarter because it gets in the way of making
those hashtables expansible, which we *do* need to do eventually.
(You might be able to confirm or disprove this theory if you ask
oprofile to count memory access stalls instead of CPU clock cycles...)
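(A quick way to see the first-touch effect directly, as a rough
sketch under arbitrary assumptions about sizes: time a first memset()
over a freshly malloc'd block against a second pass over the same,
now-resident pages. The difference is the fault-in cost, not the
zeroing:)

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <time.h>

    #define NBYTES (256 * 1024 * 1024)  /* 256 MB: big enough to see faults */

    static double elapsed_ms(struct timespec a, struct timespec b)
    {
        return (b.tv_sec - a.tv_sec) * 1000.0 +
               (b.tv_nsec - a.tv_nsec) / 1e6;
    }

    int main(void)
    {
        char *p = malloc(NBYTES);
        struct timespec t0, t1, t2;

        if (p == NULL)
            return 1;

        clock_gettime(CLOCK_MONOTONIC, &t0);
        memset(p, 0, NBYTES);   /* faults every page in, then zeroes it */
        clock_gettime(CLOCK_MONOTONIC, &t1);
        memset(p, 0, NBYTES);   /* pages already resident: zeroing only */
        clock_gettime(CLOCK_MONOTONIC, &t2);

        printf("first memset  (faults + zeroing): %.1f ms\n",
               elapsed_ms(t0, t1));
        printf("second memset (zeroing only):     %.1f ms\n",
               elapsed_ms(t1, t2));

        free(p);
        return 0;
    }

On a typical Linux box the first pass comes out several times slower
than the second; that gap is the page-fault overhead that would
otherwise show up as memset() time in a profile.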
regards, tom lane