From: | "Merlin Moncure" <merlin(dot)moncure(at)rcsonline(dot)com> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: shared memory release following failed lock acquirement. |
Date: | 2004-09-29 12:56:12 |
Message-ID: | 6EE64EF3AB31D5448D0007DD34EEB3412A74D8@Herge.rcsinc.local |
Lists: pgsql-hackers
Tgl wrote:
> > As I see it, this means the user-locks (and perhaps all
> > locks...?) eat around ~ 6k bytes memory each.
>
> They're allocated in groups of 32, which would work out to close to 6k;
> maybe you were measuring the incremental cost of allocating the first one?
I got my 6k figure by dividing 10000 into 64M, 10000 being the value
that crashed the server. That's reasonable because doubling shared
buffers slightly more than doubled the crash value.
I was wondering how ~ 10k locks ran me out of shared memory when each
lock takes ~ 260b (half that, as you say) and I am running 8k buffers =
64M.
260 * 100 backends * 64 maxlocks = 1.7 M. Sure, the hash table and
other stuff adds some overhead...but that is nowhere near enough to
run me out.
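Just to spell out the two back-of-envelope numbers side by side (a rough
sketch only; the 260-byte per-lock figure, 100 backends, and 64
max_locks_per_transaction are the values discussed above, not anything
measured from the server):

    #include <stdio.h>

    int main(void)
    {
        /* observed: ~10000 user locks exhausted ~64 MB of shared memory */
        double shared_mem     = 64.0 * 1024 * 1024;  /* 8k buffers * 8 KB each */
        double locks_at_crash = 10000;
        printf("observed cost:  ~%.0f bytes per lock\n",
               shared_mem / locks_at_crash);

        /* expected: per-lock size * backends * max locks per transaction */
        double bytes_per_lock = 260;   /* rough per-lock overhead */
        double backends       = 100;   /* max_connections */
        double max_locks      = 64;    /* max_locks_per_transaction */
        printf("expected table: ~%.1f MB\n",
               bytes_per_lock * backends * max_locks / (1024.0 * 1024));
        return 0;
    }

That prints roughly 6700 bytes per lock observed versus ~1.6 MB expected
for the whole lock table, which is the gap I can't account for.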
Am I just totally misunderstanding how to estimate lock memory
consumption?
Merlin