From: "Jim C(dot) Nasby" <jim(at)nasby(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Sort memory not being released
Date: 2003-06-17 21:25:00
Message-ID: 20030617212500.GO40542@flake.decibel.org
Lists: pgsql-general
On Tue, Jun 17, 2003 at 10:45:39AM -0400, Tom Lane wrote:
> Martijn van Oosterhout <kleptog(at)svana(dot)org> writes:
> > For large allocations glibc tends to mmap() which does get unmapped. There's
> a threshold of 4KB I think. Of course, thousands of allocations for a few
> > bytes will never trigger it.
>
> But essentially all our allocation traffic goes through palloc, which
> bunches small allocations together. In typical scenarios malloc will
> only see requests of 8K or more, so we should be in good shape on this
> front.
>
> (Not that this is very relevant to Jim's problem, since he's not using
> glibc...)
Maybe it would be helpful to describe why I noticed this...
I've been doing some things that require very large sorts. I generally
have very few connections though, so I thought I'd set sort_mem to about
1/3 of my memory. My thought was that it's better to suck down a ton of
memory and blow out the disk cache if it means we can avoid hitting the
disk for a sort at all.
Of course I wasn't planning on sucking down a bunch of memory and
holding on to it. :)
I've read through the sort code, and it seems that the pre-buffering done
once a sort goes to disk will probably hurt with a huge sort_mem setting,
since the data could be double or even triple buffered (in memtuples[], in
pgsql's shared buffers, and by the OS).
I think a more ideal scenario (which I've been meaning to email hackers
about) would be something like this (a rough C sketch follows the list):

- If the OS is running low on free physical memory, a sort will use less
than sort_mem, as an attempt to avoid swapping.

- sort_mem is the maximum amount of sort memory a single sort (or maybe a
single connection) can take.

- If sort_mem is over some size X, then use only some smaller amount Y for
pre-buffering. (How much does a large sort_mem help if you have to spill to
disk anyway?)

- If it's pretty clear that the sort won't fit in memory (because sort_mem
or system free memory is low), it might help if tuplesort just went to disk
right away, instead of waiting until all the memory was used up -- but
again, I'm not sure how the sort algorithm behaves once it goes to tape.
This should mean that you can set the system up to allow very large sorts
before spilling to disk... if there aren't a lot of sorts sucking down
memory, a large sort will be able to avoid overflowing to disk, which is
obviously a huge performance gain. If the system is busy/memory-bound
though, sorts will overflow to disk rather than using swap space, which I'm
sure would be a lot worse.
--
Jim C. Nasby (aka Decibel!) jim(at)nasby(dot)net
Member: Triangle Fraternity, Sports Car Club of America
Give your computer some brain candy! www.distributed.net Team #1828
Windows: "Where do you want to go today?"
Linux: "Where do you want to go tomorrow?"
FreeBSD: "Are you guys coming, or what?"