Re: Performance problem in aset.c

From: JanWieck(at)t-online(dot)de (Jan Wieck)
To: Alfred Perlstein <bright(at)wintelcom(dot)net>
Cc: PostgreSQL HACKERS <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Performance problem in aset.c
Date: 2000-07-12 11:27:50
Message-ID: 200007121127.NAA23451@hot.jw.home
Lists: pgsql-hackers

Alfred Perlstein wrote:
> * Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> [000711 22:23] wrote:
> > Philip Warner <pjw(at)rhyme(dot)com(dot)au> writes:
> > > Can you maintain one free list for each power of 2 (which it might already
> > > be doing by the look of it), and always allocate the max size for the list.
> > > Then when you want a 10k chunk, you get a 16k chunk, but you know from the
> > > request size which list to go to, and anything on the list will satisfy the
> > > requirement.
> >
> > That is how it works for small chunks (< 1K with the current
> > parameters). I don't think we want to do it that way for really
> > huge chunks though.
> >
> > Maybe the right answer is to eliminate the gap between small chunks
> > (which basically work as Philip sketches above) and huge chunks (for
> > which we fall back on malloc). The problem is with the stuff in
> > between, for which we have a kind of half-baked approach...
>
> Er, are you guys seriously layering your own general purpose allocator
> over the OS/c library allocator?
>
> Don't do that!
>
> The only time you may want to do this is if you're doing a special purpose
> allocator like a zone or slab allocator, otherwise it's a pessimization.
> The algorithms you're discussing to fix these leaks have been implemented
> in almost any modern allocator that I know of.
>
> Sorry if I'm totally off base, but "for which we fall back on malloc"
> makes me wonder what's going on here.
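
For reference, the power-of-2 free-list scheme Philip sketched (and
that, as Tom says, is already used for small chunks) works roughly
like this. This is an illustrative sketch only, not the actual aset.c
code; the names and parameters are my own, and error handling is
omitted:

    #include <stdlib.h>

    #define MIN_CHUNK_LOG 3             /* smallest class: 8 bytes  */
    #define NUM_FREELISTS 8             /* classes: 8 .. 1024 bytes */

    typedef struct FreeChunk
    {
        struct FreeChunk *next;         /* links chunks of one class */
    } FreeChunk;

    static FreeChunk *freelist[NUM_FREELISTS];

    /* Map a request size to its power-of-2 size class. */
    static int
    size_class(size_t size)
    {
        int     idx = 0;
        size_t  cap = (size_t) 1 << MIN_CHUNK_LOG;

        while (cap < size)
        {
            cap <<= 1;
            idx++;
        }
        return idx;
    }

    /*
     * Allocate a chunk of the full class size, so any chunk on a
     * class's free list satisfies any request mapping to that class.
     * Requests above 1K are assumed to be routed elsewhere (plain
     * malloc()), as Tom describes.
     */
    static void *
    class_alloc(size_t size)
    {
        int         idx = size_class(size);
        FreeChunk  *chunk = freelist[idx];

        if (chunk != NULL)
        {
            freelist[idx] = chunk->next;
            return chunk;
        }
        return malloc((size_t) 1 << (MIN_CHUNK_LOG + idx));
    }

    /* "Freeing" just pushes the chunk back onto its class's list. */
    static void
    class_free(void *ptr, size_t size)
    {
        FreeChunk  *chunk = (FreeChunk *) ptr;
        int         idx = size_class(size);

        chunk->next = freelist[idx];
        freelist[idx] = chunk;
    }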

To clarify this:

I developed this in aset.c because we palloc() a lot
(really a lot) of very small chunks. Every allocation must
be remembered in some linked list to know what to free at
memory context reset or destruction. In the old version,
every amount, however small, was allocated using malloc()
and remembered separately in one huge list for the context.
Traversing this list was awfully slow when a context said
bye, and I saw no way to speed up this traversal.
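
In sketch form, the old scheme looked roughly like this (simplified
and illustrative; these are not the real old structure or function
names):

    #include <stdlib.h>

    /*
     * Old scheme (simplified): every palloc()'d pointer is kept in
     * one per-context list, so resetting the context must visit
     * every single allocation ever made in it.
     */
    typedef struct AllocEntry
    {
        struct AllocEntry *next;
        void              *ptr;         /* the malloc()'d user chunk */
    } AllocEntry;

    typedef struct OldContext
    {
        AllocEntry *allocations;        /* one huge list per context */
    } OldContext;

    static void *
    old_palloc(OldContext *ctx, size_t size)
    {
        AllocEntry *entry = malloc(sizeof(AllocEntry));

        entry->ptr = malloc(size);
        entry->next = ctx->allocations;
        ctx->allocations = entry;
        return entry->ptr;
    }

    /* This O(allocations) traversal is what made destruction slow. */
    static void
    old_context_reset(OldContext *ctx)
    {
        AllocEntry *entry = ctx->allocations;

        while (entry != NULL)
        {
            AllocEntry *next = entry->next;

            free(entry->ptr);
            free(entry);
            entry = next;
        }
        ctx->allocations = NULL;
    }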

With the current concept, only big chunks are remembered
individually. Small allocations aren't tracked one by one
at all, so memory context destruction can simply throw away
all the blocks allocated for it.
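
Again in sketch form (illustrative names; alignment, size rounding
and error handling omitted; not the actual aset.c code):

    #include <stdlib.h>

    #define BLOCK_SIZE 8192             /* illustrative block size */

    /*
     * Current scheme (simplified): small chunks are carved out of
     * larger blocks; only the blocks are kept on the context's
     * list, so destruction is O(blocks), not O(allocations).
     */
    typedef struct AllocBlock
    {
        struct AllocBlock *next;
        char              *freeptr;     /* next free byte in block */
        char              *endptr;      /* one past end of block   */
    } AllocBlock;

    typedef struct NewContext
    {
        AllocBlock *blocks;
    } NewContext;

    static void *
    new_palloc(NewContext *ctx, size_t size)
    {
        AllocBlock *block = ctx->blocks;
        void       *chunk;

        /*
         * Big requests would take the separately tracked big-chunk
         * path mentioned above; that path is omitted here.
         */
        if (block == NULL ||
            (size_t) (block->endptr - block->freeptr) < size)
        {
            block = malloc(BLOCK_SIZE);
            block->freeptr = (char *) block + sizeof(AllocBlock);
            block->endptr = (char *) block + BLOCK_SIZE;
            block->next = ctx->blocks;
            ctx->blocks = block;
        }

        chunk = block->freeptr;
        block->freeptr += size;
        return chunk;
    }

    /*
     * Destruction throws away whole blocks without ever visiting
     * the individual chunks -- this is where the speedup comes from.
     */
    static void
    new_context_reset(NewContext *ctx)
    {
        AllocBlock *block = ctx->blocks;

        while (block != NULL)
        {
            AllocBlock *next = block->next;

            free(block);
            block = next;
        }
        ctx->blocks = NULL;
    }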

At the time I implemented it, it gained a speedup of ~10%
on the regression test. It's an approach that gains speed
by wasting memory.

Jan

--

#======================================================================#
# It's easier to get forgiveness for being wrong than for being right. #
# Let's break this rule - forgive me.                                  #
#================================================== JanWieck(at)Yahoo(dot)com #
