From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Petr Jelinek <petr(at)2ndquadrant(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: PATCH: two slab-like memory allocators
Date: 2016-09-27 02:10:17
Message-ID: 03110c11-ea72-7b4d-027d-6f19eb300b2a@2ndquadrant.com
Lists: pgsql-hackers
Hi,
Attached is v2 of the patch, updated based on the review. That means:
- Better comment explaining how free chunks are tracked in Slab context.
- Removed the unused SlabPointerIsValid macro.
- Modified the comment before SlabChunkData, explaining how it relates
to StandardChunkHeader.
- Replaced the two Assert() calls with elog().
- Implemented SlabCheck(). I've ended up with quite a few checks there:
  validating pointers between the context, blocks and chunks, checks tied
  to MEMORY_CONTEXT_CHECKING, etc. And of course, cross-checking the
  number of free chunks (bitmap and freelist vs. chunk header).
- I've also modified SlabContextCreate() to compute chunksPerBlock a
bit more efficiently (use a simple formula instead of the loop, which
might be a bit too expensive for large blocks / small chunks).
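To illustrate the formula-based computation, a rough sketch follows. The header sizes are placeholders, not the patch's actual struct sizes; the point is just that a single integer division replaces the loop:

```c
#include <stddef.h>

/* Placeholder overheads standing in for the patch's header structs. */
#define BLOCK_HDR_SIZE 32   /* assumed sizeof(SlabBlockData) */
#define CHUNK_HDR_SIZE 16   /* assumed per-chunk header overhead */

/* Direct formula: usable space in the block divided by the full
 * (header + payload) size of one chunk. No per-chunk loop needed. */
static int
chunks_per_block(size_t blockSize, size_t chunkSize)
{
    return (int) ((blockSize - BLOCK_HDR_SIZE)
                  / (chunkSize + CHUNK_HDR_SIZE));
}
```

With these assumed overheads, an 8kB block of 64-byte chunks would hold (8192 - 32) / 80 = 102 chunks.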
I haven't done any changes to GenSlab, but I do have a few notes:
Firstly, I've realized there's an issue when chunkSize gets too large -
once it exceeds blockSize, SlabContextCreate() fails, as it's
impossible to place even a single chunk into the block. In reorderbuffer,
this may happen when the tuples (allocated in tup_context) get larger
than 8MB, as the context uses SLAB_LARGE_BLOCK_SIZE (which is 8MB).
For Slab the elog(ERROR) is fine as both parameters are controlled by
the developer directly, but GenSlab computes the chunkSize on the fly,
so we must not let it fail like that - that'd result in unpredictable
failures, which is not very nice.
I see two ways to fix this. We may either increase the block size
automatically - e.g. instead of specifying chunkSize and
blockSize when creating the Slab, specify chunkSize and chunksPerBlock
(and then choose the smallest 2^k block large enough). For example with
chunkSize=96 and chunksPerBlock=1000, we'd get 128kB blocks, as that's
the closest 2^k block larger than 96000 bytes.
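The block-size selection from the example could be sketched like this (the 1kB minimum is my assumption, and chunk/block header overheads are ignored for simplicity):

```c
#include <stddef.h>

/* Choose the smallest power-of-two block size large enough to hold
 * the requested number of chunks (header overheads ignored here). */
static size_t
choose_block_size(size_t chunkSize, int chunksPerBlock)
{
    size_t needed = chunkSize * (size_t) chunksPerBlock;
    size_t blockSize = 1024;    /* assumed minimum block size */

    while (blockSize < needed)
        blockSize *= 2;

    return blockSize;
}
```

For chunkSize=96 and chunksPerBlock=1000 this needs 96000 bytes, so it returns 131072 (128kB), matching the example above.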
But maybe there's a simpler solution - we may simply cap the chunkSize
(in GenSlab) to ALLOC_CHUNK_LIMIT. That's fine, because AllocSet handles
such oversized requests in a special way - instead of being kept in a
freelist, those chunks are freed back immediately when pfree'd.
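The cap itself would be trivial - something along these lines, where the 8192-byte value of ALLOC_CHUNK_LIMIT is my reading of aset.c (2^(ALLOC_MINBITS + ALLOCSET_NUM_FREELISTS - 1)), so treat it as an assumption:

```c
#include <stddef.h>

#define ALLOC_CHUNK_LIMIT 8192  /* assumed value from aset.c */

/* Cap the chunk size GenSlab computes, so SlabContextCreate() can
 * never be asked for chunks that don't fit in a block; larger
 * allocations then fall through to the AllocSet part of GenSlab. */
static size_t
cap_chunk_size(size_t chunkSize)
{
    return (chunkSize > ALLOC_CHUNK_LIMIT) ? ALLOC_CHUNK_LIMIT : chunkSize;
}
```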
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachments:
0001-simple-slab-allocator-fixed-size-allocations.patch (binary/octet-stream, 39.7 KB)
0002-generational-slab-auto-tuning-allocator.patch (binary/octet-stream, 16.2 KB)