From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Noah Misch <noah(at)leadboat(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: MemoryContextAllocHuge(): selectively bypassing MaxAllocSize
Date: 2013-07-06 16:54:24
Message-ID: 20130706165424.GD3286@tamriel.snowman.net
Lists: pgsql-hackers
Jeff,
* Jeff Janes (jeff(dot)janes(at)gmail(dot)com) wrote:
> I was going to add another item to make nodeHash.c use the new huge
> allocator, but after looking at it just now it was not clear to me that it
> even has such a limitation. nbatch is limited by MaxAllocSize, but
> nbuckets doesn't seem to be.
nodeHash.c:ExecHashTableCreate() allocates ->buckets using:

    palloc(nbuckets * sizeof(HashJoinTuple))

(where HashJoinTuple is actually just a pointer), and ExecHashTableReset()
reallocates it the same way. That limits the current implementation to
only about 134M buckets, no?
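For reference, the arithmetic behind that figure, assuming a 64-bit build
where HashJoinTuple is an 8-byte pointer:

    MaxAllocSize          = 0x3fffffff bytes   (just under 1GB)
    sizeof(HashJoinTuple) = 8 bytes
    max nbuckets          = 0x3fffffff / 8     (~134 million)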
Now, what I was really suggesting wasn't so much changing those specific
calls; my point was more that there's a ton of stuff in the HashJoin code
that uses 32-bit integers for values which, these days, may be too small
(nbuckets being one example, imv). There's a lot of code there, though,
and you'd have to consider carefully which things actually make sense to
widen to int64.
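Just to make the shape of that change concrete for this one call, something
like the following is what I have in mind, written against the allocator
proposed in this thread (the widened type and the target memory context are
guesses on my part, not a worked-out patch):

    /* illustrative only: nbuckets widened, palloc() swapped for the
     * proposed huge allocator */
    int64       nbuckets;

    hashtable->buckets = (HashJoinTuple *)
        MemoryContextAllocHuge(hashtable->batchCxt,
                               nbuckets * sizeof(HashJoinTuple));

and the matching allocation in ExecHashTableReset() would need the same
treatment, along with an audit of everywhere nbuckets feeds into 32-bit
arithmetic.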
Thanks,
Stephen