HASH: Out of overflow pages. Out of luck.

From: John Frank <jrf(at)segovia(dot)mit(dot)edu>
To: <pgsql-general(at)postgresql(dot)org>
Subject: HASH: Out of overflow pages. Out of luck.
Date: 2001-01-24 02:54:41
Message-ID: Pine.LNX.4.30.0101232117340.5180-100000@segovia.mit.edu
Lists: pgsql-general


Does anyone have experience hacking the HASH index code to allow more
overflow pages?

I get the following when indexing a table with about 300M entries:

db=# \d table1
Table "table1"
Attribute | Type | Modifier
-----------+--------------+----------
field1 | varchar(256) |
field2 | integer |
field3 | float8 |

db=# create index table1_field1 on table1 using hash(field1);
ERROR: HASH: Out of overflow pages. Out of luck.

This also happens for field2.

I looked in the postgresql-7.0.3 source, in
src/include/access/hash.h:

* The reason that the size is restricted to NCACHED (32) is because
* the bitmaps are 16 bits: upper 5 represent the splitpoint, lower 11
* indicate the page number within the splitpoint. Since there are
* only 5 bits to store the splitpoint, there can only be 32 splitpoints.
* Both spares[] and bitmaps[] use splitpoints as their indices, so there
* can only be 32 of them.
*/
#define NCACHED 32
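
For illustration only (the names below are hypothetical, not the actual
macros in hash.h): a minimal C sketch of the 16-bit overflow-page address
layout the comment describes, which is where the 32-splitpoint ceiling
comes from. If that reading is right, simply raising NCACHED would not be
enough; the 5/11 split of the address bits would presumably have to change
as well, altering the on-disk format.

#include <stdint.h>
#include <stdio.h>

/*
 * Hypothetical sketch of the 16-bit overflow address described in the
 * hash.h comment above: upper 5 bits select a splitpoint, lower 11 bits
 * a page within that splitpoint.  Names are illustrative only.
 */
#define SPLIT_BITS 5
#define PAGE_BITS  11
#define PAGE_MASK  ((1u << PAGE_BITS) - 1)

typedef uint16_t ovfl_addr;

static ovfl_addr
make_ovfl_addr(unsigned splitpoint, unsigned page)
{
    return (ovfl_addr) ((splitpoint << PAGE_BITS) | (page & PAGE_MASK));
}

int
main(void)
{
    printf("splitpoints addressable:       %u\n", 1u << SPLIT_BITS); /* 2^5  = 32   */
    printf("pages per splitpoint:          %u\n", 1u << PAGE_BITS);  /* 2^11 = 2048 */
    printf("example address (sp=3, pg=42): 0x%04x\n",
           (unsigned) make_ovfl_addr(3, 42));
    return 0;
}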

Is there a way around this?! If not, what a horrific limitation.

Thanks! John
