From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Gerhard Heift <ml-postgresql-20081012-3518(at)gheift(dot)de>
Cc: PostgreSQL general <pgsql-general(at)postgresql(dot)org>, Teodor Sigaev <teodor(at)sigaev(dot)ru>, Oleg Bartunov <oleg(at)sai(dot)msu(dot)su>
Subject: Re: size of data stored in gist index
Date: 2009-07-31 17:27:05
Message-ID: 23247.1249061225@sss.pgh.pa.us
Lists: pgsql-general
Gerhard Heift <ml-postgresql-20081012-3518(at)gheift(dot)de> writes:
> I am trying to index histograms in my table. For this I use the cube
> contrib module, in which I removed the dimension check. If the cube has
> more than 255 dimensions (for example, 256 dimensions correspond to
> 4 + 4 + 256 * 2 * 8 = 4104 bytes), the data can no longer be stored in
> the gist index. If I try it, I get the following error:
> PANIC: failed to add item to index page in "histogram_idx"
> Do I have to compress the data in some way, or is it possible to store
> index data of this size?
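
That arithmetic follows cube's on-disk layout; paraphrasing the struct
from contrib/cube/cubedata.h (field names from memory):

    typedef struct NDBOX
    {
        int32        vl_len_;    /* varlena header: 4 bytes */
        unsigned int dim;        /* number of dimensions: 4 bytes */
        double       x[1];       /* 2 * dim float8 coordinates: 16 * dim bytes */
    } NDBOX;

At 256 dimensions that is 4 + 4 + 256 * 2 * 8 = 4104 bytes per key, so two
such index tuples can no longer share one 8K page.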
Well, if you're going to turn cube into an unlimited-size datatype,
it would behoove you to make its compress and decompress routines
do something.
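
For illustration only, a lossy compress support function could shrink leaf
keys by narrowing coordinates to float4. LossyNDBOX and
g_cube_compress_lossy below are a hypothetical sketch, not cube's actual
code (its real compress function is a no-op), and the matching decompress,
union, and penalty functions would have to understand the compressed form
as well:

    #include "postgres.h"
    #include "fmgr.h"
    #include "access/gist.h"
    #include "cubedata.h"               /* NDBOX, from contrib/cube */

    /* Hypothetical compressed on-page form: float4 instead of float8. */
    typedef struct LossyNDBOX
    {
        int32        vl_len_;           /* varlena header */
        unsigned int dim;               /* number of dimensions */
        float4       x[1];              /* 2 * dim coordinates */
    } LossyNDBOX;

    PG_FUNCTION_INFO_V1(g_cube_compress_lossy);

    Datum
    g_cube_compress_lossy(PG_FUNCTION_ARGS)
    {
        GISTENTRY  *entry = (GISTENTRY *) PG_GETARG_POINTER(0);

        if (entry->leafkey)
        {
            NDBOX      *in = (NDBOX *) PG_DETOAST_DATUM(entry->key);
            int         ncoords = in->dim * 2;
            Size        size = offsetof(LossyNDBOX, x) +
                               ncoords * sizeof(float4);
            LossyNDBOX *out = (LossyNDBOX *) palloc0(size);
            GISTENTRY  *retval = (GISTENTRY *) palloc(sizeof(GISTENTRY));
            int         i;

            SET_VARSIZE(out, size);
            out->dim = in->dim;
            for (i = 0; i < ncoords; i++)
                out->x[i] = (float4) in->x[i];  /* lossy narrowing */

            gistentryinit(*retval, PointerGetDatum(out),
                          entry->rel, entry->page, entry->offset, false);
            PG_RETURN_POINTER(retval);
        }

        /* non-leaf (union) keys pass through unchanged in this sketch */
        PG_RETURN_POINTER(entry);
    }

That only halves the key size, of course; for very high-dimensional data
the real answer is a genuinely reduced representation in the index with
recheck on the heap tuple.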
Still, it seems like gist ought to defend itself a bit better against
ill-considered datatypes. Maybe put a check in gistFormTuple to verify
that the tuple isn't larger than can fit on one page? Or is there a
better place to check it?
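
A sketch of such a defense, with the placement, overhead accounting, and
error wording all illustrative rather than an actual fix:

    #include "postgres.h"
    #include "access/gist_private.h"    /* GiSTPageSize */
    #include "access/itup.h"            /* IndexTuple, IndexTupleSize */
    #include "storage/itemid.h"         /* ItemIdData */

    /*
     * Reject an index tuple that cannot fit on one GiST page; could be
     * called from gistFormTuple() right after index_form_tuple().
     */
    static void
    gistCheckTupleSize(IndexTuple itup)
    {
        Size        maxsize = GiSTPageSize - sizeof(ItemIdData);

        if (IndexTupleSize(itup) > maxsize)
            ereport(ERROR,
                    (errcode(ERRCODE_PROGRAM_LIMIT_EXCEEDED),
                     errmsg("index row size %lu exceeds maximum %lu for GiST index",
                            (unsigned long) IndexTupleSize(itup),
                            (unsigned long) maxsize)));
    }

An ereport(ERROR) here would turn the PANIC into an ordinary error at
insert time, which is the main point of checking before the tuple ever
reaches a page.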
regards, tom lane