From: Teodor Sigaev <teodor(at)sigaev(dot)ru>
To: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
Cc: Nick Raj <nickrajjain(at)gmail(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Cube Index Size
Date: 2011-06-01 12:38:08
Message-ID: 4DE632B0.5030603@sigaev.ru
Lists: pgsql-hackers
> Ok, I can reproduce the issue with that. The index is only 4MB in size
> when I populate it with random data (vs. 15 MB with your data). The
> command I used is:
>
> INSERT INTO cubtest SELECT cube(random(), random()) FROM
> generate_series(1,20000);
>
> My guess is that the picksplit algorithm performs poorly with that data.
> Unfortunately, I have no idea how to improve that.
One idea is to sort the Datums to be split by their insertion cost; this is
implemented in the intarray and tsearch GiST indexes. I'm not sure it will
help here, but our research on Guttman's picksplit algorithm showed
significant improvements from it.
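
To illustrate, here is a simplified, standalone sketch of the idea, using
1-D intervals as stand-ins for cubes. The names and the exact cost ordering
are illustrative only, not the actual contrib/intarray code:

#include <math.h>
#include <stdio.h>
#include <stdlib.h>

typedef struct { double lo, hi; } Interval;

typedef struct
{
    int     pos;    /* index into the entry array */
    double  cost;   /* penalty(left seed) - penalty(right seed) */
} SplitCost;

/* Enlargement of the union if 'key' is added to 'grp' (Guttman's penalty). */
static double
penalty(Interval grp, Interval key)
{
    double  lo = key.lo < grp.lo ? key.lo : grp.lo;
    double  hi = key.hi > grp.hi ? key.hi : grp.hi;

    return (hi - lo) - (grp.hi - grp.lo);
}

/* Sort descending by |cost|, so the most "decided" entries come first. */
static int
cmp_cost(const void *a, const void *b)
{
    double  ca = fabs(((const SplitCost *) a)->cost);
    double  cb = fabs(((const SplitCost *) b)->cost);

    return (ca < cb) - (ca > cb);
}

int
main(void)
{
    Interval entries[] = {
        {0.0, 0.1}, {0.9, 1.0}, {0.05, 0.15},
        {0.85, 0.95}, {0.4, 0.6}, {0.02, 0.12}
    };
    int         n = sizeof(entries) / sizeof(entries[0]);

    /* Assume the two seeds were already picked (here: the extremes). */
    Interval    left = entries[0];
    Interval    right = entries[1];
    SplitCost   sc[6];
    int         i;

    for (i = 2; i < n; i++)
    {
        sc[i - 2].pos = i;
        sc[i - 2].cost = penalty(left, entries[i]) - penalty(right, entries[i]);
    }
    qsort(sc, n - 2, sizeof(SplitCost), cmp_cost);

    /* Assign in cost order: each entry goes to the cheaper group. */
    for (i = 0; i < n - 2; i++)
    {
        Interval   *e = &entries[sc[i].pos];
        Interval   *grp = (penalty(left, *e) <= penalty(right, *e)) ? &left : &right;

        if (e->lo < grp->lo) grp->lo = e->lo;
        if (e->hi > grp->hi) grp->hi = e->hi;
        printf("entry %d -> %s\n", sc[i].pos, grp == &left ? "left" : "right");
    }
    return 0;
}

Entries that strongly prefer one seed claim their group first, so the two
unions grow in a more balanced way than with a plain sequential scan over
the entries.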
--
Teodor Sigaev E-mail: teodor(at)sigaev(dot)ru
WWW: http://www.sigaev.ru/