From: Heikki Linnakangas <heikki(dot)linnakangas(at)enterprisedb(dot)com>
To: Nick Raj <nickrajjain(at)gmail(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Cube Index Size
Date: 2011-06-01 11:37:41
Message-ID: 4DE62485.9080904@enterprisedb.com
Lists: pgsql-hackers
On 01.06.2011 10:48, Nick Raj wrote:
> On Tue, May 31, 2011 at 12:46 PM, Heikki Linnakangas<
> heikki(dot)linnakangas(at)enterprisedb(dot)com> wrote:
>> If not, please post a self-contained test case to create and populate the
>> table, so that others can easily try to reproduce it.
>>
>
> I have attached a .sql file containing 20000 tuples.
> Table creation: create table cubtest(c cube);
> Index creation: create index t on cubtest using gist(c);
Ok, I can reproduce the issue with that. The index is only 4MB in size
when I populate it with random data (vs. 15 MB with your data). The
command I used is:
INSERT INTO cubtest SELECT cube(random(), random()) FROM
generate_series(1,20000);
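For anyone comparing the sizes quoted above on their own system, they can be checked directly in psql with the standard size functions; a minimal sketch, assuming the index and table names from the quoted test case (`t` and `cubtest`):

```sql
-- On-disk size of the GiST index (named "t" in the creation command above)
SELECT pg_size_pretty(pg_relation_size('t'));

-- Heap size of the table itself, for comparison
SELECT pg_size_pretty(pg_relation_size('cubtest'));
```

Running these before and after repopulating the table with the random data makes the difference (roughly 4 MB vs. 15 MB here) easy to see.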
My guess is that the picksplit algorithm performs poorly with that data.
Unfortunately, I have no idea how to improve that.
--
Heikki Linnakangas
EnterpriseDB http://www.enterprisedb.com