From: Celso Pinto <cpinto(at)yimports(dot)com>
To: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: array column and b-tree index allowing only 8191 bytes
Date: 2008-06-12 00:12:13
Message-ID: 1213229533.22568.8.camel@starfish
Lists: pgsql-general
Hi Alvaro,
thanks for the hint. I've since experimented with GIN and GiST and ran a
small test with a custom pgbench script.
As I mentioned in my previous message, the int[] column on a row can hold
a maximum of 5000 values. Based on that I judged GIN to be the best
option, but inserting is really slow. The test was performed on a small
EC2 instance; I raised maintenance_work_mem to 512MB, but inserting 50K
rows still takes more than an hour.
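For reference, the GIN setup was roughly the following (a simplified
sketch; the table and column names here are placeholders, not the actual
schema):

    -- intarray contrib module installed, providing the gin__int_ops opclass
    CREATE TABLE items (id serial PRIMARY KEY, vals int[]);
    CREATE INDEX items_vals_gin ON items USING gin (vals gin__int_ops);

    -- raised before the bulk load / index build
    SET maintenance_work_mem = '512MB';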
I also tested GiST: inserts run quickly, but running pgbench with 100
clients, each making 10 selects on a random value contained in the int[],
drives the machine load up to values such as 88, which is definitely a
no-go.
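The custom pgbench script was along these lines (again a rough sketch;
the 1-100000 value range and the names are placeholders):

    \setrandom val 1 100000
    SELECT id FROM items WHERE vals @> ARRAY[:val];

run with something like "pgbench -n -c 100 -t 10 -f select_array.sql mydb",
i.e. 100 clients doing 10 selects each.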
What, if any, would be the recommended options to improve this
scenario? Not using intarray? :-)
Cheers,
Celso
On Sat, 2008-06-07 at 12:38 -0400, Alvaro Herrera wrote:
> Celso Pinto wrote:
>
> > So my questions are: is this at all possible? If so, is it possible to
> > increase that maximum size?
>
> Indexing the arrays themselves is probably pretty useless. Try indexing
> the elements, which you can do with the intarray contrib module.