From: Greg Stark <gsstark(at)mit(dot)edu>
To: Achilleus Mantzios <achill(at)matrix(dot)gatewaynet(dot)com>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, pgsql-sql <pgsql-sql(at)postgresql(dot)org>
Subject: Re: Gist indexes on int arrays
Date: 2003-03-04 15:28:06
Message-ID: 87smu3kv7d.fsf@stark.dyndns.tv
Lists: pgsql-sql
Achilleus Mantzios <achill(at)matrix(dot)gatewaynet(dot)com> writes:
> Moreover, if the array element positions that you want to compare
> against (e.g. attr_a[1], or attr_b[n], where n is the last element) are
> known, then you could have a function "first" that returns
> the first element ...
Except that's precisely what I'm *not* doing. I'm treating the arrays as sets
and looking for records where the set contains a given value. This is a
denormalized table generated nightly from fully normalized raw data.
So to simplify it, the query might have clauses like:

    WHERE foo_id = 900
      AND '{5}'::integer[] ~ attribute_set_array
Right now I have a btree index on (foo_id).
Can I have a GiST index on (foo_id, attribute_set_array) and have it be just
as fast at narrowing the search to just foo_id = 900 but also speed up the ~
operation?
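For what it's worth, a multicolumn GiST index requires a GiST operator class for every column, and plain integer has only a btree opclass out of the box. One sketch, assuming the contrib/btree_gist and contrib/intarray modules are installed (the opclass names below are the ones those modules ship; the table and column names are taken from the query above):

    -- Requires contrib/btree_gist (GiST opclass for int4) and
    -- contrib/intarray (gist__int_ops for integer[]).
    CREATE INDEX foo_gist_idx
        ON foo
        USING gist (foo_id gist_int4_ops, attribute_set_array gist__int_ops);

Whether the planner then uses both columns to narrow the scan, rather than falling back to the btree on foo_id plus a filter, depends on its cost estimates, so it's worth checking with EXPLAIN.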
Incidentally, it seems odd that there isn't an operator like ~ optimized
specifically for searching for a single item. It seems awkward, and possibly
unnecessarily slow, to have to construct a one-element array for the search
parameter every time.
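In the meantime, a thin SQL wrapper can at least hide the array construction at call sites (the function name here is hypothetical, and a plain SQL function like this may not be inlined into an index-usable form, so it is a convenience sketch rather than a performance fix):

    -- Hypothetical helper: true if val occurs in arr, implemented by
    -- building the one-element array and using the intarray ~ operator
    -- ("left argument is contained in right argument").
    CREATE FUNCTION array_has(arr integer[], val integer) RETURNS boolean AS
        'SELECT ARRAY[$2] ~ $1'
        LANGUAGE sql IMMUTABLE;

    -- Usage: WHERE foo_id = 900 AND array_has(attribute_set_array, 5)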
--
greg