From: Listmail <lists(at)peufeu(dot)com>
To: tom <tom(at)tacocat(dot)net>, Postgresql <pgsql-general(at)postgresql(dot)org>
Subject: Re: SQL WHERE: many sql or large IN()
Date: 2007-04-06 14:08:02
Message-ID: op.tqc77otozcizji@apollo13
Lists: pgsql-general
> I have a choice of running:
>
> SELECT bar FROM tokens WHERE foo IN ('apple','orange','biscuit'....) for
> up to ~300 words
>
> OR
>
> SELECT bar FROM tokens WHERE foo = 'apple' up to ~300 times as
> prepared/cached SQL statements.
With newer PG versions you can also join against a VALUES list, which
saves you a hash if you know your keys are unique.
The example below uses integers, but you can use any type; it behaves
just like a normal join against a table.
Putting 300 values in a VALUES list is certainly a LOT faster than
doing 300 individual SELECTs!
test=> EXPLAIN ANALYZE SELECT t.* FROM test t, ( VALUES (1),(2) ) AS v WHERE t.id=v.column1;
                                                        QUERY PLAN
------------------------------------------------------------------------------------------------------------------------
 Nested Loop  (cost=0.00..16.76 rows=2 width=8) (actual time=0.029..0.039 rows=2 loops=1)
   ->  Values Scan on "*VALUES*"  (cost=0.00..0.03 rows=2 width=4) (actual time=0.004..0.005 rows=2 loops=1)
   ->  Index Scan using test_pkey on test t  (cost=0.00..8.36 rows=1 width=8) (actual time=0.013..0.015 rows=1 loops=2)
         Index Cond: (t.id = "*VALUES*".column1)
 Total runtime: 0.085 ms
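Applied to the table from the question, the rewrite would look something
like this (a minimal sketch; the v(word) column alias is my own naming,
and I'm assuming foo is indexed so each value gets an index probe as in
the plan above):

    -- one query, one round trip, instead of ~300 SELECTs
    SELECT t.bar
    FROM tokens t
    JOIN ( VALUES ('apple'), ('orange'), ('biscuit') ) AS v(word)
      ON t.foo = v.word;

Each row of the VALUES list drives one index scan on tokens, just like
the Values Scan / Index Scan nested loop shown in the plan.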