From: tom <tom(at)tacocat(dot)net>
To: Postgresql <pgsql-general(at)postgresql(dot)org>
Subject: SQL WHERE: many sql or large IN()
Date: 2007-04-06 12:58:46
Message-ID: 5251CBA4-FDF4-450F-8011-6071806E24D0@tacocat.net
Lists: pgsql-general
I'm wondering what the performance differences are between two
different ways of running the same SQL query.
Given ~300 tokens/words, I can either run one SQL statement with a
large list in a "WHERE foo IN (...300 tokens...)" clause,
or I can run ~300 statements, one per token.
In the first case, the SQL is not prepared, just executed.
In the second case, the SQL is prepared and run from a cached
execution plan (I think).
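Roughly, the two approaches look like this (a minimal sketch; the
table and column names are just placeholders, not my actual schema):

    -- Approach 1: one statement, one big IN list
    SELECT *
    FROM items
    WHERE token IN ('tok1', 'tok2', /* ... ~300 tokens ... */ 'tok300');

    -- Approach 2: one prepared statement, executed once per token
    PREPARE lookup_token(text) AS
        SELECT * FROM items WHERE token = $1;
    EXECUTE lookup_token('tok1');
    EXECUTE lookup_token('tok2');
    -- ... and so on, ~300 EXECUTEs in total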
Now, it would seem that the second approach should be painfully
slow, but I'm not sure that's what I'm actually seeing.
Currently I have <5 users. As always, this might change...
Before I start coding and testing lots of things, I thought I would
ask whether anyone has experience comparing these two approaches,
and whether there are inflection points in performance as the number
of tokens or the number of simultaneous users grows.
I should add that the token column is either indexed or is the
primary key; in both cases it is unique, and a given token is not
guaranteed to exist in the table.
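For concreteness, the schema is roughly one of these two shapes
(again, names are just placeholders):

    -- Variant 1: the token itself is the primary key
    CREATE TABLE items (
        token   text PRIMARY KEY,
        payload text
    );

    -- Variant 2: the token is a separate, uniquely indexed column
    CREATE TABLE items (
        id      serial PRIMARY KEY,
        token   text UNIQUE,   -- UNIQUE creates a unique index on token
        payload text
    );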
Initially it seems that the WHERE IN (...) approach takes a turn for
the worse when the list gets very large.
It also seems to do comparatively worse when the number of tokens is
very small.
But I can't claim any scientifically sound basis for making this
distinction.
Any experiences someone would like to share?