From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Matthew Wakeling <matthew(at)flymine(dot)org>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: RAID arrays and performance
Date: 2008-09-19 15:09:56
Message-ID: 21255.1221836996@sss.pgh.pa.us
Lists: pgsql-performance

Matthew Wakeling <matthew(at)flymine(dot)org> writes:
> In order to improve the performance, I made the system look ahead in the
> source, in groups of a thousand entries, so instead of running:
> SELECT * FROM table WHERE field = 'something';
> a thousand times, we now run:
> SELECT * FROM table WHERE field IN ('something', 'something else'...);
> with a thousand things in the IN. Very simple query. It does run faster
> than the individual queries, but it still takes quite a while. Here is an
> example query:

Your example shows the IN-list as being sorted, but I wonder whether you
actually are sorting the items in practice? If not, you might try that
to improve locality of access to the index.
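
For instance (using the hypothetical names mytable/myfield, since the real
ones aren't shown), the only difference is the ordering of the keys:

SELECT * FROM mytable WHERE myfield IN ('pear', 'apple', 'mango');  -- unsorted
SELECT * FROM mytable WHERE myfield IN ('apple', 'mango', 'pear');  -- sorted

With the keys in ascending order, successive index probes land on nearby
btree leaf pages rather than jumping back and forth across the index.
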
Also, parsing/planning time could be part of your problem here with 1000
things to look at. Can you adjust your client code to use a prepared
query? I'd try
SELECT * FROM table WHERE field = ANY($1::text[])
(or whatever the field datatype actually is) and then push the list
over as a single parameter value using array syntax. You might find
that it scales to much larger IN-lists that way.
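
At the SQL level, a minimal sketch of that approach (again with the
hypothetical names mytable/myfield, and assuming the column is text):

PREPARE lookup(text[]) AS
    SELECT * FROM mytable WHERE myfield = ANY($1);
EXECUTE lookup(ARRAY['something', 'something else']);

The statement is parsed and planned once at PREPARE time, so each EXECUTE
pays only for the lookups themselves rather than for re-planning a
1000-item IN-list.
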
regards, tom lane