From: Steve Atkins <steve(at)blighty(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Re: optimizing large query with IN (...)
Date: 2004-03-10 17:11:04
Message-ID: 20040310171104.GA1666@gp.word-to-the-wise.com
Lists: pgsql-performance
On Wed, Mar 10, 2004 at 02:02:23PM -0300, Marcus Andree S. Magalhaes wrote:
> Hmm... from the 'performance' point of view, since the data comes from
> a quite complex select statement, Isn't it better/quicker to have this
> select replaced by a select into and creating a temporary database?
Definitely - why loop the data into the application and back out again
if you don't need to?
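[Editor's note: the point above — populate the temp table server-side rather than round-tripping the ids through the application — can be sketched with an INSERT ... SELECT / CREATE TABLE AS. A minimal illustration using SQLite via Python's stdlib (the thread assumes PostgreSQL 7.4; table and column names here are hypothetical):]

```python
import sqlite3

conn = sqlite3.connect(":memory:")

# Hypothetical source table standing in for the "quite complex select".
conn.execute("CREATE TABLE source (id INTEGER, flag INTEGER)")
conn.executemany("INSERT INTO source VALUES (?, ?)",
                 [(i, i % 2) for i in range(10)])

# Populate the temp table entirely inside the database -- no id list
# ever travels out to the application and back.
conn.execute("CREATE TEMP TABLE t_ids AS SELECT id FROM source WHERE flag = 1")

n = conn.execute("SELECT count(*) FROM t_ids").fetchone()[0]
print(n)  # odd ids 1,3,5,7,9 -> 5
```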
> > The problem, as I understand it, is that 7.4 introduced massive
> > improvements in handling moderately large in() clauses, as long as they
> > can fit in sort_mem, and are provided by a subselect.
> >
> > So, creating a temp table with all the values in it and using in() on
> > the temp table may be a win:
> >
> > begin;
> > create temp table t_ids(id int);
> > insert into t_ids(id) values (123); <- repeat a few hundred times
> > select * from maintable where id in (select id from t_ids);
> > ...
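[Editor's note: the quoted temp-table recipe — create `t_ids`, insert the id values, then query with `IN (subselect)` so the planner can hash it — can be demonstrated end to end. A sketch using SQLite via Python's stdlib purely to show the pattern; the thread's context is PostgreSQL 7.4, and `maintable`'s contents here are made up:]

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE maintable (id INTEGER PRIMARY KEY, val TEXT)")
conn.executemany("INSERT INTO maintable VALUES (?, ?)",
                 [(i, f"row{i}") for i in range(1000)])

# Stand-in for the "few hundred" ids the application has collected.
wanted = [3, 42, 123, 999]

# The recipe from the quoted message: temp table + IN (subselect)
# instead of one huge literal IN (...) list.
conn.execute("CREATE TEMP TABLE t_ids (id INTEGER)")
conn.executemany("INSERT INTO t_ids (id) VALUES (?)", [(i,) for i in wanted])

rows = conn.execute(
    "SELECT id, val FROM maintable WHERE id IN (SELECT id FROM t_ids) ORDER BY id"
).fetchall()
print(rows)  # [(3, 'row3'), (42, 'row42'), (123, 'row123'), (999, 'row999')]
```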
Cheers,
Steve