Speed up a query - using a temp table

From: yusuf0478(at)netscape(dot)net (David Wendy)
To: pgsql-admin(at)postgresql(dot)org
Subject: Speed up a query - using a temp table
Date: 2003-04-07 20:31:11
Message-ID: 535ddc4b.0304071231.3a23f419@posting.google.com
Lists: pgsql-admin

I have a query of the form:

select * from (
  select contact.id
       , contact.name
       , (select sum(detail.cost)
            from detail
           inner join myDetail using (detail_id)) as cost
    from contact
   inner join user using (user_id)
   where user_id = 1
) as OUTTABLE
where cost > 30;

The problem is that all the tables in the query are big, so it takes a
long time to get results from OUTTABLE (around 20 seconds). When I add
the where clause at the end (cost > 30), the query takes far too long
(around 200 seconds sometimes).
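
(The plan and the actual per-step times for each variant can be seen
with EXPLAIN ANALYZE, which runs the statement and reports them; the
names below are the same placeholders as in the query above.)

  explain analyze
  select * from (
    select contact.id
         , contact.name
         , (select sum(detail.cost)
              from detail
             inner join myDetail using (detail_id)) as cost
      from contact
     inner join user using (user_id)
     where user_id = 1
  ) as OUTTABLE
  where cost > 30;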

To make the query run faster, I thought of creating a temp table to
store OUTTABLE (see the sketch after this paragraph), and then just
filtering the temp table on cost > 30. That would improve the speed of
the query tremendously (total time was around 23 seconds). The problem
with that approach is that I'm writing an application, and more than
one user might need to run the query at the same time. (Different users
would get different results because they have different user IDs, and
other numbers in the query would differ.) Also, I can't just create and
drop temp tables in an enterprise application (we are not using EJBs in
our applications either); it seems like bad style.
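
Here is a rough sketch of what I mean (outtable_tmp is just a name I
made up; as far as I know, a temporary table in PostgreSQL is private
to the session that creates it, so concurrent sessions using the same
name would not collide):

  create temporary table outtable_tmp as
    select contact.id
         , contact.name
         , (select sum(detail.cost)
              from detail
             inner join myDetail using (detail_id)) as cost
      from contact
     inner join user using (user_id)
     where user_id = 1;           -- the current user's id

  select * from outtable_tmp where cost > 30;

  drop table outtable_tmp;        -- or let it disappear when the session ends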

Anyhow, how could I make this query faster, so that the rows of
OUTTABLE are retrieved first and the where clause (cost > 30) is only
applied afterwards as a filter?
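
Something like the following is what I am hoping for, if there is a way
to keep the planner from pushing the filter down into the subquery. (I
have read that adding OFFSET 0 to the inner query can act as such a
barrier in PostgreSQL, but I have not verified that on this schema.)

  select * from (
    select contact.id
         , contact.name
         , (select sum(detail.cost)
              from detail
             inner join myDetail using (detail_id)) as cost
      from contact
     inner join user using (user_id)
     where user_id = 1
    offset 0    -- reportedly keeps the subquery from being flattened into the outer query
  ) as OUTTABLE
  where cost > 30;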

Thanks in advance.
