From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Vijay Moses <vijay(dot)moses(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Four table join with million records - performance improvement?
Date: 2004-09-14 04:57:51
Message-ID: 2395.1095137871@sss.pgh.pa.us
Lists: pgsql-performance
Vijay Moses <vijay(dot)moses(at)gmail(dot)com> writes:
> Hi, I have four sample tables: ename, esal, edoj and esum.
> All of them have 1000000 records. I'm running the following
> query: select ename.eid, name, sal, doj, summary from
> ename,esal,edoj,esum where ename.eid=esal.eid and ename.eid=edoj.eid
> and ename.eid=esum.eid. It's a join of all four tables which returns
> all 1 million records. The eid field in ename is a Primary Key and the
> eid in all other tables are Foreign Keys. I have created an index on
> all the Foreign Keys. This query takes around 16 MINUTES to complete. Can
> this time be reduced?
The indexes will be completely useless for that sort of query; the
reasonable choices are sort/merge or hashjoin. For either one, your
best way to speed it up is to increase sort_mem.
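As an illustration, sort_mem can be raised for just the current session before running the query (on PostgreSQL 7.x the value is in kilobytes; the parameter was renamed work_mem in 8.0 — the 65536 figure below is only an example, not a recommendation from the original message):

```sql
-- Give sorts/hashes more memory for this session only (64 MB, illustrative)
SET sort_mem = 65536;

-- EXPLAIN ANALYZE shows whether the planner now picks a merge or hash join
EXPLAIN ANALYZE
SELECT ename.eid, name, sal, doj, summary
FROM ename, esal, edoj, esum
WHERE ename.eid = esal.eid
  AND ename.eid = edoj.eid
  AND ename.eid = esum.eid;
```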
regards, tom lane