From: | "Zhang, Anna" <azhang(at)verisign(dot)com> |
---|---|
To: | pgsql-admin(at)postgresql(dot)org |
Subject: | tuning SQL |
Date: | 2002-01-29 15:57:01 |
Message-ID: | 5511D658682A7740BA295CCF1E1233A635A83F@vsvapostal2.bkup3 |
Lists: pgsql-admin
Hi,
I am running a query on PostgreSQL 7.1.3 on Red Hat 7.2 (2 CPUs, 1.5 GB RAM,
2-drive disk array).
select count(*) from contact a, contact_discard b
where a.contacthandle <> b.contacthandle;
Table contact has over 9 million rows and contact_discard has around 259,000
rows; both tables define contacthandle as their primary key.
Here is the query execution plan:
Aggregate  (cost=732021.97..732021.97 rows=1 width=24)
  ->  Hash Join  (cost=10035.10..731397.95 rows=249608 width=24)
        ->  Seq Scan on contact a  (cost=0.00..345002.95 rows=9330995 width=12)
        ->  Hash  (cost=9411.08..9411.08 rows=249608 width=12)
              ->  Seq Scan on contact_discard b  (cost=0.00..9411.08 rows=249608 width=12)
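One thing I am starting to wonder from the plan above: if what I really need is just the count of handles in contact that do not appear in contact_discard, would an outer-join rewrite along these lines be a better starting point? (Just an untested sketch; I am assuming that counting the non-matching handles is what I actually want here.)

select count(*)
from contact a
    left join contact_discard b on a.contacthandle = b.contacthandle
where b.contacthandle is null;

I suppose a not exists subquery against contact_discard.contacthandle would be another way to write the same thing.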
I started running the original query at 5:00 PM yesterday and it is still
running! My questions: why doesn't the query plan use an index scan for the
join? Can we force it to use an index (maybe along the lines of the sketch
after my settings below)? Or is there any other way to improve the
performance? We have tables even bigger than contact that we will also need
to join. Am I pushing postgres to the limit? Help!!!
shared_buffers = 65536
sort_mem = 32768
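Also, is something like the following the right way to test whether the planner would ever choose an index plan for this join? Just a rough sketch using the enable_seqscan setting, nothing I have verified:

-- discourage sequential scans for this session only, then look at the plan again
set enable_seqscan = off;

explain
select count(*) from contact a, contact_discard b
where a.contacthandle <> b.contacthandle;

-- restore the default afterwards
set enable_seqscan = on;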
Anna Zhang