<html><body><span style="font-family:Verdana; color:#000000; font-size:10pt;"><div>The SARS_ACTS table currently has 37,115,515 rows.<br><br>We have an index: idx_sars_acts_acts_run_id ON SARS_ACTS USING btree (sars_run_id)<br>We have a primary key constraint on the SARS_ACTS_RUN table: sars_acts_run_pkey PRIMARY KEY (id)<br><br>serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1_.ALGORITHM='SMAT';<br> QUERY PLAN<br>--------------------------------------------------------------------------------------------------------------------------<br>Aggregate (cost=4213952.17..4213952.18 rows=1 width=0)<br> -> Hash Join (cost=230573.06..4213943.93 rows=3296 width=0)<br> Hash Cond: (this_.SARS_RUN_ID = tr1_.ID)<br> -> Seq Scan on sars_acts this_ (cost=0.00..3844241.84 rows=37092284 width=8)<br> -> Hash (cost=230565.81..230565.81 rows=580 width=8)<br> -> Seq Scan on sars_acts_run tr1_ (cost=0.00..230565.81 rows=580 width=8)<br> Filter: ((algorithm)::text = 'SMAT'::text)<br>(7 rows)<br><br>This query takes approximately 5.3 minutes to complete, which is far too slow, and our users are not happy.<br><br>I added an index on the SARS_ACTS_RUN.ALGORITHM column, but it didn't improve the run time. <br>The planner simply replaced the "Filter:" with an "Index Scan:", lowering the estimated cost of scanning <br>the sars_acts_run table, but the overall run time remained the same. The bottleneck appears to be <br>the Seq Scan on the sars_acts table.<br><br> -> Seq Scan on sars_acts_run tr1_ (cost=0.00..230565.81 rows=580 width=8)<br> Filter: ((algorithm)::text = 'SMAT'::text)<br><br>Does anyone have suggestions about how to speed it up?</div></span></body></html>