The SARS_ACTS table currently has 37,115,515 rows.

We have an index on the join column:  idx_sars_acts_acts_run_id ON SARS_ACTS USING btree (sars_run_id)
and a primary key constraint on the SARS_ACTS_RUN table:  sars_acts_run_pkey PRIMARY KEY (id)
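
For reference, the DDL is roughly equivalent to the following (reconstructed from the definitions above; column types and other constraints omitted):

  -- index on the join column of the large table
  CREATE INDEX idx_sars_acts_acts_run_id ON SARS_ACTS USING btree (sars_run_id);
  -- primary key on the small table
  ALTER TABLE SARS_ACTS_RUN ADD CONSTRAINT sars_acts_run_pkey PRIMARY KEY (id);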

serverdb=# explain select count(*) as y0_ from SARS_ACTS this_ inner join SARS_ACTS_RUN tr1_ on this_.SARS_RUN_ID=tr1_.ID where tr1_.ALGORITHM='SMAT';
                                                    QUERY PLAN
--------------------------------------------------------------------------------------------------------------------------
Aggregate  (cost=4213952.17..4213952.18 rows=1 width=0)
  -> Hash Join  (cost=230573.06..4213943.93 rows=3296 width=0)
       Hash Cond:  (this_.SARS_RUN_ID=tr1_.ID)
       ->  Seq Scan on sars_acts this_  (cost=0.00..3844241.84 rows=37092284 width=8)
       ->  Hash  (cost=230565.81..230565.81 rows=580 width=8)
              -> Seq Scan on sars_acts_run tr1_  (cost=0.00..230565.81 rows=580 width=8)
                   Filter:  ((algorithm)::text = 'SMAT'::text)
(7 rows)
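
The plan above is from a plain EXPLAIN, so the row counts are only estimates. If it helps, I can post output from the statement below (assuming our PostgreSQL version supports the BUFFERS option; note it actually runs the query, so it takes the full 5+ minutes):

  EXPLAIN (ANALYZE, BUFFERS)
  SELECT count(*) AS y0_
  FROM SARS_ACTS this_
  INNER JOIN SARS_ACTS_RUN tr1_ ON this_.SARS_RUN_ID = tr1_.ID
  WHERE tr1_.ALGORITHM = 'SMAT';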

This query takes approximately 5.3 minutes to complete, which is very slow, and our users are not happy.

Does anyone have suggestions about how to speed it up?

thanks