| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Patrick Hatcher <PHatcher(at)macys(dot)com> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: Slow query. Any way to speed up? |
| Date: | 2006-01-06 20:24:09 |
| Message-ID: | 22453.1136579049@sss.pgh.pa.us |
| Lists: | pgsql-performance |
Patrick Hatcher <PHatcher(at)macys(dot)com> writes:
> -> Seq Scan on cdm_ddw_tran_item a1
> (cost=0.00..1547562.88 rows=8754773 width=23) (actual
> time=14.219..535704.691 rows=10838135 loops=1)
> Filter: ((((appl_id)::text = 'MCOM'::text)
> OR ((appl_id)::text = 'NET'::text)) AND ((tran_typ_id = 'S'::bpchar) OR
> (tran_typ_id = 'R'::bpchar)))
The bulk of the time is evidently going into this step. You didn't say
how big cdm_ddw_tran_item is, but unless it's in the billion-row range,
an indexscan isn't going to help for pulling out 10 million rows.
This may be about the best you can do :-(
If it *is* in the billion-row range, PG 8.1's bitmap indexscan facility
would probably help.
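
[Editor's note: a hedged sketch of the plan shape Tom is referring to. The index and query below are hypothetical illustrations using column names from the quoted plan; the original thread does not show the actual DDL or full query.]

```sql
-- Hypothetical: with separate btree indexes on the filter columns,
-- e.g. CREATE INDEX ON cdm_ddw_tran_item (appl_id);
--      CREATE INDEX ON cdm_ddw_tran_item (tran_typ_id);
-- PG 8.1 can combine them for an OR condition via BitmapOr,
-- producing a plan shaped roughly like:
--
--   Bitmap Heap Scan on cdm_ddw_tran_item a1
--     Recheck Cond: (((appl_id)::text = 'MCOM'::text) OR
--                    ((appl_id)::text = 'NET'::text))
--     ->  BitmapOr
--           ->  Bitmap Index Scan on cdm_ddw_tran_item_appl_id_idx
--                 Index Cond: ((appl_id)::text = 'MCOM'::text)
--           ->  Bitmap Index Scan on cdm_ddw_tran_item_appl_id_idx
--                 Index Cond: ((appl_id)::text = 'NET'::text)
EXPLAIN ANALYZE
SELECT *
FROM   cdm_ddw_tran_item a1
WHERE  appl_id IN ('MCOM', 'NET')
  AND  tran_typ_id IN ('S', 'R');
```

The bitmap approach pays off only when the matching rows are a small fraction of the table: the bitmap sorts the matching heap pages so they are visited sequentially, but fetching 10 million of ~11 million rows still touches nearly every page, which is why the seq scan is hard to beat here.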
regards, tom lane