| From: | Chris <dmagick(at)gmail(dot)com> |
|---|---|
| To: | Kevin Kempter <kevink(at)consistentstate(dot)com> |
| Cc: | pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: improving my query plan |
| Date: | 2009-08-21 01:21:45 |
| Message-ID: | 4A8DF6A9.9040705@gmail.com |
| Lists: | pgsql-performance |
Kevin Kempter wrote:
> Hi all;
>
>
> I have a simple query against two very large tables ( > 800 million rows
> in the url_hits_category_jt table and 9.2 million in the url_hits_klk1
> table )
>
>
> I have indexes on the join columns and I've run an explain.
> also I've set the default statistics to 250 for both join columns. I get
> a very high overall query cost:
If you had an extra where condition it might be different, but you're
just returning all the rows from both tables that match up, so a
sequential scan is going to be the fastest way anyway.
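For illustration, here's a sketch of the comparison being described. The original query wasn't quoted, so the join column (`klk1_id`) and the filter column (`category_id`) are assumed names:

```sql
-- Joining the full tables with no filter: every row must be read,
-- so the planner will favor sequential scans regardless of indexes.
EXPLAIN
SELECT *
FROM url_hits_category_jt jt
JOIN url_hits_klk1 k ON k.id = jt.klk1_id;   -- join column is assumed

-- With a selective WHERE clause, the indexes on the join columns
-- can pay off and the planner may switch to index scans.
EXPLAIN
SELECT *
FROM url_hits_category_jt jt
JOIN url_hits_klk1 k ON k.id = jt.klk1_id
WHERE jt.category_id = 42;                    -- hypothetical filter
```

A high estimated cost on the first plan isn't itself a problem: reading 800 million rows is simply expensive, whichever access path is used.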
--
Postgresql & php tutorials
http://www.designmagick.com/