From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Kevin Kempter <kevin(at)kevinkempterllc(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: long-running query - needs tuning
Date: 2007-08-23 18:56:19
Message-ID: 10905.1187895379@sss.pgh.pa.us
Lists: pgsql-performance
Kevin Kempter <kevin(at)kevinkempterllc(dot)com> writes:
> Merge Join  (cost=17118858.51..17727442.30 rows=155 width=90)
>   Merge Cond: ("outer".customer_id = "inner".customer_id)
>   ->  GroupAggregate  (cost=17118772.93..17727347.34 rows=155 width=8)
>         ->  Sort  (cost=17118772.93..17270915.95 rows=60857208 width=8)
>               Sort Key: con.customer_id
>               ->  Seq Scan on dat_user_contacts con  (cost=0.00..7332483.08 rows=60857208 width=8)
>   ->  Sort  (cost=85.57..88.14 rows=1026 width=74)
>         Sort Key: dat_customer_mailbox_counts.customer_id
>         ->  Seq Scan on dat_customer_mailbox_counts  (cost=0.00..34.26 rows=1026 width=74)
The planner, at least, thinks that all the time will go into the sort
step. Sorting 60M rows is gonna take awhile :-(. What PG version is
this? (8.2 has noticeably faster sort code than prior releases...)
What have you got work_mem set to?
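For reference, a minimal sketch of answering both questions from psql; the 256MB figure is an illustrative assumption, not a value suggested anywhere in this thread:

    -- Report the server version and the current sort memory budget
    -- (the 8.x default of 1MB is far too small to sort 60M rows).
    SELECT version();
    SHOW work_mem;

    -- Raise work_mem for this session only, no restart needed.
    -- 256MB is an assumed example; size it against available RAM
    -- and the number of concurrent sorts you expect.
    SET work_mem = '256MB';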
Bad as the sort is, I suspect that the real problem is the
count(distinct) operator, which is going to require *another*
sort-and-uniq step for each customer_id group --- and judging by
the rowcount estimates, at least some of those groups must be
pretty large. (AFAIR this time is not counted in the planner
estimates.) Again, work_mem would have an effect on how fast
that goes.
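To illustrate the extra sort-and-uniq work, one rewrite that can sometimes help is to de-duplicate once in a subquery and then count plain rows per group, so the separate per-group uniq pass disappears. This is only a sketch: the original query text is not shown in this message, and contact_id is an assumed column name.

    -- Hypothetical original shape:
    --   SELECT customer_id, count(DISTINCT contact_id)
    --   FROM dat_user_contacts GROUP BY customer_id;
    -- Equivalent form that does one sort-and-uniq up front instead
    -- of one per customer_id group (count(contact_id) preserves the
    -- NULL-skipping behavior of count(DISTINCT contact_id)):
    SELECT customer_id, count(contact_id) AS contact_count
    FROM (SELECT DISTINCT customer_id, contact_id
          FROM dat_user_contacts) AS dedup
    GROUP BY customer_id;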
regards, tom lane