From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "A Palmblad" <adampalmblad(at)yahoo(dot)ca>
Cc: "Postgres Performance" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: SLOW query with aggregates
Date: 2004-03-23 20:32:08
Message-ID: 28366.1080073928@sss.pgh.pa.us
Lists: pgsql-performance

"A Palmblad" <adampalmblad(at)yahoo(dot)ca> writes:
> GroupAggregate (cost=0.00..338300.34 rows=884 width=345) (actual
> time=86943.272..382718.104 rows=3117 loops=1)
> -> Merge Join (cost=0.00..93642.52 rows=1135610 width=345) (actual
> time=0.148..24006.748 rows=1120974 loops=1)

You do not have a planning problem here, and trying to change the plan
is a waste of time. The slowness is in the actual computation of the
aggregate functions; ergo the only way to speed it up is to change what
you're computing. What aggregates are you computing exactly, and over
what datatypes?
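
[Editor's note: a back-of-the-envelope check of this diagnosis, computed from the actual times in the plan fragment quoted above; this sketch is not part of the original message.]

```python
# The GroupAggregate's total actual time includes its input (the Merge
# Join), so the time spent in aggregation itself is the difference.
# Figures are taken verbatim from the EXPLAIN ANALYZE output above.
total_ms = 382718.104   # GroupAggregate, total actual time (ms)
join_ms = 24006.748     # Merge Join feeding it, total actual time (ms)
rows = 1120974          # rows actually fed into the aggregate step

agg_ms = total_ms - join_ms
print(f"aggregation: ~{agg_ms / 1000:.0f} s total, "
      f"~{agg_ms / rows * 1000:.0f} us per input row")
```

Roughly 359 of the 383 seconds go to the aggregation step, which is why changing the join plan cannot help much here.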
regards, tom lane