From: Greg Stark <gsstark(at)mit(dot)edu>
To: "A Palmblad" <adampalmblad(at)yahoo(dot)ca>
Cc: "Postgres Performance" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: SLOW query with aggregates
Date: 2004-03-24 03:42:20
Message-ID: 878yhqx65f.fsf@stark.xeocode.com
Lists: pgsql-performance
"A Palmblad" <adampalmblad(at)yahoo(dot)ca> writes:
> GroupAggregate (cost=0.00..338300.34 rows=884 width=345) (actual time=86943.272..382718.104 rows=3117 loops=1)
>   -> Merge Join (cost=0.00..93642.52 rows=1135610 width=345) (actual time=0.148..24006.748 rows=1120974 loops=1)
I think the reason you're getting a GroupAggregate here instead of a
HashAggregate is that the MergeJoin is already producing the records in the
desired order, so the GroupAggregate doesn't require an extra sort, i.e., it's
effectively free.
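Schematically the two plan shapes look something like this (the node names are
real Postgres plan nodes, but these trees are just illustrative sketches, not
your actual plans):

    -- sorted input: grouping piggybacks on the join's output order
    GroupAggregate
      ->  Merge Join
            ->  Index Scan on big_table
            ->  Index Scan on small_table

    -- unsorted input: grouping builds a hash table instead of sorting
    HashAggregate
      ->  Hash Join
            ->  Seq Scan on big_table
            ->  Hash
                  ->  Seq Scan on small_table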
You might be able to verify this by running the query with
enable_indexscan = off and/or enable_mergejoin = off; some combination of
those might get the planner to do a seqscan of the large table with a hash
join to the small table, followed by a HashAggregate.
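Something like this in the same psql session (enable_indexscan and
enable_mergejoin are the real planner settings; "your_query" just stands in
for the query above):

    SET enable_indexscan = off;
    SET enable_mergejoin = off;
    EXPLAIN ANALYZE your_query;
    -- put the planner back to normal afterwards
    RESET enable_indexscan;
    RESET enable_mergejoin;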
If you're reading a lot of the large table the seqscan could be a little
faster, though not by much. And given the accurate statistics estimates here
the planner may well have gotten this one right, in which case the seqscan is
actually slower. Can't hurt to verify it though.
--
greg