From: Sergey Konoplev <gray(dot)ru(at)gmail(dot)com>
To: Korisk <Korisk(at)yandex(dot)ru>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: hash aggregation
Date: 2012-10-10 21:30:09
Message-ID: CAL_0b1tHHHGz=umGWm=FXkSkt+yCJh-Cb10BJLoD-i3mwvNeaQ@mail.gmail.com
Lists: pgsql-performance
On Wed, Oct 10, 2012 at 9:09 AM, Korisk <Korisk(at)yandex(dot)ru> wrote:
> Hello! Is it possible to speed up the plan?
> Sort (cost=573977.88..573978.38 rows=200 width=32) (actual time=10351.280..10351.551 rows=4000 loops=1)
> Output: name, (count(name))
> Sort Key: hashcheck.name
> Sort Method: quicksort Memory: 315kB
> -> HashAggregate (cost=573968.24..573970.24 rows=200 width=32) (actual time=10340.507..10341.288 rows=4000 loops=1)
> Output: name, count(name)
> -> Seq Scan on public.hashcheck (cost=0.00..447669.16 rows=25259816 width=32) (actual time=0.019..2798.058 rows=25259817 loops=1)
> Output: id, name, value
> Total runtime: 10351.989 ms
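
[For reference: the query itself was not quoted in the thread, but the
plan above (HashAggregate over a Seq Scan, then a Sort on name) is
consistent with a grouped count along these lines -- a reconstruction,
not the original statement:

  SELECT name, count(name)
  FROM public.hashcheck
  GROUP BY name
  ORDER BY name;
]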
AFAIU there is no query optimization solution for this: to produce the
counts, the whole 25M-row table has to be scanned.

It may be worth creating a table hashcheck_stat (name, cnt) and
incrementing/decrementing the cnt values with triggers if you need to
get the counts fast, as in the sketch below.
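
A minimal sketch of that approach (the function and trigger names are
illustrative, and I assume the hashcheck/name columns from your plan):

  -- Summary table: one row per distinct name.
  CREATE TABLE hashcheck_stat (
      name text PRIMARY KEY,
      cnt  bigint NOT NULL DEFAULT 0
  );

  -- One-time backfill from the existing data.
  INSERT INTO hashcheck_stat (name, cnt)
  SELECT name, count(name) FROM hashcheck GROUP BY name;

  -- Keep hashcheck_stat in sync on insert/delete.
  CREATE OR REPLACE FUNCTION hashcheck_stat_maintain() RETURNS trigger AS $$
  BEGIN
      IF TG_OP = 'INSERT' THEN
          UPDATE hashcheck_stat SET cnt = cnt + 1 WHERE name = NEW.name;
          IF NOT FOUND THEN
              -- First occurrence of this name; note this branch can
              -- race under concurrent inserts of a brand-new name.
              INSERT INTO hashcheck_stat (name, cnt) VALUES (NEW.name, 1);
          END IF;
      ELSIF TG_OP = 'DELETE' THEN
          UPDATE hashcheck_stat SET cnt = cnt - 1 WHERE name = OLD.name;
      END IF;
      RETURN NULL;  -- AFTER row trigger: return value is ignored
  END;
  $$ LANGUAGE plpgsql;

  CREATE TRIGGER hashcheck_stat_trg
  AFTER INSERT OR DELETE ON hashcheck
  FOR EACH ROW EXECUTE PROCEDURE hashcheck_stat_maintain();

Then "SELECT name, cnt FROM hashcheck_stat ORDER BY name" replaces the
aggregate. Note this sketch does not handle UPDATEs that change name,
and the write overhead per row is the price for fast reads.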
--
Sergey Konoplev
a database and software architect
http://www.linkedin.com/in/grayhemp
Jabber: gray(dot)ru(at)gmail(dot)com Skype: gray-hemp Phone: +14158679984