From: Andres Freund <andres(at)anarazel(dot)de>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: to-do item for explain analyze of hash aggregates?
Date: 2017-04-24 21:07:54
Message-ID: 20170424210754.dwjgbf2w5ba2ejk6@alap3.anarazel.de
Lists: pgsql-hackers
On 2017-04-24 21:13:16 +0200, Tomas Vondra wrote:
> On 04/24/2017 08:52 PM, Andres Freund wrote:
> > On 2017-04-24 11:42:12 -0700, Jeff Janes wrote:
> > > The explain analyze of the hash step of a hash join reports something like
> > > this:
> > >
> > > -> Hash (cost=458287.68..458287.68 rows=24995368 width=37) (actual
> > > rows=24995353 loops=1)
> > > Buckets: 33554432 Batches: 1 Memory Usage: 2019630kB
> > >
> > >
> > > Should the HashAggregate node also report on Buckets and Memory Usage? I
> > > would have found that useful several times. Is there some reason this is
> > > not wanted, or not possible?
> >
> > I've wanted that too. It's not impossible at all.
> Why wouldn't that be possible? We probably can't use exactly the same
> approach as Hash, because hash joins use a custom hash table while hashagg
> uses dynahash IIRC. But why couldn't we measure the amount of memory by
> looking at the memory context, for example?
HashAgg doesn't use dynahash anymore (but a simplehash.h style table),
but that should actually make it simpler, not harder.
- Andres
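
For context, a minimal sketch of the kind of measurement being discussed, assuming the tuplehash table that execGrouping.c builds via simplehash.h and the `size`/`data` layout that simplehash.h generates. The helper name is made up for illustration; this is not code from the thread:

```c
#include "postgres.h"
#include "nodes/execnodes.h"	/* TupleHashTable, TupleHashEntryData */

/*
 * Hypothetical helper: estimate the memory held by the simplehash-generated
 * tuplehash table itself.  simplehash.h stores entries in one contiguous
 * "data" array of "size" buckets, so the table's own footprint is just the
 * struct plus that array.
 */
static Size
tuplehash_memory_estimate(TupleHashTable hashtable)
{
	tuplehash_hash *tb = hashtable->hashtab;

	return sizeof(tuplehash_hash) + tb->size * sizeof(TupleHashEntryData);
}
```

The per-group state (stored MinimalTuples and transition values) lives in separate memory contexts, so a complete figure would also add those contexts' allocations, along the lines of Tomas's suggestion above.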