From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: to-do item for explain analyze of hash aggregates?
Date: 2017-04-24 20:55:57
Message-ID: CAMkU=1waOykv0z6XXp_xPeqz+UBYshrc9=gHN5pfHrHQj0+NUA@mail.gmail.com
Lists: pgsql-hackers
On Mon, Apr 24, 2017 at 12:13 PM, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> On 04/24/2017 08:52 PM, Andres Freund wrote:
>
>> On 2017-04-24 11:42:12 -0700, Jeff Janes wrote:
>>
>>> The explain analyze of the hash step of a hash join reports something like this:
>>>
>>> ->  Hash  (cost=458287.68..458287.68 rows=24995368 width=37) (actual rows=24995353 loops=1)
>>>       Buckets: 33554432  Batches: 1  Memory Usage: 2019630kB
>>>
>>>
>>> Should the HashAggregate node also report on Buckets and Memory Usage? I
>>> would have found that useful several times. Is there some reason this is
>>> not wanted, or not possible?
>>>
>>
>> I've wanted that too. It's not impossible at all.
>>
>>
> Why wouldn't that be possible? We probably can't use exactly the same
> approach as Hash, because hashjoins use a custom hash table while hashagg
> uses dynahash IIRC. But why couldn't we measure the amount of memory by
> looking at the memory context, for example?
>
He said "not impossible", meaning it is possible.
I've added it to the wiki Todo page. (Hopefully that has not doomed it to be forgotten about.)
Cheers,
Jeff
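
As a rough, non-authoritative illustration of the memory-context idea Tomas raises above, a sketch might look something like the following. This is hypothetical code, not an actual patch: MemoryContextMemAllocated() and the unit-taking form of ExplainPropertyInteger() only appeared in later PostgreSQL releases, and the assumption that the hash table lives in aggstate->hashcontext->ecxt_per_tuple_memory is illustrative rather than confirmed.

```c
/*
 * Hypothetical sketch only, not an actual patch: report the memory used by
 * a HashAggregate's hash table by asking its memory context, along the
 * lines suggested above.  MemoryContextMemAllocated() and the explain.c
 * helpers exist in later releases, but the field holding the hash table's
 * context (here aggstate->hashcontext->ecxt_per_tuple_memory) is assumed.
 */
#include "postgres.h"

#include "commands/explain.h"
#include "nodes/execnodes.h"
#include "utils/memutils.h"

static void
show_hashagg_info(AggState *aggstate, ExplainState *es)
{
	MemoryContext hashcxt = aggstate->hashcontext->ecxt_per_tuple_memory;
	int64		memkB;

	/* Total bytes allocated in the hash table's context tree, rounded up to kB. */
	memkB = (MemoryContextMemAllocated(hashcxt, true) + 1023) / 1024;

	if (es->format != EXPLAIN_FORMAT_TEXT)
		ExplainPropertyInteger("Peak Memory Usage", "kB", memkB, es);
	else
	{
		appendStringInfoSpaces(es->str, es->indent * 2);
		appendStringInfo(es->str, "Memory Usage: " INT64_FORMAT "kB\n", memkB);
	}
}
```

Whether to sample the context once at the end of execution, as above, or to keep a running peak the way the Hash node does with its spacePeak counter, would be one of the details to sort out.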