From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Re: to-do item for explain analyze of hash aggregates?
Date: 2017-04-24 19:13:16
Message-ID: 2527f5cb-5992-ae66-f3ec-4aa2396065ec@2ndquadrant.com
Lists: pgsql-hackers
On 04/24/2017 08:52 PM, Andres Freund wrote:
> On 2017-04-24 11:42:12 -0700, Jeff Janes wrote:
>> The explain analyze of the hash step of a hash join reports something like
>> this:
>>
>> -> Hash (cost=458287.68..458287.68 rows=24995368 width=37) (actual
>> rows=24995353 loops=1)
>> Buckets: 33554432 Batches: 1 Memory Usage: 2019630kB
>>
>>
>> Should the HashAggregate node also report on Buckets and Memory Usage? I
>> would have found that useful several times. Is there some reason this is
>> not wanted, or not possible?
>
> I've wanted that too. It's not impossible at all.
>
Why wouldn't that be possible? We probably can't use exactly the same
approach as Hash, because hash joins use a custom hash table while
hashagg uses dynahash IIRC. But why couldn't we measure the amount of
memory by looking at the memory context, for example?
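
For illustration, a minimal sketch of that idea, assuming backend APIs
that only showed up in later releases (MemoryContextMemAllocated() and
the unit-taking ExplainPropertyInteger()); the function name
show_hashagg_memory() and passing the hash table's context in directly
are hypothetical, not anything in the tree today:

/*
 * Illustrative sketch only: report a HashAggregate's memory use in
 * EXPLAIN ANALYZE by asking the memory context holding the hash table
 * how much it has allocated.  show_hashagg_memory() is a made-up name;
 * MemoryContextMemAllocated() and this ExplainPropertyInteger()
 * signature exist only in later PostgreSQL releases.
 */
#include "postgres.h"

#include "commands/explain.h"
#include "utils/memutils.h"

static void
show_hashagg_memory(MemoryContext hash_cxt, ExplainState *es)
{
	/* total bytes allocated in the context and its children */
	Size		mem = MemoryContextMemAllocated(hash_cxt, true);

	/* report in kB, to match the Hash node's "Memory Usage" line */
	ExplainPropertyInteger("Memory Usage", "kB",
						   (int64) ((mem + 1023) / 1024), es);
}
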
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services