From: Jeff Davis <pgsql(at)j-davis(dot)com>
To: Taylor Vesely <tvesely(at)pivotal(dot)io>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Adam Lee <ali(at)pivotal(dot)io>, Melanie Plageman <mplageman(at)pivotal(dot)io>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Memory-Bounded Hash Aggregation
Date: 2019-11-27 22:58:04
Message-ID: e52707d4665e32867fb6dd181825ef15f853b8bf.camel@j-davis.com
Lists: pgsql-hackers
On Wed, 2019-08-28 at 12:52 -0700, Taylor Vesely wrote:
> Right now the patch always initializes 32 spill partitions. Have you
> given any thought into how to intelligently pick an optimal number of
> partitions yet?
Attached a new patch that addresses this. The logic, sketched in code
after the list below, is:
1. Divide hash table memory used by the number of groups in the hash
table to get the average memory used per group.
2. Multiply by the number of groups spilled -- which I pessimistically
estimate as the number of tuples spilled -- to get the total amount of
memory that we'd like to have to process all spilled tuples at once.
3. Divide the desired amount of memory by work_mem to get the number of
partitions we'd like to have such that each partition can be processed
in work_mem without spilling.
4. Apply a few sanity checks, fudge factors, and limits.
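Below is a minimal, self-contained C sketch of the calculation above.
The function name, parameter names, fudge factor, and partition limits
are illustrative assumptions for exposition, not the identifiers or
constants used in the attached patch.

/*
 * Sketch of the spill-partition heuristic described in steps 1-4.
 * All names and constants here are hypothetical.
 */
#include <math.h>

int
estimate_spill_partitions(double hash_mem_used,   /* bytes used by hash table */
                          double ngroups,         /* groups currently in table */
                          double ntuples_spilled, /* tuples written to spill files */
                          double work_mem_bytes)  /* work_mem, in bytes */
{
    /* Step 1: average memory consumed per group so far. */
    double mem_per_group = hash_mem_used / (ngroups > 0 ? ngroups : 1);

    /* Step 2: pessimistically treat every spilled tuple as a distinct group. */
    double mem_wanted = mem_per_group * ntuples_spilled;

    /* Step 3: partitions needed so each one fits within work_mem. */
    double npartitions = ceil(mem_wanted / work_mem_bytes);

    /* Step 4: fudge factor and sanity limits (values are made up). */
    npartitions *= 1.5;
    if (npartitions < 4)
        npartitions = 4;
    if (npartitions > 256)
        npartitions = 256;

    return (int) npartitions;
}

The pessimism in step 2 errs toward more partitions, which seems like
the right direction: too many partitions mostly costs extra spill-file
buffers, while too few means a partition can still exceed work_mem and
must be spilled again recursively.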
Using this runtime information should be substantially better than
relying on planner estimates and projections.
Additionally, I removed some branches from the common path. I think I
still have more work to do there.
I also rebased, of course, and fixed a few other things.
Regards,
Jeff Davis
Attachment: hashagg-20191127.diff (text/x-patch, 69.2 KB)