| From: | Andres Freund <andres(at)anarazel(dot)de> |
|---|---|
| To: | Jeff Davis <pgsql(at)j-davis(dot)com> |
| Cc: | pgsql-bugs(at)postgresql(dot)org |
| Subject: | Re: HashAgg degenerate case |
| Date: | 2024-11-08 16:41:32 |
| Message-ID: | 4tbbgdqqxvmy37fk75p36azkovyhrjhnul46lntj52jlobphf3@nxaqlaqupg2l |
| Lists: | pgsql-bugs |
Hi,
On 2024-11-05 16:59:56 -0800, Jeff Davis wrote:
> Fixing it seems fairly easy though: we just need to completely destroy
> the hash table each time and recreate it. Something close to the
> attached patch (rough).
That'll often be *way* slower though, both because acquiring and faulting in memory is far from free and because it'd often mean growing the hashtable back up from a small size again.
I think this patch would lead to way bigger regressions than the occasionally too-large hashtable does. I'm not saying that we shouldn't do something about that, but I don't think it can be this.
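To make the regrowth half of that concrete, here's a toy standalone C sketch (nothing from the tree; the toyhash structure, the 256-bucket starting size and the 70% growth trigger are all made up for illustration): recreating the table small for every batch repeats the double-and-rehash passes every time, while resetting it keeps the already-grown bucket array and pays them only once. The allocation / page-fault cost isn't modelled at all.

```c
/*
 * Toy sketch (not PostgreSQL code; everything here is made up) of the
 * regrowth cost: recreating the table at a small size repeats the
 * double-and-rehash work for every batch, resetting it does not.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

typedef struct
{
	size_t		nbuckets;	/* power-of-two capacity */
	size_t		nentries;
	long	   *keys;		/* open addressing, 0 = empty slot */
} toyhash;

static void toy_insert(toyhash *ht, long key, long *moves);

static void
toy_init(toyhash *ht, size_t nbuckets)
{
	ht->nbuckets = nbuckets;
	ht->nentries = 0;
	ht->keys = calloc(nbuckets, sizeof(long));
}

/* double the bucket array and move every existing entry into it */
static void
toy_grow(toyhash *ht, long *moves)
{
	size_t		oldn = ht->nbuckets;
	long	   *oldkeys = ht->keys;

	toy_init(ht, oldn * 2);
	for (size_t i = 0; i < oldn; i++)
		if (oldkeys[i] != 0)
		{
			toy_insert(ht, oldkeys[i], moves);
			(*moves)++;
		}
	free(oldkeys);
}

static void
toy_insert(toyhash *ht, long key, long *moves)
{
	size_t		pos;

	if (ht->nentries * 10 >= ht->nbuckets * 7)	/* grow at 70% fill */
		toy_grow(ht, moves);
	pos = (size_t) key % ht->nbuckets;
	while (ht->keys[pos] != 0)			/* linear probing */
		pos = (pos + 1) % ht->nbuckets;
	ht->keys[pos] = key;
	ht->nentries++;
}

/* keep the allocation, just forget the contents */
static void
toy_reset(toyhash *ht)
{
	memset(ht->keys, 0, ht->nbuckets * sizeof(long));
	ht->nentries = 0;
}

int
main(void)
{
	const int	nbatches = 100;
	const long	nkeys = 100000;
	toyhash		ht;
	long		moves = 0;

	/* destroy + recreate from a small size for every batch */
	for (int b = 0; b < nbatches; b++)
	{
		toy_init(&ht, 256);
		for (long k = 1; k <= nkeys; k++)
			toy_insert(&ht, k, &moves);
		free(ht.keys);
	}
	printf("recreate per batch: %ld entries rehashed\n", moves);

	/* grow once, then only reset between batches */
	moves = 0;
	toy_init(&ht, 256);
	for (int b = 0; b < nbatches; b++)
	{
		for (long k = 1; k <= nkeys; k++)
			toy_insert(&ht, k, &moves);
		toy_reset(&ht);
	}
	free(ht.keys);
	printf("reset per batch:    %ld entries rehashed\n", moves);
	return 0;
}
```

The recreate path ends up rehashing roughly nbatches times as many entries as the reset path, which only pays the growth on the first batch.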
Greetings,
Andres Freund