From: Andres Freund <andres(at)anarazel(dot)de>
To: pgsql-hackers(at)postgresql(dot)org
Subject: Show hash / bitmap sizes in EXPLAIN ANALYZE?
Date: 2016-09-30 23:37:53
Message-ID: 20160930233753.cjsqswmhblb6wcml@alap3.anarazel.de
Hi,
At the moment, in-memory sort and hash nodes show their memory usage in
EXPLAIN ANALYZE:
│ -> Sort (cost=59.83..62.33 rows=1000 width=4) (actual time=0.512..0.632 rows=1000 loops=1) │
│ Sort Key: a.a │
│ Sort Method: quicksort Memory: 71kB │
│ -> Function Scan on generate_series a (cost=0.00..10.00 rows=1000 width=4) (actual time=0.165..0.305 rows=1000 loops=1) │
and
│ -> Hash (cost=10.00..10.00 rows=1000 width=4) (actual time=0.581..0.581 rows=1000 loops=1) │
│ Buckets: 1024 Batches: 1 Memory Usage: 44kB │
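(For reference, queries along these lines produce plans like the two
above; these are assumed reproductions rather than the exact statements,
so the costs and timings will differ:

-- quicksort fits in work_mem, so the Sort node reports
-- "Sort Method: quicksort  Memory: NNkB"
EXPLAIN (ANALYZE)
SELECT * FROM generate_series(1, 1000) a(a) ORDER BY a.a;

-- the inner Hash node reports "Buckets: ... Batches: ... Memory Usage: NNkB"
EXPLAIN (ANALYZE)
SELECT *
FROM generate_series(1, 1000) a(a)
JOIN generate_series(1, 1000) b(b) ON a.a = b.b;
)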
I think we should show something similar for bitmap scans, and for some
execGrouping.c users (at least hash aggregates, hashed subplans and
setops seem like good candidates).
For both categories it's useful to see how close to work_mem a node
ended up (both to understand how high to set it, and how much the data
can grow before work_mem is exceeded). For execGrouping.c users it's
also particularly interesting to see the actual memory usage, because
work_mem is only a very soft limit there.
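For illustration, a hash aggregate (a made-up example, not output from a
real patch) currently prints nothing about memory at all, even though
its hash table can grow well past work_mem:

-- the HashAggregate node shows the Group Key, but no memory figure
EXPLAIN (ANALYZE)
SELECT a % 10, count(*)
FROM generate_series(1, 1000) a(a)
GROUP BY a % 10;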
Does anybody see a reason not to add that?
Andres