From: | Andrey Lepikhov <a(dot)lepikhov(at)postgrespro(dot)ru> |
---|---|
To: | David Rowley <dgrowleyml(at)gmail(dot)com>, Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com> |
Cc: | PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org> |
Subject: | Re: Report planning memory in EXPLAIN ANALYZE |
Date: | 2023-08-14 02:52:47 |
Message-ID: | c98f0fbd-c50f-50ba-48bb-f7ab9a0b2122@postgrespro.ru |
Lists: | pgsql-hackers |
On 14/8/2023 06:53, David Rowley wrote:
> On Thu, 10 Aug 2023 at 20:33, Ashutosh Bapat
> <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com> wrote:
>> My point is what's relevant here is how much net memory planner asked
>> for.
>
> But that's not what your patch is reporting. All you're reporting is
> the difference in memory that's *currently* palloc'd from before and
> after the planner ran. If we palloc'd 600 exabytes then pfree'd it
> again, your metric won't change.
>
> I'm struggling a bit to understand your goals here. If your goal is
> to make a series of changes that reduces the amount of memory that's
> palloc'd at the end of planning, then your patch seems to suit that
> goal, but per the quote above, it seems you care about how many bytes
> are palloc'd during planning and your patch does not seem track that.
>
> With your patch as it is, to improve the metric you're reporting we
> could go off and do things like pfree Paths once createplan.c is done,
> but really, why would we do that? Just to make the "Planning Memory"
> metric look better doesn't seem like a worthy goal.
>
> Instead, if we reported the context's mem_allocated, then it would
> give us the flexibility to make changes to the memory context code to
> have the metric look better. It might also alert us to planner
> inefficiencies and problems with new code that may cause a large spike
> in the amount of memory that gets allocated. Now, I'm not saying we
> should add a patch that shows mem_allocated. I'm just questioning if
> your proposed patch meets the goals you're trying to achieve. I just
> suggested that you might want to consider something else as a metric
> for your memory usage reduction work.
Indeed, the current approach of reporting the final value of consumed
memory smooths out peaks in memory consumption. I recall examples, such
as massive million-element arrays or reparameterization with many
partitions, where the optimizer transiently consumes much additional
memory during planning.
Ideally, to dive into planner issues, we would have something like the
in-progress reporting that VACUUM provides, showing memory consumption
at each subquery and join level. But that looks like too much for
typical queries.
--
regards,
Andrey Lepikhov
Postgres Professional