From: Amit Langote <Langote_Amit_f8(at)lab(dot)ntt(dot)co(dot)jp>
To: Joe Conway <mail(at)joeconway(dot)com>, Andreas Seltenreich <andreas(dot)seltenreich(at)credativ(dot)de>, pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Excessive memory usage in multi-statement queries w/ partitioning
Date: 2019-05-27 06:41:50
Message-ID: fba6fe6e-5bd8-998e-526a-098ddbfd5a21@lab.ntt.co.jp
Lists: pgsql-hackers
Hi,
On 2019/05/24 21:18, Joe Conway wrote:
> On 5/24/19 1:47 AM, Amit Langote wrote:
>> I too have had similar thoughts on the matter. If the planner had built
>> all its subsidiary data structures in its own private context (or tree of
>> contexts) which is reset once a plan for a given query is built and passed
>> on, then there wouldn't be an issue of all of that subsidiary memory
>> leaking into MessageContext. However, the problem may really be that
>> we're subjecting the planner to use cases that it wasn't perhaps designed
>> to perform equally well under -- running it many times while handling the
>> same message. It is worsened by the fact that the query in question is
>> something that ought to have been documented as not well supported by the
>> planner; David has posted a documentation patch for that [1]. PG 12 has
>> alleviated the situation to a large degree, so you won't see the OOM
>> occurring for this query, but not for all queries unfortunately.
>
> I admittedly haven't followed this thread too closely, but if having 100
> partitions causes out of memory on pg11, that sounds like a massive
> regression to me.
You won't run out of memory if you are running just one query per message,
but that's not the case in this discussion. With multi-query submissions
like the one here, the memory taken up by parsing and planning *all* of the
queries accumulates in a single MessageContext, which can lead to OOM if
there are enough queries to push MessageContext past the available memory.
The only point I was trying to make is that reaching OOM of this sort is
easier with partitioning, because of the age-old behavior that planning
UPDATE/DELETE queries on inheritance parents (and hence partitioned tables)
needs memory that grows with the number of child tables / partitions.
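To illustrate the kind of submission involved, here is a minimal shell sketch (the table name "t" and the statement count are hypothetical). Many UPDATE statements are packed into one string; sent to the server as a single message (e.g. via psql -c "$q"), all of them are parsed and planned under one MessageContext, so their planner memory adds up before the context is reset:

```shell
# Sketch (hypothetical table "t"): build N UPDATE statements into one string.
# Submitted as a single message, e.g. with:  psql -c "$q"
# the server parses and plans all N statements in one MessageContext.
N=500
q=""
for i in $(seq 1 "$N"); do
  q="$q UPDATE t SET b = $i WHERE a = $i;"
done
# Count the statements packed into the single submission:
echo "$q" | grep -o 'UPDATE' | wc -l
```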
We fixed things in PG 12, at least for partitioning, so that as long as a
query affects only a small subset of the partitions present, planning it
uses a fixed amount of CPU and memory; increasing the number of partitions
no longer leads to explosive growth in the memory used. As you can tell,
though, that effort did nothing to improve the situation with multi-query
submissions.
Thanks,
Amit