From: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
To: Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>, Noah Misch <noah(at)leadboat(dot)com>, Amit Khandekar <amitdkhan(dot)pg(at)gmail(dot)com>, Alvaro Herrera from 2ndQuadrant <alvherre(at)alvh(dot)no-ip(dot)org>, Andres Freund <andres(at)anarazel(dot)de>, Juan José Santamaría Flecha <juanjo(dot)santamaria(at)gmail(dot)com>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>, Robert Haas <robertmhaas(at)gmail(dot)com>, Thomas Munro <thomas(dot)munro(at)gmail(dot)com>
Subject: Re: logical decoding : exceeded maxAllocatedDescs for .spill files
Date: 2020-02-18 05:47:42
Message-ID: CAA4eK1Lb78yPFSZda3EuUKxPVHZazNjzZAV3t=0e1gaA=a_u3A@mail.gmail.com
Lists: pgsql-hackers
On Fri, Feb 7, 2020 at 5:32 PM Kuntal Ghosh <kuntalghosh(dot)2007(at)gmail(dot)com> wrote:
>
> On Tue, Feb 4, 2020 at 2:40 PM Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote:
> >
> > I don't think we can just back-patch that part of the code, as it is
> > linked to the way we maintain a cache (~8MB) for frequently allocated
> > objects. See the comments around the definition of
> > max_cached_tuplebufs. But we can probably do something once we reach
> > such a limit: once we know that we have already allocated
> > max_cached_tuplebufs tuples of size MaxHeapTupleSize, we don't need
> > to allocate more of that size. Does this make sense?
> >
>
> Yeah, this makes sense. I've attached a patch that implements the
> same. It solves the problem reported earlier. This solution will at
> least slow down the process of running out of memory, even for very
> small tuples.
>
The patch seems to be in the right direction, and the test at my end
shows that it resolves the issue. One minor comment:
* those. Thus always allocate at least MaxHeapTupleSize. Note that tuples
* generated for oldtuples can be bigger, as they don't have out-of-line
* toast columns.
+ *
+ * But, if we've already allocated the memory required for building the
+ * cache later, we don't have to allocate memory more than the size of the
+ * tuple.
*/
How about modifying the existing comment to: "Most tuples are below
MaxHeapTupleSize, so we use a slab allocator for those. Thus always
allocate at least MaxHeapTupleSize until the slab cache is filled. Note
that tuples generated for oldtuples can be bigger, as they don't have
out-of-line toast columns."?
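
Just to make the idea concrete, below is a rough, self-contained sketch of
the size decision I have in mind. It is not the actual reorderbuffer.c
code; MAX_CACHED and cached_count are stand-ins for max_cached_tuplebufs
and rb->nr_cached_tuplebufs, and the exact trigger condition may differ in
your patch.

#include <stdio.h>
#include <stddef.h>

#define MAX_HEAP_TUPLE_SIZE 8160	/* stand-in for MaxHeapTupleSize */
#define MAX_CACHED 4096			/* stand-in for max_cached_tuplebufs */

static size_t cached_count = 0;		/* stand-in for rb->nr_cached_tuplebufs */

/*
 * While the slab cache can still grow, round small requests up to
 * MAX_HEAP_TUPLE_SIZE so the allocation remains reusable through the
 * cache.  Once the cache is full, allocate only what the tuple needs,
 * so small tuples no longer cost MAX_HEAP_TUPLE_SIZE each.
 */
static size_t
tuple_alloc_size(size_t tuple_len)
{
	size_t		alloc_len = tuple_len;

	if (alloc_len < MAX_HEAP_TUPLE_SIZE && cached_count < MAX_CACHED)
		alloc_len = MAX_HEAP_TUPLE_SIZE;

	return alloc_len;
}

int
main(void)
{
	printf("cache not yet full: %zu\n", tuple_alloc_size(100));	/* 8160 */
	cached_count = MAX_CACHED;
	printf("cache full:         %zu\n", tuple_alloc_size(100));	/* 100 */
	return 0;
}

With something like this, once the cache is filled a 100-byte tuple costs
roughly 100 bytes instead of MaxHeapTupleSize, which is what slows down
the OOM you reported.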
Have you tested this in 9.6 and 9.5?
--
With Regards,
Amit Kapila.
EnterpriseDB: http://www.enterprisedb.com