Re: Out of Memory errors are frustrating as heck!

From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Justin Pryzby <pryzby(at)telsasoft(dot)com>, Gunther <raj(at)gusw(dot)net>, pgsql-performance(at)lists(dot)postgresql(dot)org, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Subject: Re: Out of Memory errors are frustrating as heck!
Date: 2019-04-20 20:53:56
Message-ID: 20190420205356.ksmdm7wlbtwazse2@development
Lists: pgsql-performance

On Sat, Apr 20, 2019 at 04:46:03PM -0400, Tom Lane wrote:
>Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
>> I think it's really a matter of underestimation, which convinces the planner
>> to hash the larger table. In this case, the table is 42GB, so it's
>> possible it actually works as expected. With work_mem = 4MB I've seen 32k
>> batches, and that's not that far off, I'd say. Maybe there are more common
>> values, but it does not seem like a very contrived data set.
>
>Maybe we just need to account for the per-batch buffers while estimating
>the amount of memory used during planning. That would force this case
>into a mergejoin instead, given that work_mem is set so small.
>

How would that solve the issue of underestimates like this one?
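Just to put rough numbers on those per-batch buffers, here's a simplified
sketch (not the actual ExecChooseHashTableSize logic; it just assumes one
8kB BLCKSZ buffer per batch file on each side of the join):

/* rough model: smallest power-of-two batch count so each batch of the
 * inner side fits in work_mem (ignoring per-tuple overhead and skew),
 * then the memory used by the batch file buffers alone */
#include <stdio.h>

int main(void)
{
    double inner_bytes = 42.0 * 1024 * 1024 * 1024; /* ~42GB inner side */
    double work_mem    = 4.0 * 1024 * 1024;         /* work_mem = 4MB   */
    double blcksz      = 8192;                      /* default BLCKSZ   */

    long nbatch = 1;
    while (nbatch * work_mem < inner_bytes)
        nbatch *= 2;

    /* one buffered temp file per batch, for both inner and outer side */
    double batch_buffers = 2.0 * nbatch * blcksz;

    printf("nbatch        = %ld\n", nbatch);
    printf("batch buffers = %.0f MB (vs. work_mem = %.0f MB)\n",
           batch_buffers / (1024 * 1024), work_mem / (1024 * 1024));
    return 0;
}

With a ~42GB inner side and work_mem = 4MB that's at least 16k batches,
i.e. ~256MB of batch buffers alone (and ~512MB at the 32k batches I
actually saw), which is orders of magnitude above work_mem. So accounting
for those buffers at planning time only helps when the row estimate is in
the right ballpark to begin with.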

regards

--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
