From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: Justin Pryzby <pryzby(at)telsasoft(dot)com>, Gunther <raj(at)gusw(dot)net>, pgsql-performance(at)lists(dot)postgresql(dot)org, Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Subject: Re: Out of Memory errors are frustrating as heck!
Date: 2019-04-20 20:46:03
Message-ID: 14066.1555793163@sss.pgh.pa.us
Lists: pgsql-performance
Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> writes:
> I think it's really a matter of underestimation, which convinces the planner
> to hash the larger table. In this case, the table is 42GB, so it's
> possible it actually works as expected. With work_mem = 4MB I've seen 32k
> batches, and that's not that far off, I'd say. Maybe there are more common
> values, but it does not seem like a very contrived data set.
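[A rough back-of-the-envelope check (an illustration only, not the planner's actual sizing logic; the 42GB and 4MB figures are simply the ones quoted above) shows why a batch count in that ballpark is plausible:

#include <stdio.h>

/*
 * Illustration: the inner side is split into enough batches that each
 * batch's hash table fits in work_mem, with the batch count rounded up
 * to a power of two.
 */
int
main(void)
{
    double  inner_bytes = 42.0 * 1024 * 1024 * 1024;    /* ~42GB inner side */
    double  work_mem    = 4.0 * 1024 * 1024;            /* work_mem = 4MB */
    long    nbatch      = 1;

    while (nbatch * work_mem < inner_bytes)
        nbatch <<= 1;                   /* round up to a power of two */

    printf("nbatch = %ld\n", nbatch);   /* 16384; per-tuple overhead in the
                                         * hash table pushes the real number
                                         * higher, so 32k is not far off */
    return 0;
}
]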
Maybe we just need to account for the per-batch buffers while estimating
the amount of memory used during planning. That would force this case
into a mergejoin instead, given that work_mem is set so small.
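[A sketch of what that accounting could look like (standalone illustrative code, not the actual planner logic; it assumes each batch keeps one BLCKSZ-sized temp-file buffer per join side, which matches how the hash join's BufFiles behave):

#include <stdio.h>

#define BLCKSZ 8192                     /* default PostgreSQL block size */

/*
 * Hypothetical helper: estimate what a hash join really uses, counting
 * not just the hash table (capped at work_mem) but also the per-batch
 * temp-file buffers -- one BLCKSZ buffer for the inner and one for the
 * outer file of every batch.
 */
static size_t
estimated_hashjoin_memory(size_t work_mem_bytes, long nbatch)
{
    return work_mem_bytes + (size_t) nbatch * 2 * BLCKSZ;
}

int
main(void)
{
    size_t  work_mem_bytes = 4UL * 1024 * 1024;     /* work_mem = 4MB */
    long    nbatch = 32768;                         /* as observed above */

    printf("estimated total: %zu MB\n",
           estimated_hashjoin_memory(work_mem_bytes, nbatch) / (1024 * 1024));
    /* ~516MB: the batch buffers alone are 512MB, which is the kind of
     * number that would make a mergejoin look cheaper if the planner
     * saw it. */
    return 0;
}
]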
regards, tom lane