| From: | Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> |
|---|---|
| To: | Kohei KaiGai <kaigai(at)kaigai(dot)gr(dot)jp>, Simon Riggs <simon(at)2ndquadrant(dot)com> |
| Cc: | PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org> |
| Subject: | Re: DBT-3 with SF=20 got failed |
| Date: | 2015-08-20 02:15:47 |
| Message-ID: | 55D53853.9050106@2ndquadrant.com |
| Lists: | pgsql-hackers |
Hello KaiGai-san,
On 08/19/2015 03:19 PM, Kohei KaiGai wrote:
> Unless we have a fail-safe mechanism for the case where the planner
> estimates a much larger number of tuples than are actually needed, a
> bogus estimate will consume a massive amount of RAM. That's a bad
> side effect.
> My previous patch didn't pay attention to this scenario, so it needs
> to be revised.
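To make the failure mode concrete, here is a minimal C sketch of the kind of fail-safe described above: clamping the planner's row estimate so the initial hash table cannot be sized beyond what work_mem allows. The function and parameter names (clamp_hash_rows, work_mem_bytes, tuple_width) are hypothetical and not taken from any actual patch.

```c
/*
 * Hypothetical sketch only -- not the actual patch. Clamp the
 * planner's row estimate so the initial hash table allocation can
 * never exceed what work_mem allows, even if the estimate is
 * wildly off.
 */
#include <stddef.h>

static double
clamp_hash_rows(double estimated_rows, size_t work_mem_bytes,
                size_t tuple_width)
{
    /* The most tuples that could possibly fit within work_mem. */
    double max_rows = (double) work_mem_bytes / (double) tuple_width;

    /* Never size the initial table for more rows than can fit. */
    return (estimated_rows > max_rows) ? max_rows : estimated_rows;
}
```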
I agree we need to put a few more safeguards in there (e.g. make sure we
don't overflow int when counting the buckets, which may happen with the
amounts of work_mem we'll soon see in the wild).
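To illustrate, here is a minimal sketch of one way such an overflow guard could look; the choose_nbuckets() helper is hypothetical and is not the actual ExecChooseHashTableSize code. The idea is to do the arithmetic in a wider type and clamp before casting down to int.

```c
/*
 * Hypothetical sketch only -- not the actual executor code. Compute
 * the bucket count in a 64-bit type and clamp it before the final
 * cast to int, so a very large tuple estimate (or work_mem) cannot
 * overflow the int bucket count.
 */
#include <stdint.h>
#include <limits.h>

static int
choose_nbuckets(double ntuples)
{
    double   dbuckets = ntuples;   /* aim for ~one tuple per bucket */
    int64_t  nbuckets;
    int64_t  result = 1;

    /* Clamp before casting: INT_MAX / 2 leaves room to round up. */
    if (dbuckets > (double) (INT_MAX / 2))
        dbuckets = (double) (INT_MAX / 2);

    nbuckets = (int64_t) dbuckets;

    /* Round up to the next power of 2, a common requirement for
     * hash table bucket counts. */
    while (result < nbuckets)
        result <<= 1;

    return (int) result;
}
```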
But I don't think we should make any extensive changes to how we size
the hash table - that's not something to do in a bugfix.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services