From: Simon Riggs <simon(at)2ndQuadrant(dot)com>
To: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: DBT-3 with SF=20 got failed
Date: 2015-08-19 11:12:34
Message-ID: CANP8+jJYDFQU3-A1YG8oTRcX6zmN9cn7wJUSDRShz+pdXeUdVw@mail.gmail.com
Lists: pgsql-hackers
On 12 June 2015 at 00:29, Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com> wrote:
> I see two ways to fix this:
>
> (1) enforce the 1GB limit (probably better for back-patching, if that's
> necessary)
>
> (2) make it work with hash tables over 1GB
>
> I'm in favor of (2) if there's a good way to do that. It seems a bit
> stupid not to be able to use a fast hash table because there's some
> artificial limit. Are there any fundamental reasons not to use the
> MemoryContextAllocHuge fix proposed by KaiGai-san?
If there are no objections, I will apply the patch for (2) to HEAD and
backpatch to 9.5.
--
Simon Riggs                http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services