From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Heikki Linnakangas <hlinnakangas(at)vmware(dot)com>
Cc: Tomas Vondra <tv(at)fuzzy(dot)cz>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: bad estimation together with large work_mem generates terrible slow hash joins
Date: 2014-09-10 18:31:14
Message-ID: CA+TgmoYsXfrFeFoyz7SCqA7gi6nF6+qH8OGMvZM7_yovouWQrw@mail.gmail.com
Lists: pgsql-hackers
On Wed, Sep 10, 2014 at 2:25 PM, Heikki Linnakangas
<hlinnakangas(at)vmware(dot)com> wrote:
> The dense-alloc-v5.patch looks good to me. I have committed that with minor
> cleanup (more comments below). I have not looked at the second patch.
Gah. I was in the middle of doing this. Sigh.
>> * the chunk size is 32kB (instead of 16kB), and we're using a 1/4
>> threshold for 'oversized' items
>>
>> We need the threshold to be >=8kB, to trigger the special case
>> within AllocSet. The 1/4 rule is consistent with ALLOC_CHUNK_FRACTION.
>
> Should we care about the fact that if there are only a few tuples, we will
> nevertheless waste 32kB of memory for the chunk? I guess not, but I thought
> I'd mention it. The smallest allowed value for work_mem is 64kB.
I think we should change the threshold here to 1/8th. The worst-case
memory wastage as-is is ~32k/5 > 6k: tuples just over 32k/5 bytes fit
only four to a chunk, leaving almost a fifth of each chunk unused (see
the sketches below).
--
Robert Haas
EnterpriseDB: http://www.enterprisedb.com
The Enterprise PostgreSQL Company