From: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
To: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: installcheck failing on psql_crosstab
Date: 2016-06-07 03:31:59
Message-ID: CAB7nPqSf+Z-tOuudUHGsbO=LD9GKR7Add53mOMUxZNh9g3gWVw@mail.gmail.com
Lists: pgsql-hackers
On Tue, Jun 7, 2016 at 12:28 AM, Alvaro Herrera
<alvherre(at)2ndquadrant(dot)com> wrote:
> Tom Lane wrote:
>> Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> writes:
>
>> > I can't imagine that the server is avoiding hash aggregation on a 1MB
>> > work_mem limit for data that's a few dozen bytes. Is it really doing
>> > that?
>>
>> Yup:
>
> Aha. Thanks for testing.
>
>> Now that you mention it, this does seem a bit odd, although I remember
>> that there's a pretty substantial fudge factor in there when we have
>> no statistics (which we don't in this example). If I ANALYZE ctv_data
>> then it sticks to the hashagg plan all the way down to 64kB work_mem.
>
> Hmm, so we could solve the complaint by adding an ANALYZE. I'm open to
> that; other opinions?
We could just set work_mem to 64kB around the affected queries and then reset it.
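In the regression script that would look roughly like this (a sketch only;
which of the ctv_data queries actually need the pinning is an assumption on
my part):

    -- Pin work_mem so the plan choice does not depend on the installed
    -- server's configuration, then restore the previous value.
    SET work_mem TO '64kB';
    -- ... the ctv_data queries whose expected output assumes the hashagg plan ...
    RESET work_mem;

The alternative Alvaro mentions would be a one-time "ANALYZE ctv_data;" right
after the table is populated, so the planner has statistics to work with.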
--
Michael