From: Alvaro Herrera <alvherre(at)2ndquadrant(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, PostgreSQL mailing lists <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: installcheck failing on psql_crosstab
Date: 2016-06-06 15:28:37
Message-ID: 20160606152837.GA391408@alvherre.pgsql
Lists: pgsql-hackers

Tom Lane wrote:
> Alvaro Herrera <alvherre(at)2ndquadrant(dot)com> writes:
> > I can't imagine that the server is avoiding hash aggregation on a 1MB
> > work_mem limit for data that's a few dozen bytes. Is it really doing
> > that?
>
> Yup:
Aha. Thanks for testing.
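For the archives, the plan switch can be checked along these lines (an
illustrative sketch only; the column names and the query here are my
assumptions, and the actual query in psql_crosstab.sql is more involved):

  -- assumes ctv_data as created by src/test/regress/sql/psql_crosstab.sql
  SET work_mem = '1MB';
  EXPLAIN (COSTS OFF)
    SELECT v, h, count(*) FROM ctv_data GROUP BY v, h;
  -- without statistics the planner may pick GroupAggregate + Sort here;
  -- after ANALYZE ctv_data it keeps HashAggregate even at small work_mem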
> Now that you mention it, this does seem a bit odd, although I remember
> that there's a pretty substantial fudge factor in there when we have
> no statistics (which we don't in this example). If I ANALYZE ctv_data
> then it sticks to the hashagg plan all the way down to 64kB work_mem.
Hmm, so we could solve the complaint by adding an ANALYZE. I'm open to
that; other opinions?
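Concretely, that would be a one-liner along these lines (a sketch, assuming
it goes right after the table is populated in
src/test/regress/sql/psql_crosstab.sql, with the matching line added to the
expected output file):

  -- give the planner statistics so the plan choice stays stable under
  -- whatever work_mem the installation happens to run with
  ANALYZE ctv_data;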
--
Álvaro Herrera http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services