From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
---|---
To: | "Itagaki Takahiro" <itagaki(dot)takahiro(at)gmail(dot)com>
Cc: | pgsql-bugs(at)postgresql(dot)org
Subject: | Re: BUG #5608: array_agg() consumes too much memory
Date: | 2010-08-10 03:16:28
Message-ID: | 20060.1281410188@sss.pgh.pa.us
Lists: | pgsql-bugs pgsql-hackers |
"Itagaki Takahiro" <itagaki(dot)takahiro(at)gmail(dot)com> writes:
> I encountered an "out of memory" error in a large
> GROUP BY query with array_agg(). The server log
> was filled with the following messages:
> accumArrayResult: 8192 total in 1 blocks; 7800 free (0 chunks); 392 used
> Should we choose a smaller initial memory size in accumArrayResult()?
That's not really going to help much, considering that the planner's
estimated memory use per hash aggregate is only a few dozen bytes.
We have to get that estimate in sync with reality or the problem will
remain.
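For context: the 8192 in the quoted log lines is the initial block size of the private memory context that accumArrayResult() creates for each group's transition state. A minimal sketch of that setup, modeled on the 9.0-era memory-context API (illustrative, not the actual arrayfuncs.c source):

```c
#include "postgres.h"
#include "utils/memutils.h"

/*
 * Sketch: each array_agg transition state gets its own memory context,
 * as accumArrayResult() does on its first call for a group.
 */
static MemoryContext
make_accum_context(MemoryContext rcontext)
{
	/*
	 * ALLOCSET_DEFAULT_INITSIZE is 8 kB, so once the first element is
	 * appended the context holds one 8192-byte block: hence the
	 * "8192 total in 1 blocks; 7800 free (0 chunks); 392 used" line
	 * per group, with most of the block sitting idle.
	 */
	return AllocSetContextCreate(rcontext,
								 "accumArrayResult",
								 ALLOCSET_DEFAULT_MINSIZE,
								 ALLOCSET_DEFAULT_INITSIZE,
								 ALLOCSET_DEFAULT_MAXSIZE);
}
```

With many thousands of hashed groups, that is roughly 8 kB apiece against a planner estimate of a few dozen bytes, which is exactly the mismatch described above.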
Eventually it might be nice to have some sort of way to specify the
estimate to use for any aggregate function --- but for a near-term
fix maybe we should just hard-wire a special case for array_agg in
count_agg_clauses_walker(). I'd be inclined to leave the array_agg
code as-is and teach the planner to assume ALLOCSET_DEFAULT_INITSIZE
per array_agg aggregate.
regards, tom lane
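A hypothetical sketch of the hard-wired special case described above, i.e. what "assume ALLOCSET_DEFAULT_INITSIZE per array_agg aggregate" could look like inside count_agg_clauses_walker()'s transition-space accounting (the helper name, structure, and generic fallback are illustrative, not the committed patch):

```c
#include "postgres.h"
#include "nodes/primnodes.h"	/* Aggref */
#include "utils/fmgroids.h"		/* F_ARRAY_AGG */
#include "utils/memutils.h"		/* ALLOCSET_DEFAULT_INITSIZE */

/*
 * Illustrative only: estimate per-aggregate transition-state memory,
 * charging array_agg a full initial context block.
 */
static Size
agg_transition_space(Aggref *aggref, Size avgwidth)
{
	if (aggref->aggfnoid == F_ARRAY_AGG)
	{
		/*
		 * array_agg keeps its build state in a private memory context
		 * whose first block is ALLOCSET_DEFAULT_INITSIZE (8 kB), so
		 * assume that much per aggregate instead of a generic few
		 * dozen bytes.
		 */
		return ALLOCSET_DEFAULT_INITSIZE;
	}
	/* generic case: transition value width plus pass-by-ref overhead */
	return MAXALIGN(avgwidth) + 2 * sizeof(void *);
}
```

The planner would then fold that figure into its hash-table size estimate when deciding whether a hashed aggregate fits in work_mem, steering large GROUP BY queries like the one reported here toward a GroupAggregate plan instead.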