| From: | "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com> |
|---|---|
| To: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
| Cc: | "Relyea, Mike" <Mike(dot)Relyea(at)xerox(dot)com>, pgsql-general(at)postgresql(dot)org, Qingqing Zhou <zhouqq(at)cs(dot)toronto(dot)edu> |
| Subject: | Re: Out of memory error in 8.1.0 Win32 |
| Date: | 2006-06-22 18:34:46 |
| Message-ID: | 449AE2C6.5080601@blackducksoftware.com |
| Lists: | pgsql-general pgsql-hackers |
Tom Lane wrote:
>
> Misestimated hash aggregation, perhaps? What is the query and what does
> EXPLAIN show for it? What have you got work_mem set to?
```
oom_test=> \d oom_tab
     Table "public.oom_tab"
 Column |  Type   | Modifiers
--------+---------+-----------
 val    | integer |

oom_test=> explain select val,count(*) from oom_tab group by val;
                               QUERY PLAN
-------------------------------------------------------------------------
 HashAggregate  (cost=1163446.13..1163448.63 rows=200 width=4)
   ->  Seq Scan on oom_tab  (cost=0.00..867748.42 rows=59139542 width=4)
```
The row estimate for oom_tab is close to the actual value. Most of
the values are unique, however, so the result should have around 59M
rows too.
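For anyone following along: the `rows=200` on the HashAggregate is the planner's default guess when it has no usable n_distinct estimate for the grouping column. A sketch of how to check and improve the statistics (standard PostgreSQL statements; the table and column names are taken from the session above):

```sql
-- See what the planner believes about val's distinct-value count.
-- A missing or badly low n_distinct here explains the 200-group guess.
SELECT n_distinct
  FROM pg_stats
 WHERE tablename = 'oom_tab' AND attname = 'val';

-- Collect a larger sample for this column, then refresh the stats.
ALTER TABLE oom_tab ALTER COLUMN val SET STATISTICS 1000;
ANALYZE oom_tab;
```

With a realistic n_distinct the planner should estimate ~59M groups and avoid choosing an in-memory hash aggregate in the first place.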
I've tried it with work_mem set to 32M, 512M, 1G, and 2G. It fails in
all cases, but it hits the failure point quicker with work_mem=32M.
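As a session-local workaround (a sketch, not a claim about what fixes this particular build), the hash aggregate can be disabled so the planner falls back to a sort-based GroupAggregate, which spills to disk at work_mem instead of trying to hold all ~59M groups in memory:

```sql
-- Force a sort-based grouping plan for this session only.
SET enable_hashagg = off;
EXPLAIN SELECT val, count(*) FROM oom_tab GROUP BY val;
```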
-- todd