From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: "Todd A(dot) Cook" <tcook(at)blackducksoftware(dot)com>
Cc: "Relyea, Mike" <Mike(dot)Relyea(at)xerox(dot)com>, pgsql-general(at)postgresql(dot)org, Qingqing Zhou <zhouqq(at)cs(dot)toronto(dot)edu>
Subject: Re: Out of memory error in 8.1.0 Win32
Date: 2006-06-22 18:53:04
Message-ID: 2413.1151002384@sss.pgh.pa.us
Lists: pgsql-general pgsql-hackers
"Todd A. Cook" <tcook(at)blackducksoftware(dot)com> writes:
> oom_test=> explain select val,count(*) from oom_tab group by val;
>                                QUERY PLAN
> -------------------------------------------------------------------------
>  HashAggregate  (cost=1163446.13..1163448.63 rows=200 width=4)
>    ->  Seq Scan on oom_tab  (cost=0.00..867748.42 rows=59139542 width=4)
> The row estimate for oom_tab is close to the actual value. Most of
> the values are unique, however, so the result should have around 59M
> rows too.
Well, that's the problem right there :-(. Have you ANALYZEd this table?
I think 200 is the default estimate for number of groups in the absence
of any ANALYZE stats, but it should surely not be that far off if it's
got real stats to play with.
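A quick sketch of what I'd try first (table and column names taken from
your quoted query):

    ANALYZE oom_tab;
    EXPLAIN SELECT val, count(*) FROM oom_tab GROUP BY val;

Once the planner has an n_distinct estimate for val, it should see that
~59M groups won't fit a hash table in work_mem and switch to a sort-based
GroupAggregate instead of the HashAggregate that's running you out of memory.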
If you need to make the query not fail without stats, you could set
enable_hashagg false, but I wouldn't recommend that as a production
choice (unless you set it just for this one query).
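If you do go that route, keeping it scoped to the one query in a session
would look something like this:

    SET enable_hashagg = off;
    SELECT val, count(*) FROM oom_tab GROUP BY val;
    RESET enable_hashagg;

RESET puts the setting back to its default, so other queries in the same
session can still use hash aggregation.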
regards, tom lane