| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | "Mark Woodward" <pgsql(at)mohawksoft(dot)com> |
| Cc: | pgsql-hackers(at)postgresql(dot)org |
| Subject: | Re: PostgreSQL 8.0.6 crash |
| Date: | 2006-02-09 15:36:15 |
| Message-ID: | 1452.1139499375@sss.pgh.pa.us |
| Lists: | pgsql-hackers |
"Mark Woodward" <pgsql(at)mohawksoft(dot)com> writes:
> -> HashAggregate (cost=106527.68..106528.68 rows=200 width=32)
>      Filter: (count(ucode) > 1)
>      -> Seq Scan on cdtitles (cost=0.00..96888.12 rows=1927912 width=32)
> Well, shouldn't hash aggregate respect work memory limits?
If the planner thought there were 1.7M distinct values, it wouldn't have
used hash aggregate ... but it only thinks there are 200, which IIRC is
the default assumption. Have you ANALYZEd this table lately?
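A minimal sketch of that check, using the table and column names from the plan above (the `pg_stats` query is standard PostgreSQL; the interpretation comment reflects its documented semantics):

```sql
-- Refresh the planner's statistics for the table in the plan above
ANALYZE cdtitles;

-- Inspect the distinct-value estimate the planner will now use for ucode.
-- A positive n_distinct is an absolute count; a negative value is a
-- fraction of the table's row count (e.g. -1 means "all rows distinct").
SELECT n_distinct
FROM pg_stats
WHERE tablename = 'cdtitles'
  AND attname = 'ucode';
```

With fresh statistics, an estimate near the true 1.9M distinct values should steer the planner away from HashAggregate here.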
Meanwhile, I'd strongly recommend turning off OOM kill. That's got to
be the single worst design decision in the entire Linux kernel.
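Turning off OOM kill in practice usually means disabling memory overcommit, so that `malloc()` fails cleanly (and PostgreSQL reports an out-of-memory error) instead of the kernel killing a backend. A sketch of the commonly recommended setting (values are conventional, not from this thread):

```
# /etc/sysctl.conf -- disable overcommit; apply with `sysctl -p`
vm.overcommit_memory = 2
```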
regards, tom lane