From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Rick Gigger <rick(at)alpinenetworking(dot)com>
Cc: "Mark Woodward" <pgsql(at)mohawksoft(dot)com>, "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: PostgreSQL 8.0.6 crash
Date: 2006-02-10 00:44:23
Message-ID: 6332.1139532263@sss.pgh.pa.us
Lists: pgsql-hackers
Rick Gigger <rick(at)alpinenetworking(dot)com> writes:
> However if hashagg truly does not obey the limit that is supposed to
> be imposed by work_mem then it really ought to be documented. Is
> there a misunderstanding here and it really does obey it? Or is
> hashagg an exception but the other work_mem associated operations
> work fine? Or is it possible for them all to go out of bounds?
hashagg is the exception. It should be fixed, not documented, but no
one's got round to that.
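
[Editor's note: for anyone who wants to see the behavior being discussed, here is a minimal sketch in SQL. The table agg_demo and its columns are hypothetical, not from this thread; the sketch assumes a scratch database where autovacuum has not re-analyzed the table. It shows how a stale statistics entry can lead the planner to pick HashAggregate for vastly more groups than work_mem can hold.]

    -- Minimal sketch: make the planner's group-count estimate badly stale.
    CREATE TABLE agg_demo (k integer, v integer);

    -- Analyze while k has only ~10 distinct values ...
    INSERT INTO agg_demo SELECT i % 10, i FROM generate_series(1, 1000) AS i;
    ANALYZE agg_demo;

    -- ... then load millions of distinct keys without re-analyzing.
    INSERT INTO agg_demo SELECT i, i FROM generate_series(1, 5000000) AS i;

    SET work_mem = 1024;  -- 1MB budget (bare integers are taken as kB)

    -- The planner still expects ~10 groups, so it chooses HashAggregate; at
    -- run time the hash table must hold ~5 million groups.  On releases of
    -- this era the hash table simply keeps growing past work_mem (much newer
    -- releases spill it to disk instead).
    EXPLAIN SELECT k, count(*) FROM agg_demo GROUP BY k;
    SELECT k, count(*) FROM agg_demo GROUP BY k;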
One point to consider is that if the planner's estimate is as far off
as exhibited in the OP's example, a hashagg that does spill to disk
is likely to take so long that he'll be back here complaining that
the query never terminates ;-). In most practical situations, I think
exceeding work_mem is really the best solution, as long as it's not
by more than 10x or 100x. It's when the estimate is off by many
orders of magnitude that you've got a problem. Running out of memory
is not necessarily the worst response ... as long as the system doesn't
kill the process in response to that.
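
[Editor's note: a quick way to tell whether an estimate is merely somewhat off or off by orders of magnitude is to compare the planner's estimated row count with the actual one. A hedged sketch, using the same hypothetical table as above:]

    -- EXPLAIN ANALYZE actually executes the query, so use scratch data.
    EXPLAIN ANALYZE SELECT k, count(*) FROM agg_demo GROUP BY k;
    -- On the HashAggregate line, compare "rows=10" (the estimate) with the
    -- "actual ... rows=5000000" figure.  A modest gap is survivable; when the
    -- actual count exceeds the estimate by many orders of magnitude, the hash
    -- table can balloon far past work_mem, as described above.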
regards, tom lane