From: | "Mark Woodward" <pgsql(at)mohawksoft(dot)com> |
---|---|
To: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
Cc: | "Rick Gigger" <rick(at)alpinenetworking(dot)com>, "Jim C(dot) Nasby" <jnasby(at)pervasive(dot)com>, pgsql-hackers(at)postgresql(dot)org |
Subject: | Re: PostgreSQL 8.0.6 crash |
Date: | 2006-02-10 14:57:12 |
Message-ID: | 16555.24.91.171.78.1139583432.squirrel@mail.mohawksoft.com |
Lists: pgsql-hackers
> Rick Gigger <rick(at)alpinenetworking(dot)com> writes:
>> However if hashagg truly does not obey the limit that is supposed to
>> be imposed by work_mem then it really ought to be documented. Is
>> there a misunderstanding here and it really does obey it? Or is
>> hashagg an exception but the other work_mem associated operations
>> work fine? Or is it possible for them all to go out of bounds?
>
> hashagg is the exception. It should be fixed, not documented, but no
> one's got round to that.
Well, it is clearly a pathological condition. Fixed? Sure, but someone
should document it so that others don't stumble across it.
>
> One point to consider is that if the planner's estimate is as far off
> as exhibited in the OP's example, a hashagg that does spill to disk
> is likely to take so long that he'll be back here complaining that
> the query never terminates ;-).
That's not fair, now is it? This isn't about the OP (me), it is about
PostgreSQL behaving badly.
> In most practical situations, I think
> exceeding work_mem is really the best solution, as long as it's not
> by more than 10x or 100x. It's when the estimate is off by many
> orders of magnitude that you've got a problem. Running out of memory
> is not necessarily the worst response ... as long as the system doesn't
> kill the process in response to that.
I don't agree with you here. Many PostgreSQL installations use PostgreSQL
as part of a larger whole. Adjusting "work_mem" should give the admin some
control over the memory footprint of the system. It is documented as the
limit a specific sort or hash operation will use before spilling to disk.
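For example (a hypothetical sketch; the table name and the setting are made
up, not from my actual schema), the expectation from the docs is that
something like this stays within the configured bound:

    SET work_mem = 4096;      -- 4096 kB for this session
    SELECT customer_id, count(*)
      FROM orders             -- hypothetical table
     GROUP BY customer_id;
    -- if the hash aggregate needed more than ~4 MB, the documented
    -- behavior would be to spill to disk, not to keep allocating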
I set up a lot of systems and I write a lot of software that uses
PostgreSQL. Periodically I run across features/problems/limitations of
PostgreSQL and post them.
This was/is an example of where the behavior of PostgreSQL is clearly
unacceptable. OK, yes, this problem goes away with an ANALYZE, but it
isn't clear how anyone could have known this, and unexpected behavior is
bad in any product.
As for your statement, "he'll be back here complaining that the query never
terminates," that's not true. A long query typically gets examined with
EXPLAIN (or in Oracle, EXPLAIN PLAN) and evaluated from there. When the
process exhibits runaway memory use, that's a problem.
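Something like this (again a hypothetical table, just to illustrate the
workflow, not my actual query):

    -- look at the plan and the planner's row estimate before trusting it
    EXPLAIN SELECT customer_id, count(*)
              FROM orders GROUP BY customer_id;
    -- if the estimated group count looks nothing like reality,
    -- refresh the statistics and check the plan again
    ANALYZE orders;
    EXPLAIN SELECT customer_id, count(*)
              FROM orders GROUP BY customer_id;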