From: "Mark Woodward" <pgsql(at)mohawksoft(dot)com>
To: "Jim C. Nasby" <jnasby(at)pervasive(dot)com>
Cc: "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: PostgreSQL 8.0.6 crash
Date: 2006-02-09 19:49:08
Message-ID: 16886.24.91.171.78.1139514548.squirrel@mail.mohawksoft.com
Lists: pgsql-hackers
> On Thu, Feb 09, 2006 at 02:03:41PM -0500, Mark Woodward wrote:
>> > "Mark Woodward" <pgsql(at)mohawksoft(dot)com> writes:
>> >> Again, regardless of OS used, hashagg will exceed "working memory" as
>> >> defined in postgresql.conf.
>> >
>> > So? If you've got OOM kill enabled, it can zap a process whether it's
>> > strictly adhered to work_mem or not. The OOM killer is entirely capable
>> > of choosing a victim process whose memory footprint hasn't changed
>> > materially since it started (eg, the postmaster).
>>
>> Sorry, I must strongly disagree here. The postgresql.conf "working mem" is
>> a VERY IMPORTANT setting; it is intended to limit the consumption of
>> memory by the postgresql process. Oftentimes PostgreSQL will work along
> Actually, no, it's not designed for that at all.
I guess that's a matter of opinion.
>
>> side other application servers on the same system; in fact, it may be a
>> sub-part of application servers on the same system. (This is, in fact, how
>> it is used on one of my site servers.)
>>
>> Clearly, if the server will use 1000 times this number (set for 1024K but
>> exceeding 1G), this is broken, and it may cause other systems to fail or
>> perform very poorly.
>>
>> If it is not something that can be fixed, it should be clearly
>> documented.
>
> work_mem (integer)
>
> Specifies the amount of memory to be used by internal sort
> operations and hash tables before switching to temporary disk files.
> The value is specified in kilobytes, and defaults to 1024 kilobytes
> (1 MB). Note that for a complex query, several sort or hash
> operations might be running in parallel; each one will be allowed to
> use as much memory as this value specifies before it starts to put
> data into temporary files. Also, several running sessions could be
> doing such operations concurrently. So the total memory used could
> be many times the value of work_mem; it is necessary to keep this
> fact in mind when choosing the value. Sort operations are used for
> ORDER BY, DISTINCT, and merge joins. Hash tables are used in hash
> joins, hash-based aggregation, and hash-based processing of IN
> subqueries.
>
> So it says right there that it's very easy to exceed work_mem by a very
> large amount. Granted, this is a very painful problem to deal with and
> will hopefully be changed at some point, but it's pretty clear as to how
> this works.
Well, if you read that paragraph carefully, I'll admit that I was a little
too literal in applying my statement to the "process" rather than to specific
operations within the process. But the documentation says:
"each one will be allowed to use as much memory as this value specifies
before it starts to put data into temporary files."
According to the documentation, the behavior of hashagg is broken: it did
not use memory up to this amount and then start writing to temporary files;
it used roughly 1000 times this limit and was killed by the OS.
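For illustration, here is a minimal sketch of the kind of query that can do
this (the table and column names are hypothetical). The planner chooses
HashAggregate based on its *estimated* number of groups; when that estimate
is badly low, the hash table simply keeps growing in memory rather than
spilling to temporary files the way a sort would:

    -- 1 MB; the value is in kilobytes, as in postgresql.conf
    SET work_mem = 1024;

    EXPLAIN ANALYZE
    SELECT high_cardinality_col, count(*)
    FROM big_table                    -- millions of distinct values that the
    GROUP BY high_cardinality_col;    -- planner badly underestimates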
I think it should be documented, as the behavior is unpredictable.