From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Neil Conway <neilc(at)samurai(dot)com>
Cc: Alvaro Herrera <alvherre(at)dcc(dot)uchile(dot)cl>, Paul Tillotson <pntil(at)shentel(dot)net>, David Esposito <pgsql-general(at)esposito(dot)newnetco(dot)com>, pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Performance tuning on RedHat Enterprise Linux 3
Date: 2004-12-07 04:55:39
Message-ID: 10281.1102395339@sss.pgh.pa.us
Lists: pgsql-general

Neil Conway <neilc(at)samurai(dot)com> writes:
> On Mon, 2004-12-06 at 22:19 -0300, Alvaro Herrera wrote:
>> AFAIK this is indeed the case with hashed aggregation, which uses the
>> sort_mem (work_mem) parameter to control its operation, but for which it
>> is not a hard limit.
> Hmmm -- I knew we didn't implement disk-spilling for hashed aggregation,
> but I thought we had _some_ sane means to avoid consuming a lot of
> memory if we got the plan completely wrong.
The *sort* code is fairly good about respecting sort_mem. The *hash*
code is not so good.
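
[Editorial illustration, not part of the original mail: one way to observe the difference. The orders/customer_id names are hypothetical, and the parameter is spelled work_mem from 8.0 on.]

    -- Sort-based plans spill to disk once sort_mem is exceeded; a
    -- HashAggregate sizes its hash table from the planner's estimated
    -- number of groups and is not bounded by the setting.
    SET sort_mem = 1024;          -- value in kB; renamed work_mem in 8.0

    EXPLAIN
    SELECT customer_id, count(*)
    FROM orders
    GROUP BY customer_id;
    -- If the plan shows a HashAggregate and the estimated group count is
    -- far too low, the hash table can grow well past sort_mem at run time.
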
> We definitely ought to fix this.
Bear in mind that the price of honoring sort_mem carefully is
considerable. (Or, if you know how to do it cheaply, let's see it ...)
The issue with the hash code is that it sets its size parameters on the
basis of the estimated input row count; the factor by which memory usage
overshoots is basically the factor by which the planner underestimated
the number of input rows. The seriously bad cases I've seen reported were
directly due to horribly out-of-date planner table-size estimates. A large
part of the rationale for applying that last-minute 8.0 change in relpages/
reltuples handling was to try to suppress the worst cases in hashtable
size estimation.
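
[Editorial aside, not Tom's words: the relpages/reltuples figures the planner works from live in pg_class and are refreshed by VACUUM/ANALYZE, so keeping them current is the practical defense. 'orders' below is a hypothetical table name.]

    -- Inspect the statistics that feed the planner's row-count estimate.
    SELECT relname, relpages, reltuples
    FROM pg_class
    WHERE relname = 'orders';

    -- If reltuples is badly stale (say the table has grown tenfold since
    -- the last statistics run), hashtable size estimates will be off by
    -- roughly the same factor.  Refresh with:
    ANALYZE orders;               -- or VACUUM ANALYZE orders;
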
regards, tom lane