From: Stephen Frost <sfrost(at)snowman(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Greg Stark <gsstark(at)mit(dot)edu>, Mark Woodward <pgsql(at)mohawksoft(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: PostgreSQL 8.0.6 crash
Date: 2006-02-09 19:53:44
Message-ID: 20060209195344.GE4474@ns.snowman.net
Lists: pgsql-hackers
* Tom Lane (tgl(at)sss(dot)pgh(dot)pa(dot)us) wrote:
> Greg Stark <gsstark(at)mit(dot)edu> writes:
> > It doesn't seem like a bad idea to have a max_memory parameter that if a
> > backend ever exceeded it would immediately abort the current
> > transaction.
>
> See ulimit (or local equivalent).
As much as setting ulimit in shell scripts is fun, I have to admit that
I really don't see it happening very much. Having Postgres set a ulimit
for itself may not be a bad idea and would perhaps provide a "least
surprise" for new users. Perhaps shared_buffers + 10*work_mem +
maintenance_work_mem + max_stack_depth? Then errors from running out of
memory could provide a 'HINT: Memory consumption went well over allowed
work_mem, perhaps you need to run ANALYZE or raise work_mem?'.
Just some thoughts,
Stephen