| From: | "Lim Berger" <straightfwd007(at)gmail(dot)com> |
|---|---|
| To: | "Sander Steffann" <s(dot)steffann(at)computel(dot)nl> |
| Cc: | "Postgresql General List" <pgsql-general(at)postgresql(dot)org> |
| Subject: | Re: "Out of memory" errors.. |
| Date: | 2007-08-15 02:45:54 |
| Message-ID: | 69d2538f0708141945y73410459j4612eb7e7dae21e0@mail.gmail.com |
| Lists: | pgsql-general |
> If this is only a PostgreSQL database server, don't limit the postgres user.
> Don't tweak these limits unless you know exactly what you are doing.
Unfortunately, it is not. The machine runs other applications as well,
including Apache, and other requirements mean I cannot leave the
ulimits unset. So I would like to know how to map the ulimit settings
onto what PostgreSQL needs.
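One way to avoid a single system-wide ulimit, assuming the box uses pam_limits, is to give the postgres account its own per-user overrides in /etc/security/limits.conf while other users keep the stricter defaults. The numbers below are illustrative placeholders, not recommendations:

```
# /etc/security/limits.conf -- per-user overrides applied by pam_limits
# Values are examples only; size them for your machine.
postgres  soft  nofile  4096   # open files (see max_files_per_process)
postgres  hard  nofile  8192
postgres  soft  nproc   2048   # processes; must cover max_connections + workers
# all other users keep the stricter system-wide limits
```

You can verify what the postgres user actually runs under with `su - postgres -c 'ulimit -a'`.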
> PS: "maintenance_work_mem" is completely unrelated to "max user processes"
> or "open files", it's related to the allowed memory size.
>
Sorry, but this was suggested earlier in this thread. So how do I
make sure that VACUUM ANALYZE on moderately large tables can run
without hitting "out of memory"? Would "shared_buffers" in the conf
file be relevant? I doubt it.
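For reference, the two settings govern different things: shared_buffers is the shared cache for data pages, while maintenance_work_mem caps the memory a single maintenance operation (VACUUM, ANALYZE, CREATE INDEX) may allocate. A minimal postgresql.conf sketch, with placeholder values rather than tuned recommendations:

```
# postgresql.conf -- illustrative values only
shared_buffers = 128MB        # shared data-page cache; not the VACUUM work area
maintenance_work_mem = 64MB   # per-operation ceiling for VACUUM / ANALYZE / CREATE INDEX
```

maintenance_work_mem can also be raised for one session only, e.g. `SET maintenance_work_mem = '128MB';` immediately before running `VACUUM ANALYZE bigtable;`, so the higher ceiling does not apply globally.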