From: | Karl Czajkowski <karlcz(at)isi(dot)edu> |
---|---|
To: | John McKown <john(dot)archie(dot)mckown(at)gmail(dot)com> |
Cc: | PostgreSQL General <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: Memory usage per session |
Date: | 2016-07-08 19:00:33 |
Message-ID: | 20160708190033.GA30183@moraine.isi.edu |
Lists: | pgsql-general |
On Jul 08, John McKown modulated:
...
> I think the "problem" that he is having is fixable only by changing how
> PostgreSQL itself works. His problem is a PL/pgSQL function which is
> 11K lines in length. When invoked, this function is "compiled" into a
> large tokenized parse tree. This parse tree is only usable in the
session which invoked the function. Apparently this parse tree
> takes a lot of memory.
Right. I'm not entirely sure the original poster wants to hear
practical solutions, but I think there are three ways that someone
familiar with Postgres would avoid this type of problem:
1. Connection pooling. Set the max connection limit and other tuning
parameters appropriately for your workload and available
resources. Don't get into a state as was described (thousands of
open connections and only hundreds "active"). Make your client
applications more disciplined.
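As a concrete sketch of point 1: a pooler such as PgBouncer in
transaction mode lets thousands of client connections share a small
number of server sessions, so only the pooled backends ever build the
parse tree. The names and limits below are illustrative, not a tuned
recommendation:

```ini
; Hypothetical pgbouncer.ini fragment -- adjust for your workload
[databases]
mydb = host=127.0.0.1 port=5432 dbname=mydb

[pgbouncer]
listen_port = 6432
; transaction pooling: a server connection is only held per transaction
pool_mode = transaction
; many clients, few actual backend sessions
max_client_conn = 2000
default_pool_size = 50
```

With something like this, the per-session memory cost of the compiled
function is paid by at most 50 backends instead of 2000 clients.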
2. Buy more RAM. You can easily grow to 512GB in just basic
dual-socket servers these days. This hardware cost may well
be worth it to avoid human labor costs.
3. Rewrite or refactor such complex stored procedures in a different
programming language such as C or Python, so your PL/pgSQL stored
procedures remain small glue around libraries of code. Postgres
makes it straightforward to extend the system with such procedural
libraries.
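To illustrate point 3: the idea is that the bulk of an 11K-line
function's logic moves into ordinary library code, and the stored
procedure shrinks to a thin wrapper (e.g. via PL/Python). The function
name and business rule below are entirely made up for illustration:

```python
# Hypothetical refactor sketch: heavy logic that previously lived in a
# large PL/pgSQL function becomes a plain Python function, testable and
# importable outside the database.

def classify_order(total, item_count):
    """Pure-Python business logic; the names and thresholds are illustrative."""
    if item_count == 0:
        return 'empty'
    avg = total / item_count
    if avg > 100:
        return 'premium'
    return 'standard'

# The stored procedure then becomes thin glue, roughly (assuming the
# plpython3u extension is installed):
#
#   CREATE FUNCTION classify_order(total numeric, item_count int)
#   RETURNS text AS $$
#       # import or inline the library logic and delegate to it
#       ...
#   $$ LANGUAGE plpython3u;
```

The per-session cost of compiling the PL/pgSQL wrapper is then tiny,
because the wrapper itself is only a few lines.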
Karl