From: Stephen Frost <sfrost(at)snowman(dot)net>
To: "Ben Zeev, Lior" <lior(dot)ben-zeev(at)hp(dot)com>
Cc: Atri Sharma <atri(dot)jiit(at)gmail(dot)com>, Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: PostgreSQL Process memory architecture
Date: 2013-05-27 13:29:02
Message-ID: 20130527132902.GR8597@tamriel.snowman.net
Lists: pgsql-hackers
Lior,
* Ben Zeev, Lior (lior(dot)ben-zeev(at)hp(dot)com) wrote:
> Yes, the memory utilization per PostgreSQL backend process grows when running queries against these tables,
> For example: select * from test where num=2 and c2='abc'
> When it starts, it doesn't consume too much memory,
> But as it executes against more and more indexes, the memory consumption grows.
It might be interesting, if possible for you, to recompile PG with
-DCATCACHE_FORCE_RELEASE, which should cause PG to immediately release
cached information when it's no longer being used. You'll be trading
memory usage for CPU cycles, of course, but it might be better for your
situation. We may still be able to do better than what we're doing
today, but I'm still suspicious that you're going to run into other
issues with having 500 indexes on a table anyway.
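For reference, a minimal sketch of the rebuild being suggested (the source directory and install prefix here are assumptions; adjust for your environment). Passing the define through CPPFLAGS at configure time causes the catalog cache code to compile with CATCACHE_FORCE_RELEASE enabled, so cached entries are freed as soon as they are no longer referenced:

```shell
# In an unpacked PostgreSQL source tree (path and prefix are examples):
cd postgresql-9.2.4

# Define CATCACHE_FORCE_RELEASE for the whole build via CPPFLAGS.
./configure --prefix=/usr/local/pgsql-catcache-test \
            CPPFLAGS="-DCATCACHE_FORCE_RELEASE"

make
make install
```

After installing, re-run the same workload against this build and compare per-backend memory usage; the expected trade-off is lower resident memory at the cost of repeated cache lookups (extra CPU).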
Thanks,
Stephen