From: Shianmiin <Shianmiin(at)gmail(dot)com>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: PostgreSQL backend process high memory usage issue
Date: 2011-04-12 23:09:21
Message-ID: 1302649761326-4299348.post@n5.nabble.com
Lists: pgsql-bugs pgsql-general
Merlin Moncure wrote:
>
>
> I think you may have uncovered a leak (I stand corrected).
>
> The number of schemas in your test is irrelevant -- the leak is
> happening in proportion to the number of views (set via \setrandom
> tidx 1 10). At 1 I don't think it exists at all -- at 100 memory use
> grows very fast.
>
> PostgreSQL's memory architecture is such that process-local permanent
> memory is allocated extremely cautiously, typically for caching
> purposes (like a tiny tablespace cache). Temporarily, you can see
> some per-process memory allocations for hashing and sorting, which you
> can control with work_mem, and for maintenance purposes (CREATE INDEX)
> with maintenance_work_mem. This memory is always given back at
> transaction end, however. In the normal state of affairs, it's almost
> impossible to run postgres out of memory unless you oversubscribe
> work_mem and/or shared_buffers.
>
> There are some operations in postgres which are notorious for
> exhausting *shared* memory, like creating a lot of schemas and tables
> in a single transaction. However, long-term growth in resident
> memory is a serious issue and needs to be tracked down and fixed.
>
> merlin
>
>
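(Side note: the work_mem and maintenance_work_mem knobs mentioned above are
ordinary settings and can even be changed per session; the values below are
illustrative only.)

    SET work_mem = '64MB';               -- cap for each sort/hash step in a query
    SET maintenance_work_mem = '256MB';  -- cap for maintenance work such as CREATE INDEX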
I am not sure I agree that this is a leak. Although memory usage grows
faster with more views selected, it does not grow without bound. If we let
the tests keep running, memory usage stabilizes (still as a function of the
number of views hit). In the tests, the 100 schemas * 100 tables/views per
schema scenario stabilizes at 1.5 GB, while the 50 schemas * 50 tables/views
per schema scenario stabilizes at 478 MB.
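For reference, the load was generated with a custom pgbench script along
these lines (a sketch; the names s1..s100 and v1..v100 are illustrative,
not the exact identifiers from the test):

    \setrandom sidx 1 100
    \setrandom tidx 1 100
    SELECT * FROM s:sidx.v:tidx;

run with, e.g.:

    pgbench -n -c 10 -T 600 -f select_random_view.sql testdb

pgbench substitutes :sidx and :tidx textually, so each transaction hits a
randomly chosen view in a randomly chosen schema.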
Creating a large number of schemas/tables/views within a single transaction
is not a realistic case, so that is not a concern here. However, in the test
case, if we want to create 100 schemas with 100 tables/views each under the
stock configuration, the only parameter that needs to be relaxed is
max_locks_per_transaction (effectively, the upper bound on lock slots is
max_locks_per_transaction * (max_connections + max_prepared_transactions)).
In my experiment, raising max_locks_per_transaction from its default of 64
to 256 was enough.
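For example (postgresql.conf; the max_connections value below is just the
stock default, shown only to make the arithmetic concrete):

    # postgresql.conf -- changing this requires a server restart
    max_locks_per_transaction = 256   # default is 64
    # shared lock table capacity is roughly
    #   max_locks_per_transaction * (max_connections + max_prepared_transactions)
    # e.g. 256 * (100 + 0) = 25600 lockable objects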
In databases like Microsoft SQL Server or Oracle, the instance allocates a
region of memory for a global plan cache. I have been looking for a similar
or corresponding mechanism in PostgreSQL, but so far I haven't found one. I
suspect that in PostgreSQL query plans are cached in each backend process
and are not shared with other backends. This is fine if the number of query
plans is small or the connections are not long-lived. However, it is a real
concern in a scenario that implements a single-database, multiple-schema
multitenant model on top of a long-lived connection pool. I think it would
be good if PostgreSQL had some kind of shared plan cache mechanism, or at
least a way to configure the maximum memory a backend process may use.
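A quick way to see that plans are session-local (a minimal illustration;
tenant_q and s1.v1 are made-up names):

    -- session 1
    PREPARE tenant_q AS SELECT * FROM s1.v1;
    EXECUTE tenant_q;   -- plan lives only in this backend

    -- session 2 (a different backend)
    EXECUTE tenant_q;   -- ERROR: prepared statement "tenant_q" does not exist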
Could you refer me to someone, or tell me what I should do to track down
the issue?
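In case it is useful for the investigation: the only way I know of to see
where a backend's memory is going is to attach a debugger and dump its
memory contexts (assuming a build with debug symbols and gdb available):

    $ gdb -p <backend_pid>
    (gdb) call MemoryContextStats(TopMemoryContext)
    (gdb) detach

The per-context totals are printed to that backend's stderr, i.e. typically
the server log.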
Samuel
--
View this message in context: http://postgresql.1045698.n5.nabble.com/PostgreSQL-backend-process-high-memory-usage-issue-tp4289407p4299348.html
Sent from the PostgreSQL - general mailing list archive at Nabble.com.