From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Shianmiin <Shianmiin(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org, pgsql-bugs <pgsql-bugs(at)postgresql(dot)org>
Subject: Re: [GENERAL] PostgreSQL backend process high memory usage issue
Date: 2011-04-12 22:13:01
Message-ID: BANLkTikTY=o2s4HzE0L5+wiqUDVEu-EaKQ@mail.gmail.com
Lists: pgsql-bugs pgsql-general
On Tue, Apr 12, 2011 at 12:48 PM, Shianmiin <Shianmiin(at)gmail(dot)com> wrote:
>
> Merlin Moncure-2 wrote:
>>
>>
>> I am not seeing your results. I was able to run your test on a stock
>> config (cut down to 50 schemas though) on a vm with 512mb of memory.
>> What is your shared buffers set to?
>>
>>
>
> shared_buffers was set to 32MB, as in the default postgresql.conf.
>
> To save you some time and move the conversation along faster, I re-did a
> series of tests on a VM with 512MB of memory so that we have a common base.
> Here are the test results and observations:
>
> 1. Setup: VM with 512MB of memory, CentOS 5.5 Final, PostgreSQL 9.0.3, a
> fresh db cluster with everything default in all config files. For your
> reference, I uploaded my postgresql.conf and top output here
> http://postgresql.1045698.n5.nabble.com/file/n4298807/postgresql.conf
> http://postgresql.1045698.n5.nabble.com/file/n4298807/01_Top_-_VM_with_512MB.gif
>
> 2. Using psql to connect to the instance; here is the top output:
> http://postgresql.1045698.n5.nabble.com/file/n4298807/02_Top_-_VM_with_512MB_-_fresh_connection_.gif
>
> 3. The following tests are on a db initialized with 50 schemas, each with
> 50 tables/views (a sketch of a generator for this layout follows).
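The generation script isn't shown in the message; a minimal sketch that would produce an equivalent layout on 9.0, with hypothetical names s1..s50, t1..t50, v1..v50:

    DO $$
    BEGIN
      FOR s IN 1..50 LOOP
        EXECUTE 'CREATE SCHEMA s' || s;
        FOR t IN 1..50 LOOP
          -- one table and one view over it, per schema
          EXECUTE 'CREATE TABLE s' || s || '.t' || t || ' (id int, val text)';
          EXECUTE 'CREATE VIEW s' || s || '.v' || t ||
                  ' AS SELECT * FROM s' || s || '.t' || t;
        END LOOP;
      END LOOP;
    END;
    $$;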
>
> 3a. single thread test with command "pgbench memoryusagetest -c 1 -j 1 -T
> 6000 -f test.sql"
> http://postgresql.1045698.n5.nabble.com/file/n4298807/03_Top_-_VM_with_512MB_-_single_thread_test.gif
>
> observations:
> (backend process 1) VIRT 478 MB, RES 401 MB SHR 32 MB.
> 0% waiting
> ==> The single backend process pretty much uses up all the physical memory
> and maybe some swap space.
> ==> In the original test (100 schemas with 100 tables/views per schema), the
> process used 1.5 GB. In this test (50 schemas with 50 tables/views per
> schema, a quarter as many objects), it uses 478 MB, so the memory used
> appears roughly proportional to the number of objects in the database.
>
> 3b. two threads test with command "pgbench memoryusagetest -c 2 -j 2 -T
> 6000 -f test.sql"
> http://postgresql.1045698.n5.nabble.com/file/n4298807/04_Top_-_VM_with_512MB_-_two_threads_test.gif
>
> observations:
> (backend process 1) VIRT 476 MB, RES 320 MB SHR 9724 KB.
> (backend process 2) VIRT 478 MB, RES 82 MB SHR 6308 KB.
> 37.4% waiting
> ==> The physical memory was all used up by the two backend processes, plus
> 775 MB of swap space. The virtual size (physical + swap) of each process is
> the same as in the single-thread test, i.e. ~470 MB.
> ==> Please note that there is considerable % waiting here, and kswapd0 is
> starting to work a little harder.
>
> 3c. three threads test with command "pgbench memoryusagetest -c 3 -j 3 -T
> 6000 -f test.sql"
> http://postgresql.1045698.n5.nabble.com/file/n4298807/05_Top_-_VM_with_512MB_-_three_threads_test.gif
>
> observations:
> (backend process 1) VIRT 468 MB, RES 299 MB SHR 18 MB.
> (backend process 2) VIRT 418 MB, RES 61 MB SHR 13 MB.
> (backend process 3) VIRT 421 MB, RES 61 MB SHR 13 MB.
> 42.8% waiting
> ==> All physical memory is used and more swap space is used. I didn't let
> it run long enough to see if VIRT would all go up to 470 MB, since once
> swapping becomes considerable the tests slow down and VIRT growth slows
> down too. VIRT stays in the same range, i.e. 400-ish MB.
> ==> The % waiting gets higher, kswapd0 works harder, and the tests run
> slower.
>
> 3d. four threads test with command "pgbench memoryusagetest -c 4 -j 4 -T
> 6000 -f test.sql"
> http://postgresql.1045698.n5.nabble.com/file/n4298807/06_Top_-_VM_with_512MB_-_four_threads_test.gif
>
> Observations:
> (backend process 1) VIRT 424 MB, RES 196 MB SHR 21 MB.
> (backend process 2) VIRT 416 MB, RES 83 MB SHR 15 MB.
> (backend process 3) VIRT 418 MB, RES 86 MB SHR 16 MB.
> (backend process 4) VIRT 466 MB, RES 66 MB SHR 16 MB.
> 47.8% waiting
> ==> All physical memory is used and more swap space is used. Again I didn't
> let it run long enough to see if VIRT would all go up to 470 MB, since once
> swapping becomes considerable the tests slow down and VIRT growth slows
> down too. VIRT stays in the same range, i.e. 400-ish MB.
> ==> The % waiting gets higher still, kswapd0 works even harder, and the
> tests run even slower.
>
> 3e. A crash test: 60 threads test with command "pgbench memoryusagetest
> -c 60 -j 60 -T 6000 -f test.sql"
> http://postgresql.1045698.n5.nabble.com/file/n4298807/07_Top_-_VM_with_512MB_-_60_threads_crashing_test.gif
> Observations:
> (backend process 1) VIRT 148 MB, RES 14 MB SHR 7852 KB.
> (backend process 2) VIRT 149 MB, RES 13 MB SHR 7636 KB.
> ...
> 63.9% waiting
> ==> As expected, all physical memory is used, and here swap space is used
> up too. Each backend process just grabs as much VIRT memory as it can. After
> running a while, the system reached a point where everything almost froze;
> then the pgbench process got killed and the system went back to a working
> state. The PostgreSQL server itself kept running fine, though.
>
> Here are my questions and concerns:
>
> 1. While running the single-thread test, the PostgreSQL backend process
> allocates 478 MB, the majority of it private memory. What is that memory
> used for? It looks like some kind of cache, probably related to query
> plans?
>
> 2. Is there a way to configure the maximum memory a PostgreSQL backend
> process can allocate? The concern, based on these test results, is that when
> there is a considerable number of objects in the database (not unusual in a
> single-db, multi-schema multitenant data model) combined with a pool of
> long-lived connections, the memory usage of the corresponding backend
> processes will grow over time to consume all available memory (physical +
> swap), causing considerable swapping that makes the system really slow. The
> only way I have found to release that allocated (especially private) memory
> from a backend process is to disconnect. Those private allocations may
> contain rarely used data (assuming they serve query-plan-related purposes),
> and it might be more efficient to release that memory for other backend
> processes to use. If we go this way, to get around the issue we would have
> to build some kind of connection-recycling logic to keep the system from
> running into serious memory thrashing (one possible approach is sketched
> below). It would be nice to have some way to configure how a backend process
> uses memory. Or is there another way to look at this issue?
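One way to get that recycling without building it into the application, assuming a pooler such as pgbouncer sits in front of the database, is to cap the lifetime of server connections in its config; the numbers here are illustrative:

    [pgbouncer]
    pool_mode = session
    ; close server connections older than 10 minutes, which releases
    ; each recycled backend's private memory
    server_lifetime = 600

Each recycled connection gets a fresh backend, so the per-process caches are rebuilt from scratch rather than accumulating indefinitely.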
>
> Not knowing PostgreSQL well, I'm sorry I had to make some assumptions here
> to try to make my concerns clear. Any insights, suggestions, or corrections
> are welcome.
I think you may have uncovered a leak (I stand corrected).

The number of schemas in your test is irrelevant -- the leak is happening in
proportion to the number of views (set via \setrandom tidx 1 10). At 1, I
don't think it exists at all -- at 100, memory use grows very fast.
PostgreSQL's memory architecture is such that process-local permanent memory
is allocated extremely cautiously, typically for caching purposes (like a
tiny tablespace cache). Temporarily, you can see some per-process memory
allocations for hashing and sorting, which you can control with work_mem, and
for maintenance operations (create index) with maintenance_work_mem. This
memory is always given back at transaction end, however. In the normal state
of affairs, it's almost impossible to run postgres out of memory unless you
oversubscribe work_mem and/or shared_buffers.
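For illustration, both settings can be changed per session or per transaction; the table and index names below are made up:

    SET work_mem = '16MB';                    -- sorts/hashes in this session
    BEGIN;
    SET LOCAL maintenance_work_mem = '256MB'; -- only for this transaction
    CREATE INDEX foo_idx ON foo (bar);        -- hypothetical table
    COMMIT;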
There are some operations in postgres which are notorious for exhausting
*shared* memory, like creating a lot of schemas and tables in a single
transaction. However, long-term growth in resident memory is a serious issue
and needs to be tracked down and fixed.
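A standard way to see where a backend's local memory is going (a general diagnostic, not something from this thread; the pid is hypothetical) is to attach gdb to the suspect backend and dump its memory context tree; the per-context totals are written to that backend's stderr, i.e. the server log:

    $ gdb -p 12345
    (gdb) call MemoryContextStats(TopMemoryContext)
    (gdb) detach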
merlin