From: Peter Slavov <pet(dot)slavov(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #12910: Memory leak with logical decoding
Date: 2015-04-09 15:11:27
Message-ID: 5526969F.4050704@gmail.com
Lists: pgsql-bugs
Hi Andres,
I prepared a test case that can reproduce the problem. Here is the link
to the SQL file:
<https://my.pcloud.com/publink/show?code=XZHuLXZomc1T6UJy3XUUJY19wHPejNjNjJk>
I am just creating a few tables, filling them with data, deleting the
rows, and then filling them again. This produces ~1.5 GB of table data
twice and deletes it once. In the logical decoding output there will of
course be one row for each inserted and deleted row, which produces
~15 GB of decoded data for a single transaction.
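For reference, this is roughly the shape of the script (a sketch only;
the table name, payload size, and row count here are illustrative, the
exact statements are in the linked file):

  -- sketch of the test case; the real script is in the linked SQL file
  CREATE TABLE filler (id serial PRIMARY KEY, payload text);

  -- one big transaction: fill with ~1.5 GB, delete everything, fill again
  BEGIN;
  INSERT INTO filler (payload)
      SELECT repeat('x', 1000) FROM generate_series(1, 1500000);
  DELETE FROM filler;
  INSERT INTO filler (payload)
      SELECT repeat('x', 1000) FROM generate_series(1, 1500000);
  COMMIT;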
I get the data from the terminal like this:
psql test_case -c "select * from
pg_logical_slot_get_changes('testing', null, 1);" 1>/tmp/replay.sql
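The slot itself was created beforehand; assuming the test_decoding
output plugin, that would be something like:

  select * from pg_create_logical_replication_slot('testing', 'test_decoding');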
You can see that when the big transaction is read from the WAL files,
all of this 15 GB goes into RAM and swap space. Eventually, if you have
enough RAM and swap, the data is fully assembled in memory and only
after that written to the file.
I know that this setup is not optimal, but it reproduces the high
memory usage.
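As a side note on the streaming interface Andres suggests below:
presumably the same changes could be streamed to a file with
pg_recvlogical instead of being fetched transaction by transaction over
SQL, e.g.:

  pg_recvlogical -d test_case --slot testing --start -f /tmp/replay.sql

The memory growth above is what I see with the SQL interface.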
Greetings,
Peter Slavov
On 6.04.2015 at 16:50, Andres Freund wrote:
> Hi,
>
> I'm on holidays right now, so my answers will be delayed.
>
> On 2015-04-06 15:35:19 +0300, Peter Slavov wrote:
>> Before I start I can see that the pg_xlog directory is 7.2 GB size.
>> This give me some idea that the size of the changes cannot be much bigger
>> than that.
> There's no such easy correlation. That said, there pretty much never
> should be a case where so much memory is needed.
>
>> After that I start to get the transaction changes one by one with select * from
>> pg_logical_slot_get_changes('<slot name>', null, 1),
> As I said before, it's *not* a good idea to consume transactions
> one-by-one. The startup of the decoding machinery is quite expensive. If
> you want more control about how much data you get you should use the
> streaming interface.
>
>> Maybe I am not understanding something, but is this normal?
> It's definitely not normal. It's unfortunately also hard to diagnose
> based on the information so far. Any chance you can build a reproducible
> test case?
>
> Greetings,
>
> Andres Freund