From: Peter Slavov <pet(dot)slavov(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: pgsql-bugs(at)postgresql(dot)org
Subject: Re: BUG #12910: Memory leak with logical decoding
Date: 2015-04-16 12:53:40
Message-ID: 552FB0D4.1060401@gmail.com
Lists: pgsql-bugs
Hi,
Can you tell me at what point, after testing, this can be included in a
branch and later in an official PostgreSQL release?
Regards,
Peter Slavov
On 9.04.2015 at 20:34, Andres Freund wrote:
> Hi,
>
> On 2015-04-09 18:11:27 +0300, Peter Slavov wrote:
>> I prepared a test case that can reproduce the problem.
> Yup. I can reproduce it... I did not (yet) have the time to run the test
> to completion, but I believe the attached patch should fix the problem
> (and also improve performance a bit...).
>
> Using the SQL interface for such large transactions isn't going to be
> fun as all of the data, due to the nature of the set returning function
> implementation in postgres, will be additionally written into a
> tuplestore. The streaming interface doesn't have that behaviour.
>
> Additionally it's probably not a good idea to stream such a large
> resultset via SELECT using psql - IIRC it'll try to store all that data
> in memory :). Try something like
> \copy (select * from pg_logical_slot_peek_changes('testing', null, 1)) TO /tmp/f
> or such.
>
> Greetings,
>
> Andres Freund
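
For reference, the streaming interface Andres mentions can be driven with
pg_recvlogical rather than the SQL functions. A rough sketch, assuming a
database named mydb and the slot name 'testing' from the test case (the
output file path is likewise an assumption):

    # create a logical slot using the test_decoding output plugin
    pg_recvlogical -d mydb --slot=testing -P test_decoding --create-slot

    # stream changes into a file; nothing is materialized in a
    # tuplestore on the server or buffered in memory by psql
    pg_recvlogical -d mydb --slot=testing --start -f /tmp/changes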