From: khan Affan <bawag773(at)gmail(dot)com>
To: James Pang <jamespang886(at)gmail(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: logical replication out of memory
Date: 2024-07-31 07:06:22
Message-ID: CAF4emOki6NYzJL8xAjWB6r6RL71PGWkRFFY-Kx1RKEuLmLupOw@mail.gmail.com
Lists: pgsql-performance
PostgreSQL's built-in pgoutput plugin doesn't provide a direct mechanism to
filter WAL records at the table level; publication-based filtering is applied
only after a change has already been decoded.
You can adjust the replication slot's retention period, and, if your
PostgreSQL version supports it, increase the number of parallel workers for
logical replication to distribute the workload.
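As a rough sketch (the values are placeholders, max_slot_wal_keep_size needs
PostgreSQL 13 or later, and parallel apply of streamed transactions needs
PostgreSQL 16 or later):

  -- On the publisher: cap how much WAL replication slots may retain
  ALTER SYSTEM SET max_slot_wal_keep_size = '10GB';
  SELECT pg_reload_conf();

  -- On the subscriber: allow more logical replication workers
  -- (max_logical_replication_workers takes effect only after a restart)
  ALTER SYSTEM SET max_logical_replication_workers = 8;
  ALTER SYSTEM SET max_parallel_apply_workers_per_subscription = 4;
  -- "my_sub" is a placeholder subscription name
  ALTER SUBSCRIPTION my_sub SET (streaming = parallel);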
Thanks
Affan
On Wed, Jul 31, 2024 at 11:48 AM James Pang <jamespang886(at)gmail(dot)com> wrote:
> We use the built-in pgoutput plugin, and a client application did a HOT update
> to a column whose data type is "text" and whose real data length is 756711422
> bytes. Logical decoding then threw an out-of-memory error while decoding the
> WAL records belonging to that table, because the string buffer's total size
> exceeded 1GB. But this table is NOT on the publication list and is not being
> replicated either. Is it possible to make logical decoding ignore WAL records
> that belong to tables that are not in the replication list? That would help
> reduce this kind of error.
>
> Thanks,
>
> James
>
>
>>
>>
>>
>> From: khan Affan <bawag773(at)gmail(dot)com>
>> Sent: Wednesday, July 31, 2024 1:42 PM
>> To: James Pang <jamespang886(at)gmail(dot)com>
>> Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
>> Subject: Re: logical replication out of memory
>>
>>
>>
>> Hi Pang
>>
>> The text column is exceptionally large, so your server must have run out of
>> memory: the process failed while handling the large text column update.
>>
>> I suggest storing such files in an S3 bucket instead. You can also consider
>> increasing the memory-related configuration parameters, like work_mem and
>> maintenance_work_mem, or even the server's overall memory allocation if
>> possible.
>>
>> Or increase the shared buffer size.
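>> As a rough sketch (the values here are just placeholders, and shared_buffers
>> only changes after a restart):
>>
>>   ALTER SYSTEM SET work_mem = '256MB';
>>   ALTER SYSTEM SET maintenance_work_mem = '1GB';
>>   ALTER SYSTEM SET shared_buffers = '8GB';  -- requires a restart
>>   SELECT pg_reload_conf();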
>>
>> If none of that works, use physical replication to cope with it. 😄
>>
>> Lastly, let me know the data type you are using for this column.
>>
>>
>>
>> On Wed, Jul 31, 2024 at 7:18 AM James Pang <jamespang886(at)gmail(dot)com>
>> wrote:
>>
>> We use the built-in pgoutput plugin, and a client application did a HOT update
>> to a column whose data type is "text" and whose real length is 756711422 bytes.
>> But this table is NOT on the publication list. Is it possible to make logical
>> decoding ignore WAL records that belong to tables that are not in the
>> publication list?
>>
>> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801(at)pgpodb:[3603770]:[10-1]:pgreps_13801ERROR:
>> out of memory
>>
>> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801(at)pgpodb:[3603770]:[11-1]:pgreps_13801DETAIL:
>> Cannot enlarge string buffer containing 378355896 bytes by 756711422 more
>> bytes.
>>
>> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801(at)pgpodb:[3603770]:[12-1]:pgreps_13801CONTEXT:
>> slot "pgreps_13801", output plugin "pgoutput", in the change callback,
>> associated LSN 3D/318438E0
>>
>> 2024-07-31 00:01:02.795 UTC:xxx.xxxx.xxx.xxx(33068):repl13801(at)pgpodb:[3603770]:[13-1]:pgreps_13801STATEMENT:
>> START_REPLICATION SLOT "pgreps_13801" LOGICAL 3C/F24C74D0 (proto_version
>> '1', publication_names 'pgreps_13801')
>>
>>
>>
>> Thanks,
>>
>>
>>
>> James
>>
>>