BUG #16943: Permanent restart of the cluster after "EXECUTE 'alter subscription ... refresh publication' "

From: PG Bug reporting form <noreply(at)postgresql(dot)org>
To: pgsql-bugs(at)lists(dot)postgresql(dot)org
Cc: ma41h(at)yandex(dot)ru
Subject: BUG #16943: Permanent restart of the cluster after "EXECUTE 'alter subscription ... refresh publication' "
Date: 2021-03-25 08:14:25
Message-ID: 16943-69c39908af248f65@postgresql.org
Lists: pgsql-bugs

The following bug has been logged on the website:

Bug reference: 16943
Logged by: Permanent restart of the cluster
Email address: ma41h(at)yandex(dot)ru
PostgreSQL version: 11.11
Operating system: Debian 10
Description:

postgres=# select version();
                                                       version
----------------------------------------------------------------------------------------------------------------------
 PostgreSQL 11.11 (Debian 11.11-1.pgdg100+1) on x86_64-pc-linux-gnu, compiled by gcc (Debian 8.3.0-6) 8.3.0, 64-bit
(1 row)

CREATE OR REPLACE FUNCTION information_schema.play_ddl_event() RETURNS trigger
    LANGUAGE plpgsql
AS $$
DECLARE
    rec RECORD;
    repl text;
BEGIN
    -- If this DDL statement was already played for the same transaction,
    -- just mark the row and return without executing anything.
    SELECT query, backend_xid
      FROM information_schema.ddl_events
     WHERE backend_xid = NEW.backend_xid AND event_tag = 'play'
      INTO rec;
    IF NEW.backend_xid = rec.backend_xid AND NEW.query = rec.query THEN
        NEW.export_time = now();
        NEW.event_tag = 'no play';
        RETURN NEW;
    END IF;

    NEW.event_tag = 'play';
    -- Replay the captured DDL statement on this node.
    EXECUTE NEW.query;

    -- Pick up any tables added by the replayed DDL.
    EXECUTE 'alter subscription logic_repl_all_sub refresh publication';

    NEW.export_time = now();
    RETURN NEW;
END;
$$;

CREATE TRIGGER play_ddl_event
BEFORE INSERT OR UPDATE ON information_schema.ddl_events
FOR EACH ROW
EXECUTE PROCEDURE information_schema.play_ddl_event();

ALTER TABLE information_schema.ddl_events ENABLE REPLICA TRIGGER play_ddl_event;
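
For reference, information_schema.ddl_events is assumed here to look roughly like the
following; the exact definition is not part of this report, and all column types are
guesses based only on how the trigger function uses each column:

-- Hypothetical reconstruction of the DDL-capture table; types are assumptions.
CREATE TABLE information_schema.ddl_events (
    backend_xid text,         -- transaction id of the captured DDL (type assumed)
    query       text,         -- DDL statement text that the trigger replays via EXECUTE
    event_tag   text,         -- 'play' / 'no play' marker maintained by the trigger
    export_time timestamptz   -- stamped with now() when the row is processed
);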

//////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////////
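
As far as it can be reconstructed from the log below: because the trigger is enabled as a
REPLICA trigger, it fires when the logical replication apply worker for logic_repl_all_sub
applies a replicated row change to information_schema.ddl_events, and the nested
"alter subscription ... refresh publication" inside that trigger apparently kills the apply
worker with a segmentation fault; after the automatic restart the worker replays the same
change and crashes again, so the cluster keeps cycling. A publisher-side row like the
following would presumably be enough to fire the trigger on the subscriber (purely
illustrative; the mechanism that fills ddl_events on the publisher is not shown here):

-- Hypothetical publisher-side insert; values are illustrative only.
INSERT INTO information_schema.ddl_events (backend_xid, query, event_tag)
VALUES ('12345', 'CREATE TABLE public.some_new_table (id int)', 'new');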

2021-03-25 11:02:57.948 MSK :[30903]:LOG: database system is ready to accept connections
2021-03-25 11:02:57.950 MSK :[30912]:LOG: TimescaleDB background worker launcher connected to shared catalogs
2021-03-25 11:02:57.970 MSK :[30912]:LOG: no available background worker slots
2021-03-25 11:02:57.970 MSK :[30912]:HINT: Consider increasing max_worker_processes in tandem with timescaledb.max_background_workers.
2021-03-25 11:02:57.970 MSK :[30912]:LOG: no available background worker slots
2021-03-25 11:02:57.970 MSK :[30912]:HINT: Consider increasing max_worker_processes in tandem with timescaledb.max_background_workers.
2021-03-25 11:02:57.970 MSK :[30912]:LOG: no available background worker slots
2021-03-25 11:02:57.970 MSK :[30912]:HINT: Consider increasing max_worker_processes in tandem with timescaledb.max_background_workers.
2021-03-25 11:02:58.127 MSK :[30913]:LOG: pg_cron scheduler started
2021-03-25 11:02:58.156 MSK :[30915]:LOG: logical replication apply worker for subscription "logic_repl_all_sub" has started
2021-03-25 11:02:58.486 MSK :[30923]::[unknown]:[unknown]:[]:LOG: incomplete startup packet
2021-03-25 11:05:53.595 MSK :[30971]::logic_repl:postgres:[idle]:LOG: statement: CREATE OR REPLACE FUNCTION information_schema.play_ddl_event() RETURNS trigger
LANGUAGE plpgsql
AS $$
DECLARE
rec RECORD;
repl text;
BEGIN
select query,backend_xid from information_schema.ddl_events where backend_xid = NEW.backend_xid and event_tag = 'play' into rec;
IF NEW.backend_xid = rec.backend_xid and NEW.query = rec.query THEN
NEW.export_time = now();
NEW.event_tag = 'no play';
RETURN NEW;
END IF;
NEW.event_tag = 'play';
EXECUTE NEW.query;

EXECUTE 'alter subscription logic_repl_all_sub refresh publication' ;

NEW.export_time = now();
RETURN NEW;
END;
$$;

2021-03-25 11:06:09.447 MSK :[30971]::logic_repl:postgres:[idle]:LOG: statement: ALTER TABLE information_schema.ddl_events ENABLE REPLICA TRIGGER play_ddl_event;
2021-03-25 11:07:29.724 MSK :[30903]:LOG: background worker "logical replication worker" (PID 30915) was terminated by signal 11: Segmentation fault
2021-03-25 11:07:29.724 MSK :[30903]:LOG: terminating any other active server processes
2021-03-25 11:07:29.728 MSK :[30971]::logic_repl:postgres:[idle]:WARNING: terminating connection because of crash of another server process
2021-03-25 11:07:29.728 MSK :[30971]::logic_repl:postgres:[idle]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-03-25 11:07:29.728 MSK :[30971]::logic_repl:postgres:[idle]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2021-03-25 11:07:29.730 MSK :[30948]::postgres:postgres:[idle]:WARNING: terminating connection because of crash of another server process
2021-03-25 11:07:29.730 MSK :[30948]::postgres:postgres:[idle]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-03-25 11:07:29.730 MSK :[30948]::postgres:postgres:[idle]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2021-03-25 11:07:29.732 MSK :[30909]:WARNING: terminating connection because of crash of another server process
2021-03-25 11:07:29.732 MSK :[30909]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-03-25 11:07:29.732 MSK :[30909]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2021-03-25 11:07:29.735 MSK :[30903]:LOG: archiver process (PID 30910) exited with exit code 2
2021-03-25 11:07:29.739 MSK :[30903]:LOG: all server processes terminated; reinitializing
2021-03-25 11:07:29.769 MSK :[31085]:LOG: database system was interrupted; last known up at 2021-03-25 11:02:57 MSK
2021-03-25 11:07:29.885 MSK :[31085]:LOG: recovered replication state of node 1 to 28/5447558
2021-03-25 11:07:29.885 MSK :[31085]:LOG: database system was not properly shut down; automatic recovery in progress
2021-03-25 11:07:29.889 MSK :[31085]:LOG: redo starts at 10/D5000098
2021-03-25 11:07:29.900 MSK :[31085]:LOG: redo done at 10/D5015F38
2021-03-25 11:07:29.901 MSK :[31085]:LOG: last completed transaction was at log time 2021-03-25 11:06:09.479437+03
2021-03-25 11:07:29.918 MSK :[30903]:LOG: database system is ready to accept connections
2021-03-25 11:07:29.925 MSK :[31092]:LOG: TimescaleDB background worker launcher connected to shared catalogs
2021-03-25 11:07:29.936 MSK :[31092]:LOG: no available background worker slots
2021-03-25 11:07:29.936 MSK :[31092]:HINT: Consider increasing max_worker_processes in tandem with timescaledb.max_background_workers.
2021-03-25 11:07:29.936 MSK :[31092]:LOG: no available background worker slots
2021-03-25 11:07:29.936 MSK :[31092]:HINT: Consider increasing max_worker_processes in tandem with timescaledb.max_background_workers.
2021-03-25 11:07:29.936 MSK :[31092]:LOG: no available background worker slots
2021-03-25 11:07:29.936 MSK :[31092]:HINT: Consider increasing max_worker_processes in tandem with timescaledb.max_background_workers.
2021-03-25 11:07:29.969 MSK :[31093]:LOG: pg_cron scheduler started
2021-03-25 11:07:29.970 MSK :[31095]:LOG: logical replication apply worker for subscription "logic_repl_all_sub" has started
2021-03-25 11:07:30.088 MSK :[30903]:LOG: background worker "logical replication worker" (PID 31095) was terminated by signal 11: Segmentation fault
2021-03-25 11:07:30.088 MSK :[30903]:LOG: terminating any other active server processes
2021-03-25 11:07:30.092 MSK :[31089]:WARNING: terminating connection because of crash of another server process
2021-03-25 11:07:30.092 MSK :[31089]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-03-25 11:07:30.092 MSK :[31089]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2021-03-25 11:07:30.095 MSK :[30903]:LOG: archiver process (PID 31090) exited with exit code 2
2021-03-25 11:07:30.096 MSK :[30903]:LOG: all server processes terminated; reinitializing
2021-03-25 11:07:30.121 MSK :[31103]:LOG: database system was interrupted; last known up at 2021-03-25 11:07:29 MSK
2021-03-25 11:07:30.240 MSK :[31103]:LOG: recovered replication state of node 1 to 28/5447558
2021-03-25 11:07:30.242 MSK :[31103]:LOG: database system was not properly shut down; automatic recovery in progress
2021-03-25 11:07:30.249 MSK :[31103]:LOG: invalid record length at 10/D5016078: wanted 24, got 0
2021-03-25 11:07:30.249 MSK :[31103]:LOG: redo is not required
2021-03-25 11:07:30.269 MSK :[30903]:LOG: database system is ready to accept connections
2021-03-25 11:07:30.277 MSK :[31109]:LOG: TimescaleDB background worker launcher connected to shared catalogs
2021-03-25 11:07:30.288 MSK :[31109]:LOG: no available background worker slots
2021-03-25 11:07:30.288 MSK :[31109]:HINT: Consider increasing max_worker_processes in tandem with timescaledb.max_background_workers.
2021-03-25 11:07:30.288 MSK :[31109]:LOG: no available background worker slots
2021-03-25 11:07:30.288 MSK :[31109]:HINT: Consider increasing max_worker_processes in tandem with timescaledb.max_background_workers.
2021-03-25 11:07:30.288 MSK :[31109]:LOG: no available background worker slots
2021-03-25 11:07:30.288 MSK :[31109]:HINT: Consider increasing max_worker_processes in tandem with timescaledb.max_background_workers.
2021-03-25 11:07:30.302 MSK :[31112]:LOG: logical replication apply worker for subscription "logic_repl_all_sub" has started
2021-03-25 11:07:30.309 MSK :[31110]:LOG: pg_cron scheduler started
2021-03-25 11:07:30.427 MSK :[30903]:LOG: background worker "logical replication worker" (PID 31112) was terminated by signal 11: Segmentation fault
2021-03-25 11:07:30.427 MSK :[30903]:LOG: terminating any other active server processes
2021-03-25 11:07:30.432 MSK :[31107]:WARNING: terminating connection because of crash of another server process
2021-03-25 11:07:30.432 MSK :[31107]:DETAIL: The postmaster has commanded this server process to roll back the current transaction and exit, because another server process exited abnormally and possibly corrupted shared memory.
2021-03-25 11:07:30.432 MSK :[31107]:HINT: In a moment you should be able to reconnect to the database and repeat your command.
2021-03-25 11:07:30.435 MSK :[30903]:LOG: all server processes terminated; reinitializing
2021-03-25 11:07:30.456 MSK :[31120]:LOG: database system was interrupted; last known up at 2021-03-25 11:07:30 MSK
2021-03-25 11:07:30.517 MSK :[31120]:LOG: recovered replication state of node 1 to 28/5447558
2021-03-25 11:07:30.517 MSK :[31120]:LOG: database system was not properly shut down; automatic recovery in progress
2021-03-25 11:07:30.521 MSK :[31120]:LOG: invalid record length at 10/D50160E8: wanted 24, got 0
2021-03-25 11:07:30.521 MSK :[31120]:LOG: redo is not required
