From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: pgsql-hackers(at)postgresql(dot)org
Cc: Thomas Munro <tmunro(at)postgresql(dot)org>
Subject: PANIC: could not fsync file "pg_multixact/..." since commit dee663f7843
Date: 2020-11-04 01:32:05
Message-ID: 20201104013205.icogbi773przyny5@development
Lists: pgsql-hackers
Hi,
While running some multixact-oriented stress tests, I noticed that
commit dee663f7843:
Defer flushing of SLRU files.
Previously, we called fsync() after writing out individual pg_xact,
pg_multixact and pg_commit_ts pages due to cache pressure, leading to
regular I/O stalls in user backends and recovery. Collapse requests for
the same file into a single system call as part of the next checkpoint,
as we already did for relation files, using the infrastructure developed
by commit 3eb77eba. This can cause a significant improvement to
recovery performance, especially when it's otherwise CPU-bound.
...
seems to trigger this issue:
[17820] LOG: checkpoint starting: wal
[17820] PANIC: could not fsync file "pg_multixact/offsets/06E0": No such file or directory
[17818] LOG: checkpointer process (PID 17820) was terminated by signal 6: Aborted
[17818] LOG: terminating any other active server processes
which is then followed by this during recovery:
[18599] LOG: redo starts at 1F/FF098138
[18599] LOG: file "pg_multixact/offsets/0635" doesn't exist, reading as zeroes
[18599] CONTEXT: WAL redo at 1F/FF09A218 for MultiXact/CREATE_ID: 104201060 offset 1687158668 nmembers 3: 2128819 (keysh) 2128823 (keysh) 2128827 (keysh)
[18599] LOG: file "pg_multixact/members/7DE3" doesn't exist, reading as zeroes
[18599] CONTEXT: WAL redo at 1F/FF09A218 for MultiXact/CREATE_ID: 104201060 offset 1687158668 nmembers 3: 2128819 (keysh) 2128823 (keysh) 2128827 (keysh)
[18599] LOG: redo done at 2A/D4D8BFB0 system usage: CPU: user: 265.57 s, system: 12.43 s, elapsed: 278.06 s
[18599] LOG: checkpoint starting: end-of-recovery immediate
[18599] PANIC: could not fsync file "pg_multixact/offsets/06E0": No such file or directory
[17818] LOG: startup process (PID 18599) was terminated by signal 6: Aborted
[17818] LOG: aborting startup due to startup process failure
[17818] LOG: database system is shut down
at which point the cluster is kaput, of course.
It's clearly the fault of dee663f7843 - 4 failures out of 4 attempts on
that commit, and after switching to ca7f8e2b86 it goes away.
Reproducing it is pretty simple, but it takes a bit of time. Essentially
do this:
create table t (a int primary key);
insert into t select i from generate_series(1,1000) s(i);
and then run
SELECT * FROM t FOR KEY SHARE;
from pgbench with many concurrent clients. I do this:
pgbench -n -c 32 -j 8 -f select.sql -T 86400 test
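For convenience, the whole reproduction can be wrapped up as a small shell sketch like the one below (this assumes a database called "test" already exists and that psql and pgbench are on PATH; the select.sql name just matches the pgbench command above):

# assumes an existing database "test"; select.sql matches the query shown above
cat > select.sql <<'EOF'
SELECT * FROM t FOR KEY SHARE;
EOF

psql test -c "CREATE TABLE t (a int PRIMARY KEY);"
psql test -c "INSERT INTO t SELECT i FROM generate_series(1,1000) s(i);"

# hammer the table with key-share lockers until pg_multixact grows large
pgbench -n -c 32 -j 8 -f select.sql -T 86400 test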
After a while (~1h on my machine) pg_multixact grows past 10GB, which
triggers a more aggressive cleanup (per MultiXactMemberFreezeThreshold).
My guess is that this cleanup removes some of the segment files, but the
checkpointer is not aware of that and still tries to fsync them, or
something like that. Not sure.
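If it helps to catch the moment it happens, a rough monitoring loop like this one (assuming $PGDATA points at the cluster's data directory) shows the directory growing and then the oldest segments disappearing shortly before the checkpointer PANICs:

# rough monitoring loop; $PGDATA is assumed to point at the data directory
while sleep 60; do
    date
    du -sh "$PGDATA"/pg_multixact
    ls "$PGDATA"/pg_multixact/offsets | sort | head -n 1   # oldest remaining offsets segment
done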
Attached are backtraces from the two crashes - regular and during
recovery. Not sure how interesting / helpful they are; they probably
don't say much about how we got there.
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services
Attachment: backtraces.txt (text/plain, 3.1 KB)