From: Michael Harris <harmic(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Tomas Vondra <tomas(at)vondra(dot)me>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: FileFallocate misbehaving on XFS
Date: 2024-12-11 01:09:22
Message-ID: CADofcAX8eRgGHgRkC8RHmr1fAmaCEXg5xKAgfPFkRi9Nn-L4Lg@mail.gmail.com
Lists: pgsql-hackers
Hi Andres
On Wed, 11 Dec 2024 at 03:09, Andres Freund <andres(at)anarazel(dot)de> wrote:
> I think it's implied, but I just want to be sure: This was one of the affected
> systems?
Yes, correct.
> Any chance to get df output? I'm mainly curious about the number of used
> inodes.
Sorry, I could have sworn I had included that already! Here it is:
# df /var/opt
Filesystem 1K-blocks Used Available Use% Mounted on
/dev/mapper/ippvg-ipplv 4197492228 3803866716 393625512 91% /var/opt
# df -i /var/opt
Filesystem Inodes IUsed IFree IUse% Mounted on
/dev/mapper/ippvg-ipplv 419954240 1568137 418386103 1% /var/opt
> Could you show the mount options that end up being used?
> grep /var/opt /proc/mounts
/dev/mapper/ippvg-ipplv /var/opt xfs rw,relatime,attr2,inode64,logbufs=8,logbsize=32k,noquota 0 0
These seem to be the defaults.
> I assume you have never set XFS options for the PG directory or files within
> it?
Correct.
> Could you show
> xfs_io -r -c lsattr -c stat -c statfs /path/to/directory/with/enospc
-p--------------X pg_tblspc/16402/PG_16_202307071/49163/1132925906.4
fd.path = "pg_tblspc/16402/PG_16_202307071/49163/1132925906.4"
fd.flags = non-sync,non-direct,read-only
stat.ino = 4320612794
stat.type = regular file
stat.size = 201211904
stat.blocks = 393000
fsxattr.xflags = 0x80000002 [-p--------------X]
fsxattr.projid = 0
fsxattr.extsize = 0
fsxattr.cowextsize = 0
fsxattr.nextents = 165
fsxattr.naextents = 0
dioattr.mem = 0x200
dioattr.miniosz = 512
dioattr.maxiosz = 2147483136
fd.path = "pg_tblspc/16402/PG_16_202307071/49163/1132925906.4"
statfs.f_bsize = 4096
statfs.f_blocks = 1049373057
statfs.f_bavail = 98406378
statfs.f_files = 419954240
statfs.f_ffree = 418386103
statfs.f_flags = 0x1020
geom.bsize = 4096
geom.agcount = 4
geom.agblocks = 262471424
geom.datablocks = 1049885696
geom.rtblocks = 0
geom.rtextents = 0
geom.rtextsize = 1
geom.sunit = 0
geom.swidth = 0
counts.freedata = 98406378
counts.freertx = 0
counts.freeino = 864183
counts.allocino = 2432320
> I'd try monitoring the per-ag free space over time and see if the the ENOSPC
> issue is correlated with one AG getting full. 'freesp' is probably too
> expensive for that, but it looks like
> xfs_db -r -c agresv /dev/nvme6n1
> should work?
>
> Actually that output might be interesting to see, even when you don't hit the
> issue.
I will see if I can set that up.
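Probably something along these lines (a rough sketch, untested; the device is
our LV from the df output above, and the one-minute interval is an arbitrary
first guess):

# log per-AG reservations once a minute, with a timestamp per sample
while true; do
    date -Is
    xfs_db -r -c agresv /dev/mapper/ippvg-ipplv
    sleep 60
done >> /var/tmp/agresv.log 2>&1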
> How many partitions are there for each of the tables? Mainly wondering because
> of the number of inodes being used.
It is configurable and varies from site to site. It could range from 7
up to maybe 60.
> Are all of the active tables within one database? That could be relevant due
> to per-directory behaviour of free space allocation.
Each pg instance may have one or more application databases. Typically
data is being written into all of them (although sometimes a database
will be archived, with no new data going into it).
You might be onto something, though. The system I took the above prints
from is only experiencing this issue in one directory. That might not
mean very much, however: it only has 2 databases, and one of them does
not appear to be receiving imports.
But another system I can access has multiple databases with ongoing
imports, yet all the errors bar one relate to one directory.
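In case it helps with correlating: the database OID appears in those paths
(49163 in the example above, under tablespace 16402), so once I have the
error list I should be able to map directories back to databases with
something like:

psql -c 'select oid, datname from pg_database'
psql -c 'select oid, spcname from pg_tablespace'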
I will collect some data from that system and post it shortly.
Cheers
Mike