From: Tomas Vondra <tomas(dot)vondra(at)2ndquadrant(dot)com>
To: Joao Junior <jcoj2006(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: Delete huge Table under XFS
Date: 2019-10-06 20:54:52
Message-ID: 20191006205452.5vx3qqxwjbxc6jzh@development
Lists: pgsql-performance
On Thu, Sep 19, 2019 at 07:00:01PM +0200, Joao Junior wrote:
>A table with 800 GB means 800 files of 1 GB each. When I use TRUNCATE
>or DROP TABLE, XFS, which is a log-based filesystem, will write lots of
>data to its log, and that is the problem. The problem is not Postgres,
>it is the way XFS works with big files - or, more precisely, the way it
>handles lots of files.
>
I'm a bit skeptical about this explanation. Yes, XFS has journalling,
but only for metadata - and I have a hard time believing that deleting
800 files (or a small multiple of that) would write "lots of data" into
the journal, or cause noticeable performance issues. I wonder how you
concluded this is actually the problem.
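For reference, PostgreSQL splits a large relation into 1GB segment
files, so the arithmetic above is easy to check. A quick way to look
at it (the table name here is just a placeholder for yours):

    -- path of the first segment, relative to the data directory;
    -- an 800GB table adds ~800 more segments named <oid>.1, <oid>.2, ...
    SELECT pg_relation_filepath('big_table');

    -- size of the heap alone, not counting indexes or TOAST
    SELECT pg_size_pretty(pg_relation_size('big_table'));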
That being said, TRUNCATE is unlikely to perform better than DROP,
because it also deletes all the files at once. What you might try is
dropping the indexes one by one, and then the table - that should
delete the files in smaller chunks, as in the sketch below.
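A minimal sketch of that approach, with made-up table/index names (list
the indexes first so you know what you're dropping):

    -- list the table's indexes and their sizes
    SELECT indexrelid::regclass AS index_name,
           pg_size_pretty(pg_relation_size(indexrelid)) AS index_size
    FROM pg_index
    WHERE indrelid = 'big_table'::regclass;

    -- drop them one at a time, each in its own transaction,
    -- so the files are unlinked in smaller batches
    DROP INDEX big_table_idx1;
    DROP INDEX big_table_idx2;

    -- finally drop the table itself, which now unlinks only
    -- the heap (and TOAST) segments
    DROP TABLE big_table;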
regards
--
Tomas Vondra http://www.2ndQuadrant.com
PostgreSQL Development, 24x7 Support, Remote DBA, Training & Services