From: Joao Junior <jcoj2006(at)gmail(dot)com>
To: pgsql-performance(at)postgresql(dot)org
Subject: Delete huge Table under XFS
Date: 2019-09-19 15:59:55
Message-ID: CABnPa_h2f5342mF6yM9pAR1=nV24hKwiBAXhmiS9zOEKmvUQAg@mail.gmail.com
Lists: pgsql-performance

Hi,
I am running PostgreSQL 9.6 on an XFS filesystem, kernel Linux 2.6.32.
I have a table that is not being used anymore and I want to drop it.
The table is huge, around 800 GB, and it has some indexes on it.
When I execute the drop table command it goes very slowly, and I realised
that the problem is the filesystem.
It seems that XFS doesn't handle the removal of very big files well; there
is some discussion about it on some lists.
I have to find a way to delete the table in chunks.
My first attempt was:
Iterate from the tail of the table towards the beginning.
Delete some blocks of the table.
Run vacuum on it.
Iterate again...
The plan is to delete some number of blocks at the end of the table, in
chunks of some size, and vacuum it, waiting for vacuum to shrink the table.
It seems to work: the table has been shrinking, but each vacuum takes a
huge amount of time, I suppose because of the indexes. There is another
point: the indexes are still huge and will stay that way.
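Roughly, each iteration does something like this (just a sketch; big_table
and the block numbers are placeholders, and since tid has no range
comparison operators in 9.6 the tids for the tail blocks have to be built
explicitly):

    -- how many blocks big_table (placeholder name) currently has
    SELECT pg_relation_size('big_table')
           / current_setting('block_size')::int AS blocks;

    -- delete every tuple in the tail blocks 1000000..1009999 (example range);
    -- 291 is the maximum number of line pointers in an 8kB heap page
    DELETE FROM big_table
    WHERE ctid = ANY (ARRAY(
        SELECT format('(%s,%s)', blk, itm)::tid
        FROM generate_series(1000000, 1009999) AS blk,
             generate_series(1, 291) AS itm));

    -- vacuum can then truncate the empty pages off the end of the table
    VACUUM big_table;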
I am thinking of another way of doing this.
I can get the relfilenode of the table; that way I can find the files that
belong to the table and simply delete batches of files, in a way that
doesn't put so much load on the disk.
Do the same for the indexes.
Once I had deleted all of the table's files and the indexes' files, I could
simply execute the drop table command and the catalog entries would be
deleted.
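To locate the files, something like this should work (big_table is again a
placeholder; the paths are relative to the data directory, and a relation
bigger than 1 GB is split into segments <path>, <path>.1, <path>.2, ...):

    -- file path of the table's first segment (big_table is a placeholder)
    SELECT pg_relation_filepath('big_table');

    -- the same for every index on the table
    SELECT indexrelid::regclass AS index_name,
           pg_relation_filepath(indexrelid::regclass)
    FROM pg_index
    WHERE indrelid = 'big_table'::regclass;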
I would appreciate any kind of comments.
Thanks!