From: Rémi Cura <remi(dot)cura(at)gmail(dot)com>
To: PostgreSQL General <pgsql-general(at)postgresql(dot)org>
Subject: bloated postgres data folder, clean up
Date: 2016-02-29 17:56:44
Message-ID: CAJvUf_ugk3pbcVYjKFHYRCdfkSGdDZ+yCzMyZPHepOh=Loijfw@mail.gmail.com
Lists: pgsql-general
Hey dear list,
after a few years of experiments and crashes,
I ended up with a grossly bloated postgres data folder,
with about 8 GB of useless files.
Everything is in a VirtualBox VM, so I can reproduce the situation exactly, and
I fried my postgres folder a couple of times before getting it right.
Julien (Rouhaud) helped me find those useless files via SQL.
The idea is to list the files in the postgres data directory with `pg_ls_dir`, then
check whether each file name corresponds to something useful (using
`pg_relation_filenode`); a minimal sketch follows the gist link below.
------------------
https://gist.github.com/Remi-C/926eaee04d61a7245eb8
------------------
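For illustration, here is a minimal sketch of the idea (the real query is in the gist above; the CTE names and the suffix-stripping regexp here are my own, and it assumes all files live in the default tablespace under base/<database oid>):
------------------
WITH files AS (
    -- every file in this database's directory
    SELECT pg_ls_dir('base/' || oid) AS filename
    FROM pg_database
    WHERE datname = current_database()
), known AS (
    -- filenodes of every relation this database knows about
    SELECT pg_relation_filenode(oid)::text AS filenode
    FROM pg_class
    WHERE pg_relation_filenode(oid) IS NOT NULL
)
SELECT filename
FROM files
-- keep only relation files: strip _fsm/_vm/_init forks and .N segment
-- suffixes, then require a purely numeric base name
WHERE regexp_replace(filename, '(_fsm|_vm|_init)?(\.[0-9]+)?$', '') ~ '^[0-9]+$'
  AND regexp_replace(filename, '(_fsm|_vm|_init)?(\.[0-9]+)?$', '')
      NOT IN (SELECT filenode FROM known);
------------------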
To be safe, I export the list of files found,
then use oid2name to check that none of them is recognized.
The files can then be deleted (using plpythonu in my case); see the sketch below.
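For example, something like this for the oid2name check (the filenode 16384 and the database name are made up for illustration; oid2name's -f switch looks up a filenode in the given database):
------------------
# if oid2name prints no matching relation for the filenode, the file is an orphan
oid2name -d mydb -f 16384
------------------
And a minimal plpythonu helper to remove an orphan file (the function name is mine, not from the gist; the backend's working directory is the data directory, so paths like 'base/<db oid>/<filenode>' can be given relative to it; obviously double-check the list before deleting anything):
------------------
CREATE OR REPLACE FUNCTION delete_orphan_file(path text) RETURNS void AS $$
import os
os.remove(path)  # raises an error if the file does not exist
$$ LANGUAGE plpythonu;

SELECT delete_orphan_file('base/16384/12345');  -- made-up oids, for illustration
------------------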
So far, a VACUUM FULL ANALYZE raises no errors.
Warning: for this to work, the SQL query must be run while connected to
the database you want to clean.
Hope this may be useful.
Cheers,
Rémi-C