From: "Daniel Verite" <daniel(at)manitou-mail(dot)org>
To: "Jim Hurne" <jhurne(at)us(dot)ibm(dot)com>
Cc: "PostgreSQL General" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: RE: autovacuum failing on pg_largeobject and disk usage of the pg_largeobject growing unchecked
Date: 2020-06-22 22:00:37
Message-ID: 0e1db9da-a9c2-4555-9202-d88c4edc8e18@manitou-mail.org
Lists: pgsql-general
Jim Hurne wrote:
> We are of course going to continue to try different things, but does
> anyone have any other suggestions on what we should be looking at or what
> settings we might want to adjust?
If you can arrange a maintenance window, a faster way to rebuild
pg_largeobject when it consists mostly of empty pages is to:
- export the contents into files on the server (the server-side
  lo_export() writes to the server's filesystem, so it requires
  superuser or an explicit grant on that function):

    DO $$
    DECLARE id oid;
    BEGIN
      -- export each large object into a file named after its OID
      FOR id IN SELECT oid FROM pg_largeobject_metadata LOOP
        PERFORM lo_export(id, '/tmp-path/' || id::text);
      END LOOP;
    END $$;
- SET allow_system_table_mods TO on; (this parameter can only be
  changed at server start, so it needs a restart)
- TRUNCATE TABLE pg_largeobject, pg_largeobject_metadata;
- reimport the files under the same OIDs (each file name is the OID it
  was exported as):

    DO $$
    DECLARE fname text;
    BEGIN
      -- recreate each large object from its file, reusing the OID
      FOR fname IN SELECT pg_ls_dir('/tmp-path/') LOOP
        PERFORM lo_import('/tmp-path/' || fname, fname::oid);
      END LOOP;
    END $$;
- remove the files in /tmp-path
- set allow_system_table_mods back to off and restart again, unless you
  can do without that safety check and prefer to leave it permanently
  on to avoid the restarts.
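As a quick sanity check after the reimport, one can verify that the
number of large objects is unchanged and that pg_largeobject is back
to a small on-disk footprint; both queries use only built-in
functions:

    SELECT count(*) AS large_objects FROM pg_largeobject_metadata;
    SELECT pg_size_pretty(pg_table_size('pg_largeobject')) AS on_disk;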
With less than 60MB of actual contents, all this might take no more
than a few minutes, as these operations don't need to fully scan
pg_largeobject, which is the part that is problematic for vacuum.
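To estimate the size of the live contents beforehand, one possibility
is to sum the objects' sizes through the metadata rather than scanning
the bloated table (a sketch; lo_get() exists since PostgreSQL 9.4 and
loads each object entirely into memory, which is fine for small
objects):

    SELECT pg_size_pretty(sum(octet_length(lo_get(oid)))) AS live_contents
    FROM pg_largeobject_metadata;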
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: https://www.manitou-mail.org
Twitter: @DanielVerite