From: | "Simon Windsor" <simon(dot)windsor(at)cornfield(dot)me(dot)uk> |
---|---|
To: | <pgsql-general(at)postgresql(dot)org> |
Subject: | Large Objects and and Vacuum |
Date: | 2011-12-30 23:54:57 |
Message-ID: | 000001ccc74e$70c8b500$525a1f00$@cornfield.me.uk |
Lists: pgsql-general
Hi
I am struggling with the volume and number of XML files a new application is
storing. The table pg_largeobject is growing fast, and despite running
vacuumlo, vacuum and autovacuum it keeps on growing in size.
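To put numbers on it, the growth can be watched with nothing more than the
standard size functions against pg_largeobject, along these lines (purely
illustrative checks, nothing clever):

SELECT pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'))
       AS largeobject_on_disk,
       pg_size_pretty(pg_database_size(current_database()))
       AS whole_database;

-- slow on a table this size, but shows how many large objects remain
SELECT count(DISTINCT loid) AS stored_large_objects
FROM pg_catalog.pg_largeobject;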
The main tables that reference the large objects are partitioned, and every
few days I drop the partitions older than seven days, but despite all this
the database keeps growing and is not releasing space back to the OS.
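The clean-up itself is just a DROP TABLE of the oldest child (the table name
below is made up, but it is the right shape); dropping the partition returns
its own space to the OS at once, it is only the large object data left behind
in pg_largeobject that never shrinks:

-- hypothetical partition name, for illustration only; the large objects
-- this table referenced stay in pg_largeobject until vacuumlo unlinks
-- them, and a plain vacuum only marks those pages reusable rather than
-- handing them back to the OS
DROP TABLE IF EXISTS xml_docs_20111223;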
Using either VACUUM FULL or CLUSTER to compact pg_largeobject would require a
large amount of working space, which I do not have on this server.
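As I understand it, both commands rewrite pg_largeobject into a fresh file
(CLUSTER always has; VACUUM FULL does from 9.0 onwards), so the old and the
compacted copies sit on disk together until the rewrite commits, and the
index on (loid, pageno) is rebuilt as well. That puts the working space
needed at roughly:

-- strictly an upper bound: the new copy is the compacted size, which is
-- smaller than the current bloated heap
SELECT pg_size_pretty(pg_relation_size('pg_catalog.pg_largeobject'))
       AS heap_only,
       pg_size_pretty(pg_total_relation_size('pg_catalog.pg_largeobject'))
       AS heap_plus_index;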
Is there another way of reorganising a Postgres table that moves the live
rows together and releases the freed space back to the OS?
Failing this, I can see an NFS mount being required.
Simon
Simon Windsor
Eml: simon(dot)windsor(at)cornfield(dot)org(dot)uk
Tel: 01454 617689
Mob: 07590 324560
"There is nothing in the world that some man cannot make a little worse and
sell a little cheaper, and he who considers price only is that man's lawful
prey."