Re: Data files became huge with no apparent reason

From: Martijn van Oosterhout <kleptog(at)svana(dot)org>
To: Dario Fumagalli <dfumagalli(at)tin(dot)it>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Data files became huge with no apparent reason
Date: 2002-08-28 09:12:58
Message-ID: 20020828191258.A27041@svana.org
Lists: pgsql-general

On Wed, Aug 28, 2002 at 08:45:39AM +0200, Dario Fumagalli wrote:
> [(Please disregard my previous post; this one is more up to date)]
>
> Hello to all,
>
> I've just returned from a two-week vacation and I'm now facing a
> problem that is effectively blocking a critical database server.

[snip]

> That is, this DB is 500+ MB!
>
> And to think that I perform VACUUM ANALYZE at least weekly!
> The only period I didn't VACUUM the database was during the mentioned
> two-week vacation.

Weekly?!? Daily at the very least. Put it in a cronjob.
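Something like this in the postgres user's crontab does the trick (the
path assumes a default source install, so adjust for your packaging; if
your vacuumdb doesn't take these switches, a "psql -c 'VACUUM ANALYZE'"
per database is equivalent):

    # nightly vacuum + statistics update of all databases, at 3am
    0 3 * * * /usr/local/pgsql/bin/vacuumdb --all --analyze >/dev/null 2>&1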

> The database filled its partition and the backend crashed.
> This morning, when I was told of the malfunction, I tried to restart the
> daemon with:
>
> /etc/rc.d/init.d/postgres start
>
> as always and it blew up.
> Then I tried manually and it told me about a FATAL 2 error (searching
> the archives revealed it means "disk full", and from that I understood
> that the database had grown too large).

Well, all sorts of things go funny when the disk is full. You didn't mention
what version you're running.
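If you're not sure yourself, psql will tell you:

    psql -c "SELECT version();" template1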

> Now, the questions are:
> - How is this exceptional growth possible (N.B. the database server
> has all the default values set in its configuration files - i.e. it was
> not "optimized")?

VACUUM more often. UPDATEs and DELETEs don't reuse space by themselves;
the old row versions just sit there as dead tuples until a VACUUM reclaims
them, so the more you do between vacuums, the bigger your database gets.
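Schematically, with a made-up table:

    UPDATE stock SET qty = qty + 1;
    -- every row now exists twice: the new version plus a dead old one.
    -- Run this daily for two weeks without vacuuming and the table is
    -- roughly 15 times its live size.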

> - More urgently: why does it now take 30 seconds to perform a select
> count(*) on 900 records (no other clients connected), while in the past
> it was almost instantaneous? All database operations are now slow as
> dogs. And I have to live with this 52 MB until the technician comes
> with a new disk (1 week... he is on vacation now).

Just vacuum the largest table first. Do an ls -l in your database's
directory under $PGDATA/base to find the largest file, work out which
table it belongs to, then do a VACUUM VERBOSE on just that table and post
the results.
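Roughly like this, with the obvious placeholders filled in (the
relfilenode lookup needs 7.1 or later; before that the data files are
simply named after their tables):

    cd $PGDATA/base/<your database's directory>
    ls -l | sort -k5 -n -r | head        # biggest files first
    psql yourdb -c "SELECT relname FROM pg_class WHERE relfilenode = <number>;"
    psql yourdb -c "VACUUM VERBOSE <that table>;"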

> - Why does the backend crash immediately if I try to VACUUM (ANALYZE)
> it? I think the backend claims additional disk space for this operation
> and fills the disk again, but I'm not sure.

Please paste the log output.

> - And last, but not least... is it possible to restore the situation
> without losing data (the backup is 3 weeks old)? I'm able to start the
> daemon and perform SQL operations, but I don't know how to make
> PostgreSQL release the disk space after I dump the database in order
> to reload it. And I fear an initdb will destroy db users and their
> privileges.

If you want to do a complete restart, do a pg_dumpall to a file (check
the result). Drop all the databases and reload the backup. That will free
everything. And don't worry about initdb: pg_dumpall saves the users and
their privileges along with the data, so they come back on reload.
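In outline, assuming you have somewhere with enough free room for the
dump file:

    pg_dumpall > /somewhere/backup.sql   # eyeball this file before going on!
    # drop the databases (dropdb each one), or move $PGDATA aside and initdb
    psql -f /somewhere/backup.sql template1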

Hope this helps,

--
Martijn van Oosterhout <kleptog(at)svana(dot)org> http://svana.org/kleptog/
> There are 10 kinds of people in the world, those that can do binary
> arithmetic and those that can't.
