From: Craig Ringer <craig(at)postnewspapers(dot)com(dot)au>
To: Steve Clark <sclark(at)netwolves(dot)com>
Cc: Peter Eisentraut <peter_e(at)gmx(dot)net>, pgsql-general(at)postgresql(dot)org, Dan Armbrust <daniel(dot)armbrust(dot)list(at)gmail(dot)com>
Subject: Re: recover corrupt DB?
Date: 2009-05-01 12:42:53
Message-ID: 49FAEE4D.90801@postnewspapers.com.au
Lists: pgsql-general
> On all our servers we have a cron job that runs daily and reports disk
> usage stats.
> Maybe you need something similar.
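A daily cron-driven disk report like the one mentioned above could be a simple sketch along these lines (the 80% threshold and the warning format are assumptions, not details from the post):

```shell
#!/bin/sh
# Hypothetical daily cron job: warn about filesystems above a usage threshold.
# The threshold value is an assumption for illustration.
THRESHOLD=80

# df -P gives POSIX single-line-per-filesystem output; column 5 is Capacity,
# column 6 is the mount point.
df -P | awk -v limit="$THRESHOLD" 'NR > 1 {
    use = $5
    sub(/%/, "", use)           # strip the trailing percent sign
    if (use + 0 >= limit)
        printf "WARNING: %s is %s%% full (%s)\n", $6, use, $1
}'
```

In a crontab this might run as `0 6 * * * /usr/local/bin/disk-report.sh`, with cron mailing any output to the administrator; of course, as noted below, a report alone does not prevent a sudden fill-up.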
Of course. I have Cacti running to monitor disk usage on all my servers.
That doesn't help if a user creates several duplicates of a huge table,
or otherwise gobbles disk space. There's always the *potential* to run
out of disk space, and I'm concerned that Pg doesn't handle that
gracefully. I agree it shouldn't happen, but Pg shouldn't mangle the DB
when it does, either.
--
Craig Ringer