Re: Zeroed out pages at the end of the database files

From: Jerry Sievers <gsievers19(at)comcast(dot)net>
To: Drazen Kacar <drazen(dot)kacar(at)oradian(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Zeroed out pages at the end of the database files
Date: 2013-10-31 20:18:16
Message-ID: 86k3gtnqif.fsf@jerry.enova.com
Lists: pgsql-admin

Drazen Kacar <drazen(dot)kacar(at)oradian(dot)com> writes:

> Hello,
>
> I'm running PostgreSQL 9.2.4 on Ubuntu 12.04.3 LTS. It's being used by an application that has recently gone from demo (very, very low usage) to production (low, but
> constant usage). It runs the autovacuum daemon and zero_damaged_pages configuration parameter is off.
>
> Since Postgres 9.2 doesn't have block checksums I implemented a poor man's consistency checking: on the backup server every backup is unpacked and pg_filedump <db_file>
> | grep Error is run on every database file. I have never seen any error reported by that, until the last Sunday the first one appeared.

Are you running pg_filedump against either (a) a live DB, or (b) an
unpacked hot backup that hasn't been started and allowed to reach a
consistent recovery state?

If so, I wouldn't expect a reliable outcome, though if this had been done
in the past on a system with very little throughput, it might have worked
by luck.
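
Something along these lines might do it; this is only a rough sketch, not a
tested recipe. The restore location, port, and the assumption that a suitable
recovery.conf/restore_command is already in place are all placeholders:

RESTORE=/srv/restore/pgdata      # hypothetical path to the unpacked backup

# Start the copy, let it replay WAL until it is consistent, then stop it
# cleanly; the clean shutdown writes a final checkpoint.
pg_ctl -D "$RESTORE" -o "-p 5499" -w -t 600 start
pg_ctl -D "$RESTORE" -m fast -w stop

# Only after that clean shutdown run the pg_filedump sweep against $RESTORE.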

> It turned out that pg_filedump reported errors for 34 database files, but that was because all those files had one or more database pages (8k) at the end of the file
> zeroed out. So pg_filedump couldn't find the proper header and thus reported an error. This was also the case on the main server, so it wasn't some error present on the
> backup server only. Database files stay in this condition for a time (I'm not sure how long) and then the zeroed out pages disappear. Postgres doesn't report any kind
> of an error.
>
> I want to ask if this behaviour is normal.
>
> I know that autovacuum can truncate the db file if it happens that the last page(s) don't contain live data, so this behaviour could be that, but it seems weird that
> the pages would first be zeroed out and then, some time later, the file truncated. Why not truncate it immediately?
>
> My checking incantation is:
>
> cd $PGDATA
> for f in `find base global -type f -name '[0-9]*' \! -empty`; do
>     pg_filedump $f | grep -q Error
>     [[ $? -eq 0 ]] && echo $f
> done
>
> And later, to verify that all errors are zeroed-out pages at the end of the file:
>
> pg_filedump -d <file>
>

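For separating "trailing zeroed pages" from anything more sinister, a check
like the one below could be automated. It's only a sketch (the 8 kB block
size and the single-file invocation are assumptions, not the script from the
report above):

#!/bin/bash
# Count trailing all-zero 8 kB pages in one relation file.
BLKSZ=8192
f=$1
pages=$(( $(stat -c %s "$f") / BLKSZ ))

zeroed=0
while (( pages - zeroed > 0 )); do
    last=$(( pages - zeroed - 1 ))               # page index to test, from the end
    if dd if="$f" bs=$BLKSZ skip=$last count=1 2>/dev/null \
         | cmp -s - <(head -c $BLKSZ /dev/zero); then
        zeroed=$(( zeroed + 1 ))                 # all zeros; keep walking backwards
    else
        break                                    # first non-zero page from the end
    fi
done
echo "$f: $zeroed zeroed page(s) at the end"

A nonzero count here, with pg_filedump -d showing nothing else amiss, would
match the pattern you describe.
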
--
Jerry Sievers
Postgres DBA/Development Consulting
e: postgres(dot)consulting(at)comcast(dot)net
p: 312.241.7800
