Re: Recovery/Restore and Roll Forward Question.

From: Bruce McAlister <bruce(dot)mcalister(at)blueface(dot)ie>
To: Richard Huxton <dev(at)archonet(dot)com>
Subject: Re: Recovery/Restore and Roll Forward Question.
Date: 2007-06-21 10:16:51
Message-ID: 467A5013.1040907@blueface.ie
Lists: pgsql-general

Richard Huxton wrote:
> Bruce McAlister wrote:
>> Hi All,
>>
>> Is it at all possible to "roll forward" a database with archive logs
>> when it has been recovered using a dump?
>>
>> Assuming I have the archive_command and archive_timeout parameters set
>> on our "live" system, then I follow these steps:
>>
>> [1] pg_dump -d database > /backup/database.dump,
>> [2] initdb new instance on recovery machine,
>> [3] psql -f ./database.dump,
>> [4] shutdown new recovered db,
>> [5] create recovery.conf,
>> [6] copy WALs from time of backup until time of recovery to temp dir
>> [7] start postgresql
>
> No. WALs track disk blocks, not table rows, so you need a file-level
> backup of the original installation.

Ah okay, that makes a little more sense now. I thought they actually
contained the queries, which were replayed against the database being
recovered.
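
So, if I understand correctly, a file-level setup would look roughly
like the following; the archive directory and data paths here are
illustrative, not from our actual config:

    # postgresql.conf on the live server: ship completed WAL segments
    #   archive_command = 'cp %p /backup/wal_archive/%f'

    # On the recovery machine: restore the file-level base backup into
    # an empty data directory
    cd /var/lib/pgsql && gunzip -c /backup/base.cpio.gz | cpio -id

    # data/recovery.conf, created before starting the server:
    #   restore_command = 'cp /backup/wal_archive/%f "%p"'

    # Start the server; it replays the archived WALs and then comes up
    pg_ctl -D data start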

>
>> I suspect I will have some problems somewhere along the way; however,
>> I don't know enough about the internals of PostgreSQL to see whether
>> there are additional steps I need to follow.
>>
>> In our environment it takes approx 2 hours to perform a PIT backup of
>> our live system:
>>
>> [1] select pg_start_backup('label')
>> [2] cpio & compress database directory (exclude wals)
>> [3] select pg_stop_backup()
>>
>> However, if we perform a plain dump (pg_dump/pg_dumpall), we can dump
>> the whole lot in 15 minutes. For us this is more efficient.
>
> It sounds like there's something strange with your setup if it's quicker
> for pg_dump to read your data than cp. Do you have *lots* of indexes, or
> perhaps a lot of dead rows? What's the bottleneck with cpio+compress -
> cpu/disk/network?

That's exactly what I think. There is something strange going on. At the
moment I suspect it is the disk I am writing the data to that is slow,
possibly because it is mounted with "forcedirectio" so that backup writes
do not interfere with the filesystem cache, which we want to keep mainly
for the pg datafiles, and because the RAID controller has this particular
logical drive configured as write-through, so there is no buffering
in between. The CPUs and network are not the problem here (2 x dual-core
Opterons and quad gigabit Ethernet; total CPU usage is around 10% and the
NICs are each pushing around 3 Mbit/s).
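
As a quick sanity check on the destination disk, I can time a large
sequential write to it; the path here is just an example:

    # Write 1GB of zeros in 1MB blocks to the backup filesystem and time it
    time dd if=/dev/zero of=/backup/ddtest bs=1048576 count=1024
    rm /backup/ddtest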

It's not all that big, to be honest: the total database size is around
11GB, and I'm currently racking my brain to find a way to improve the
backup times without adversely affecting our running instance. I recently
tried to use UFS snapshots, but the backing store filled up before I
could complete a backup of the snapshot. I need to find a way to improve
the write speed of our destination disk. I also have another question
open in this group about autovacuum not running on one of our database
tables, which adds an average of around 2.1GB of bloat to the database
each day. For now I've scheduled a cron job every 10 minutes to work
around this (see the sketch below). Hopefully that should reduce the
amount of data backed up by about 2GB once we get to the bottom of the
autovacuum issue.
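
The interim cron job is nothing fancy; roughly the following, where the
table name is made up for illustration:

    # Run VACUUM ANALYZE on the bloating table every 10 minutes.
    # (Plain "0,10,..." syntax rather than */10, since Solaris cron
    # does not understand step values.)
    0,10,20,30,40,50 * * * * psql -d database -c "VACUUM ANALYZE public.some_table" >/dev/null 2>&1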

>
>> The question is, how can we roll forward from our time of pg_dump, to
>> our most recent WAL (in case of failure - touch wood).
>
> Can't be done I'm afraid.

Thanks, I'll have to stick with PIT backups and find a way to improve
the speed.
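
For completeness, a minimal sketch of what those three PIT backup steps
look like in a script, with the label and paths being illustrative:

    # [1] Tell the server a base backup is starting
    psql -d database -c "SELECT pg_start_backup('label');"

    # [2] Archive the data directory, excluding the WAL segments
    cd /var/lib/pgsql
    find data -print | grep -v 'data/pg_xlog' | cpio -o | gzip > /backup/base.cpio.gz

    # [3] Tell the server the base backup is finished
    psql -d database -c "SELECT pg_stop_backup();"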

