Re: Large Database Restore

From: Ron Johnson <ron(dot)l(dot)johnson(at)cox(dot)net>
To: pgsql-general(at)postgresql(dot)org
Subject: Re: Large Database Restore
Date: 2007-05-17 21:16:54
Message-ID: 464CC646.6060802@cox.net
Lists: pgsql-general

Yes, but that's not always a valid assumption.

And PITR must still update the indexes at each insert, which is much
slower than the "bulk load, then create indexes" pattern of a pg_dump restore.
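The cost difference can be sketched in miniature with a toy Python analogy
(this is an illustration of the general principle, not PostgreSQL internals):
keeping an "index" sorted after every single insert does far more work than
loading all the rows first and building the index once at the end.

```python
import bisect
import random

random.seed(0)
rows = [random.random() for _ in range(10_000)]

def per_insert_index(data):
    """PITR-style replay: the 'index' is kept ordered after every insert."""
    idx = []
    for r in data:
        bisect.insort(idx, r)  # shifts elements on every insert: O(n^2) overall
    return idx

def bulk_then_index(data):
    """pg_dump-style restore: bulk load first, build the index once."""
    idx = list(data)
    idx.sort()                 # a single O(n log n) pass at the end
    return idx

# Both strategies end up with the same index; only the work differs.
assert per_insert_index(rows) == bulk_then_index(rows)
```

The same final state, but the per-insert path repeatedly reorganizes the
structure, which is the overhead Ron is describing for WAL replay.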

On 05/17/07 16:01, Ben wrote:
> Yes, but the implication is that large databases probably don't update
> every row between backup periods.
>
> On Thu, 17 May 2007, Ron Johnson wrote:
>
>> On 05/17/07 11:04, Jim C. Nasby wrote:
>> [snip]
>>>
>>> Ultimately though, once your database gets past a certain size, you
>>> really want to be using PITR and not pg_dump as your main recovery
>>> strategy.
>>
>> But doesn't that just replay each transaction? It must manage the
>> index nodes during each update/delete/insert, and multiple UPDATE
>> statements mean that you hit the same page over and over again.

--
Ron Johnson, Jr.
Jefferson LA USA

Give a man a fish, and he eats for a day.
Hit him with a fish, and he goes away for good!

