From: Phoenix Kiula <phoenix(dot)kiula(at)gmail(dot)com>
To: Robins Tharakan <robins(dot)tharakan(at)comodo(dot)com>
Cc: PG-General Mailing List <pgsql-general(at)postgresql(dot)org>
Subject: Re: Incremental backup with RSYNC or something?
Date: 2011-11-13 13:23:10
Message-ID: CAFWfU=udscXL10zaNDoYjF4QTULNfNKo1CVU0VJFtwUAmuvgoQ@mail.gmail.com
Lists: pgsql-general
On Sun, Nov 13, 2011 at 8:42 PM, Robins Tharakan
<robins(dot)tharakan(at)comodo(dot)com> wrote:
> Hi,
>
> Well, the 'complex' stuff is there only for larger or high-traffic DBs.
> Besides, at 60GB that is a largish DB in itself, and you should begin to
> try out a few other backup methods nonetheless. All the more so if you
> are taking entire DB backups every day; you would save considerably on
> (backup) storage.
Thanks. I keep only the last 6 days of backups, plus a monthly backup
taken on Day 1. So it's not piling up or anything.
What "other methods" do you recommend? That was in fact my question.
Do I need to install some modules?
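Is a filesystem-level base backup the kind of thing you mean? Just
guessing at the shape of it here (assuming the data directory is
/var/lib/pgsql/data and WAL archiving via archive_command is already
configured in postgresql.conf):

psql -U postgres -c "SELECT pg_start_backup('nightly');"   # flag backup start
rsync -a --delete /var/lib/pgsql/data/ /backup/pg/base/    # copy the live cluster
psql -U postgres -c "SELECT pg_stop_backup();"             # flag backup end
# (a restore would also need the WAL files archived between start and stop)

Or is there something more packaged I should be looking at?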
> Anyway, as for pg_dump, we have a DB 20x bigger than the one you mention
> (1.3TB) and it takes only half a day to do a pg_dump + gzip (both). One
> thing that comes to mind: how are you compressing? I hope you are doing
> this in one operation (or at least piping pg_dump to gzip before writing
> to disk)?
I'm gzipping with this command (this is my backup.sh):

DATE=$(date +%Y%m%d)                       # datestamp for the filename
BKPFILE=/backup/pg/dbback-${DATE}.sql
pg_dump MYDB -U MYDB_MYDB -f ${BKPFILE}    # dump to plain SQL on disk
gzip --fast ${BKPFILE}                     # then compress in a second pass
Is this good enough? Sadly, this takes up over 97% of the CPU when it's running!
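If piping is the fix, would this be the right shape? (Same DB and path as
above; the nice is my own addition to keep gzip from hogging the CPU.)

# Piped variant (sketch): no intermediate .sql file on disk,
# and gzip runs concurrently with the dump at low CPU priority.
pg_dump MYDB -U MYDB_MYDB | nice -n 19 gzip --fast > /backup/pg/dbback-$(date +%Y%m%d).sql.gz

That would also spare the second full pass over the 60GB file that the
separate gzip step currently makes.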