From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: RaviKumar(dot)Mandala(at)versata(dot)com
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Database backup mechanism
Date: 2007-02-09 14:17:43
Message-ID: 45CC8287.8070500@dunslane.net
Lists: pgsql-hackers
RaviKumar(dot)Mandala(at)versata(dot)com wrote:
>
> Hi Folks,
>
> We have a requirement to deal with large databases, on the order of
> terabytes, when we go into production. What is the best database
> backup mechanism, and what are the possible issues?
>
> pg_dump can back up a database, but the dump file is limited by the OS
> file-size limit. What about the option of compressing the dump file?
> How much time does it generally take for large databases? I have heard
> that it could take very long (even a day or more). I haven't tried it
> out, though.
>
> What about taking a zipped backup of the database directory? We tried
> this out, but the checkpoint data in the pg_xlog directory is also being
> backed up. Since these logs keep on growing from day 1 of database
> creation, the backup size is increasing drastically.
> Can we back up certain subdirectories without loss of information or
> consistency?
>
> Any quick comments/suggestions in this regard would be very helpful.
>
Please ask in the correct forum, either pgsql-general or pgsql-admin.
This list is strictly for discussion of development of postgres, not
usage questions.
(If all you need is a pg_dump backup, maybe you could just pipe its
output to something like 'split -a 5 -b 1000m - mybackup')
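A rough sketch of that, assuming a database called 'mydb' (the name and
the 'mybackup.gz.' prefix are just placeholders, and the gzip step is
optional if you only need to stay under the file-size limit):

    # dump, compress, and cut into files of at most 1000MB each
    pg_dump mydb | gzip | split -a 5 -b 1000m - mybackup.gz.

    # to restore into an existing, empty database, roughly:
    cat mybackup.gz.* | gunzip | psql mydb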
cheers
andrew