| From: | RaviKumar(dot)Mandala(at)versata(dot)com |
|---|---|
| To: | pgsql-hackers(at)postgresql(dot)org |
| Subject: | Database backup mechanism |
| Date: | 2007-02-09 07:15:07 |
| Message-ID: | OFED7CC96C.6C7BD5CC-ON8625727D.0026DFB1-6525727D.0027B5D6@trilogy.com |
| Lists: | pgsql-hackers |
Hi Folks,
We have a requirement to deal with large databases, on the order of terabytes,
when we go into production. What is the best database backup mechanism, and
what are the possible issues?
pg_dump can back up a database, but the dump file is limited by the OS
file-size limit. What about the option of compressing the dump file? How much
time does it generally take for large databases? I have heard it can take far
too long (even one or two days), though I haven't tried it out myself.
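For what it's worth, the standard workarounds in the PostgreSQL documentation
pipe the dump through a compressor, or through split, so that no single
uncompressed file ever hits the OS limit. A rough sketch (the database name
mydb and the 1 GB chunk size are just placeholders):

    # Compress the dump on the fly
    pg_dump mydb | gzip > mydb.sql.gz

    # Or split the compressed dump into 1 GB pieces
    pg_dump mydb | gzip | split -b 1000m - mydb.sql.gz.

    # Or use pg_dump's custom archive format, which is compressed by
    # default and can be restored selectively with pg_restore
    pg_dump -Fc mydb > mydb.dump

Restoring the split variant is just the reverse pipeline:
cat mydb.sql.gz.* | gunzip | psql mydb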
What about taking a zipped backup of the database directory? We tried this
out, but the checkpoint data in the pg_xlog directory is also being backed up.
Since these logs keep growing from day 1 of database creation, the backup size
is increasing drastically.
Can we back up certain subdirectories without loss of information or
consistency?
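For reference, a file-level copy of a running cluster is only consistent as a
whole, together with its WAL, so pruning subdirectories from a plain zipped
copy is not safe. The supported route in the 8.x series is an online base
backup with WAL archiving (point-in-time recovery), which does let you exclude
pg_xlog from the tarball because the archived segments replace it. A minimal
sketch, assuming a hypothetical data directory /usr/local/pgsql/data and
archive area /mnt/backup:

    # postgresql.conf: archive every completed WAL segment
    archive_command = 'cp "%p" /mnt/backup/wal/"%f"'

    # Take the base backup while the server keeps running
    psql -c "SELECT pg_start_backup('base');"
    tar -czf /mnt/backup/base.tar.gz --exclude=pg_xlog /usr/local/pgsql/data
    psql -c "SELECT pg_stop_backup();"

Archived WAL older than the most recent base backup can then be discarded,
which keeps the backup growing with the data rather than with the log history.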
Any quick comments/suggestions in this regard would be very helpful.
Thanks in advance,
Ravi Kumar Mandala