From: Steven Lembark <lembark(at)wrkhors(dot)com>
To: pgsql-general(at)lists(dot)postgresql(dot)org
Cc: lembark(at)wrkhors(dot)com
Subject: Re: Backup PostgreSQL from RDS straight to S3
Date: 2019-09-19 06:03:44
Message-ID: 20190919010344.7f7c9bb7.lembark@wrkhors.com
Lists: pgsql-general
s3fs, available on Linux, allows mounting an S3 bucket directly as a
local filesystem. At that point something like:
pg_dump ... | gzip -9 -c > /mnt/s3-mount-point/$basename.pg_dump.gz;
will do the deed nicely.
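For reference, a minimal mount sketch; the bucket name, mount point,
and credentials file are placeholders, and the exact options depend
on the s3fs-fuse version you have installed:

    # mount the bucket at the path used by the dump commands above
    s3fs your-bucket-name /mnt/s3-mount-point -o passwd_file=${HOME}/.passwd-s3fs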
If your S3 volume is something like your_name_here.com/pg_dump then
you can parallelize the job by dumping separate databases into paths
based on the date and database name:
tstamp=$(date +%Y.%m.%d-%H.%M.%S);
gzip='/bin/gzip -9 -v';
dump='/opt/postgres/bin/pg_dump -blah -blah -blah';

# the per-run directory has to exist before the dumps start
mkdir -p "/mnt/pg-backups/$tstamp";

for i in your database list
do
    echo "Dump: '$i'";
    $dump "$i" | $gzip > "/mnt/pg-backups/$tstamp/$i.dump.gz" &
done

# at this point however many databases are dumping...
wait;
echo "Goodnight.";
If you prefer to keep only a few database backups (e.g., a rolling
weekly history) then use the day-of-week for the tstamp; if you want
to keep some other number of them then
$(( $(date +%s) / 86400 % $num_backups )) will do (leap seconds
notwithstanding).
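A minimal sketch of both rotation schemes; the retention depth of
five is a placeholder:

    # rolling weekly history: one slot per weekday, overwritten a week later
    tstamp=$(date +%a);

    # rolling N-day history: the slot number cycles through 0 .. N-1
    num_backups=5;
    tstamp=$(( $(date +%s) / 86400 % num_backups ));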
Check rates to see which AWS location is cheapest for the storage
and the processing to gzip the content. Also check the CPU charges
for zipping vs. storing the data -- it may be cheaper in the long run
to use "gzip --fast" with smaller, more repetitive content than
to pay the extra CPU charges for "gzip --best".
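A quick way to compare the two settings is to time a dump of one
database at each level and look at the compressed size; the database
name here is hypothetical:

    # wall-clock time and compressed size for --fast vs. --best
    for level in --fast --best
    do
        echo "gzip $level:";
        time pg_dump your_database | gzip "$level" | wc -c;
    done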
--
Steven Lembark 3646 Flora Place
Workhorse Computing St. Louis, MO 63110
lembark(at)wrkhors(dot)com +1 888 359 3508