From: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
To: "pgsql-general(at)lists(dot)postgresql(dot)org" <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: Backup
Date: 2024-10-16 19:55:30
Message-ID: CANzqJaBfaOvbjJYYCPPC-JKFEw=pZTGkfwL3+-t2fjP7B_9U+g@mail.gmail.com
Lists: pgsql-general
On Wed, Oct 16, 2024 at 3:37 PM Andy Hartman <hartman60home(at)gmail(dot)com> wrote:
> I am very new to Postgres and have always worked in the mssql world. I'm
> looking for suggestions on DB backups. I currently have a DB used to store
> historical information that has images; it's currently around 100 GB.
>
> I'm looking to take a monthly backup, as I archive a month of data at a
> time. I'd like it to be compressed, and I have a machine with multiple
> CPUs and ample memory.
>
> Suggestions on things I can try? I did a pg_dump using these parameters:
> --format=t --blobs lobarch
>
> it ran my device out of storage:
>
> pg_dump: error: could not write to output file: No space left on device
>
> I have 150 GB free on my backup drive... can obviously add more
>
> I'm looking for the quickest and smallest backup file output...
>
> Thanks again for any help/suggestions.
>
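But first, about the out-of-space error: the tar format (--format=t) writes an
uncompressed archive, so the dump of a ~100 GB database (plus its large
objects) can easily exceed 150 GB of free space. The directory format
compresses each table's data and can use your multiple CPUs with parallel
workers. A rough sketch, where the /backup path and job count are placeholders
to adapt:

    # directory-format dump: per-table gzip compression plus 4 parallel workers
    pg_dump --format=directory --jobs=4 --compress=9 \
            --file=/backup/lobarch.dump lobarch

    # pg_restore can read the same archive in parallel
    pg_restore --jobs=4 --dbname=lobarch_restored /backup/lobarch.dump

(If your pg_dump is v16 or newer, --compress=zstd:5 usually gives a smaller
file, faster, than gzip.)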
Step 1: redesign your DB to *NOT* use large objects. It's an old, slow, and
unmaintained data type. The bytea data type is what you should use instead.
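To illustrate, a minimal sketch of such a migration; the table and column
names (images, img_oid, img_data) are hypothetical, and the UPDATE rewrites
the whole table, so try it on a copy first:

    # convert a hypothetical large-object OID column to bytea, in one transaction
    psql -d lobarch <<'SQL'
    BEGIN;
    ALTER TABLE images ADD COLUMN img_data bytea;
    UPDATE images SET img_data = lo_get(img_oid);  -- lo_get() returns a large object's contents as bytea
    SELECT lo_unlink(img_oid) FROM images;         -- unlink the now-unreferenced large objects
    ALTER TABLE images DROP COLUMN img_oid;
    COMMIT;
    SQL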
Step 2: show us the "before" df output, the whole pg_dump command, and the
"after" df output when it fails. "du -c --max-depth=0 $PGDATA/base" would also
be very useful. And tell us what version you're running; something like the
sketch below would capture all of it.
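Concretely, along these lines, assuming /backup is the mount point of your
backup drive:

    df -h /backup                           # "before" free space
    pg_dump --format=t --blobs --file=/backup/lobarch.tar lobarch
    df -h /backup                           # "after", when it fails
    du -c --max-depth=0 "$PGDATA"/base      # on-disk size of all databases
    psql -XtAc 'SELECT version();'          # server version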
--
Death to <Redacted>, and butter sauce.
Don't boil me, I'm still alive.
<Redacted> crustacean!