From: Rajesh Kumar <rajeshkumar(dot)dba09(at)gmail(dot)com>
To: Ron Johnson <ronljohnsonjr(at)gmail(dot)com>
Cc: pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org>
Subject: Re: Postgres storage migration
Date: 2023-12-08 19:05:13
Message-ID: CAJk5Atb-Vei+=sCYc+i1HPxZyrAOB=F+LDr-jjQ+8fdzpNBYjQ@mail.gmail.com
Lists: pgsql-admin
pg_dump -a -v -j2 -Z0 -Fd -d dbname -f dumpfilename
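For completeness, the restore side of that command might look like the sketch below. It is not from the thread: `dbname` and `dumpfilename` are the same placeholders as in the dump command, the schema is assumed to already exist in the target cluster, and `-j` should be tuned to the CPUs actually available (the poster reports only 2).

```shell
# Parallel, data-only restore of the directory-format dump above (sketch).
# -a   data only (schema must already exist in the target database)
# -j2  two parallel worker jobs, matching the dump's -j2
# -Fd  directory-format archive (also auto-detected from the input)
pg_restore -a -v -j2 -Fd -d dbname dumpfilename
```

Note that `-Z0` on the dump trades disk space for CPU: with only 2 CPUs, skipping compression can speed up both the dump and the restore.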
On Fri, 8 Dec, 2023, 5:04 PM Ron Johnson, <ronljohnsonjr(at)gmail(dot)com> wrote:
> On Fri, Dec 8, 2023 at 4:44 AM Rajesh Kumar <rajeshkumar(dot)dba09(at)gmail(dot)com>
> wrote:
>
>> Hi
>>
>> We are using an OpenShift environment and Postgres version 15.2. We want to
>> change storage from Ceph to local storage, so the storage team made local
>> storage the default. I created a new cluster with the same Postgres version,
>> and I am planning to take a backup from the old cluster to the new cluster.
>> Size is 100 GB, RAM 24 GB, 2 CPUs.
>>
>> My question is, is there a combination of pg_dump and pg_restore that
>> takes less time to complete the task?
>>
>> Last time it took more than 8 hours. We were taking a schema-only dump using
>> pg_dumpall, then a data-only backup using pg_dump in directory format.
>>
>
> 8 hours is really slow for just 100 GB of database. What exact command are
> you using?
>
> Paul Smith is right, though: Just shut down the instance, copy the files,
> and start up the instance with new "-D" location.
> (Will need to edit postgresql.conf if it defines file locations.)
>
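The file-level migration Ron describes could be sketched as follows. The paths are hypothetical examples (an actual OpenShift deployment would move the data between persistent volumes), and this approach assumes the same major version on both sides and a clean shutdown before copying:

```shell
# Physical (file-level) migration of a cluster's data directory (sketch).
# OLD_PGDATA and NEW_PGDATA are hypothetical example paths.
OLD_PGDATA=/var/lib/postgresql/15/data      # on Ceph-backed storage
NEW_PGDATA=/mnt/local/postgresql/15/data    # on local storage

pg_ctl -D "$OLD_PGDATA" stop -m fast        # clean shutdown first
cp -a "$OLD_PGDATA"/. "$NEW_PGDATA"/        # -a preserves permissions/ownership
pg_ctl -D "$NEW_PGDATA" start               # start from the new location
```

As noted above, if postgresql.conf pins absolute file locations (e.g. `data_directory` or tablespace paths), those must be edited to point at the new storage before starting.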