| From: | Ron Johnson <ronljohnsonjr(at)gmail(dot)com> |
|---|---|
| To: | Rajesh Kumar <rajeshkumar(dot)dba09(at)gmail(dot)com> |
| Cc: | Pgsql-admin <pgsql-admin(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: Postgres storage migration |
| Date: | 2023-12-09 05:08:13 |
| Message-ID: | CANzqJaB1crxv5ueGvCf_e_s=kSCG7VZUHm2gi6ncVxa+SEHXNQ@mail.gmail.com |
| Lists: | pgsql-admin |
You're not being very helpful to the people who are trying to help you.
This works for me *every* time:
Source:
pg_dumpall -gvf ${BackupDir}/globals.sql 2> /dev/null
pg_dump -j ${Threads} -Z${ZLvl} -v -C -Fd --file=${SrcBackupDir}/$DB $DB 2> ${DB}_pgdump.log
Destination:
(Edit globals.sql and apply only the required statements to the target
instance.)
pg_restore -v --jobs=$Threads --clean --create -Fd --dbname=postgres ${TargetBackupDir}/$DB
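Putting the two steps above together, a minimal end-to-end sketch might look like this. All variable values are hypothetical, and the `run` dry-run wrapper is my addition so the commands can be previewed without a live cluster:

```shell
#!/bin/sh
# Sketch of the dump-and-restore workflow above. Variable values are
# hypothetical; adjust for your environment. DRYRUN defaults to 1, so
# the script only prints each command; set DRYRUN=0 to execute them.
: "${DRYRUN:=1}"
run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }

Threads=2           # parallel dump/restore jobs (match available CPUs)
ZLvl=0              # compression level; 0 trades disk space for CPU time
DB=mydb             # database to migrate (hypothetical name)
BackupDir=/backup   # must be reachable from both instances

# Source: dump globals (roles, tablespaces), then the database in
# directory format, which is what allows -j parallelism.
run pg_dumpall -gvf "${BackupDir}/globals.sql"
run pg_dump -j "$Threads" -Z"$ZLvl" -v -C -Fd --file="${BackupDir}/$DB" "$DB"

# Destination: apply the required globals first, then restore in
# parallel. --create issues CREATE DATABASE, so connect to "postgres".
run psql -f "${BackupDir}/globals.sql" postgres
run pg_restore -v --jobs="$Threads" --clean --create -Fd \
    --dbname=postgres "${BackupDir}/$DB"
```

With `DRYRUN=1` this only echoes the commands, which makes it easy to review the exact flags before running against a real cluster.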
On Fri, Dec 8, 2023 at 11:31 PM Rajesh Kumar <rajeshkumar(dot)dba09(at)gmail(dot)com>
wrote:
> We are facing an error while restoring. So I used dumpall to dump only
> the schema, restored the schema first, and am now dumping the data.
>
> On Sat, 9 Dec 2023, 01:30 Ron Johnson, <ronljohnsonjr(at)gmail(dot)com> wrote:
>
>> In --format=d mode, --file= is the *directory* where the data files are
>> stored, not a regular file.
>>
>> This works for me:
>> pg_dump -j${Threads} -Z${ZLvl} -v -C -Fd --file=${BackupDir}/$DB $DB 2> ${DB}_pgdump.log
>>
>> (Also, why are you only dumping the data if you need to migrate the
>> whole database?)
>>
>> On Fri, Dec 8, 2023 at 2:05 PM Rajesh Kumar <rajeshkumar(dot)dba09(at)gmail(dot)com>
>> wrote:
>>
>>> pg_dump -a -v -j2 -Z0 -Fd -d dbname -f dumpfilename
>>>
>>> On Fri, 8 Dec, 2023, 5:04 PM Ron Johnson, <ronljohnsonjr(at)gmail(dot)com>
>>> wrote:
>>>
>>>> On Fri, Dec 8, 2023 at 4:44 AM Rajesh Kumar <
>>>> rajeshkumar(dot)dba09(at)gmail(dot)com> wrote:
>>>>
>>>>> Hi
>>>>>
>>>>> We are using an OpenShift environment and Postgres version 15.2. We want
>>>>> to change storage from Ceph to local storage, so the storage team made
>>>>> local storage the default. I have created a new cluster with the same
>>>>> Postgres version and plan to take a backup from the old cluster to the
>>>>> new cluster. Size is 100 GB; RAM 24 GB, 2 CPUs.
>>>>>
>>>>> My question is, is there a combination of pg_dump and pg_restore that
>>>>> takes less time to complete the task?
>>>>>
>>>>> Last time it took more than 8 hours. We were taking a schema-only dump
>>>>> using dumpall, then a data-only backup using pg_dump in directory format.
>>>>>
>>>>
>>>> 8 hours is really slow for just a 100 GB database. What exact command
>>>> are you using?
>>>>
>>>> Paul Smith is right, though: just shut down the instance, copy the
>>>> files, and start the instance with the new "-D" location.
>>>> (You'll need to edit postgresql.conf if it defines file locations.)
>>>>
>>>
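The shutdown-and-copy approach recommended above can be sketched as follows. The paths are hypothetical, both clusters must run the same major version, and the `run` dry-run wrapper is my addition so the steps can be previewed safely:

```shell
#!/bin/sh
# Sketch of the shutdown-and-copy migration. Paths are hypothetical;
# this only works between clusters of the same Postgres major version.
# DRYRUN defaults to 1 (print commands only); set DRYRUN=0 to execute.
: "${DRYRUN:=1}"
run() { if [ "$DRYRUN" = 1 ]; then echo "$@"; else "$@"; fi; }

OLD_PGDATA=/var/lib/postgresql/15/ceph    # current ceph-backed data dir
NEW_PGDATA=/var/lib/postgresql/15/local   # new local-storage data dir

run pg_ctl -D "$OLD_PGDATA" stop -m fast   # clean shutdown first
run cp -a "$OLD_PGDATA/." "$NEW_PGDATA/"   # preserve modes and ownership
run pg_ctl -D "$NEW_PGDATA" start          # bring it up on local storage
# If postgresql.conf hard-codes data_directory or other file paths,
# edit them before starting the new instance.
```

For 100 GB this is typically far faster than dump/restore, since it copies files byte-for-byte instead of replaying SQL and rebuilding indexes.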