From: Ron <ronljohnsonjr(at)gmail(dot)com>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Upgrade from PG12 to PG
Date: 2023-07-20 16:59:25
Message-ID: 9502f877-c699-4f28-4bb3-4cd3753c14da@gmail.com
Lists: pgsql-admin
Don't use pg_dumpall. Use this instead:
pg_dump --format=directory --jobs=X --verbose
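For example, a minimal sketch (the database name "mydb", the output path,
and the job count are placeholders; pick --jobs based on your CPU cores
and I/O capacity):

    # Parallel dump of one database into directory format
    pg_dump --format=directory --jobs=8 --verbose \
            --file=/backup/mydb.dir mydb

    # Parallel restore into the new cluster (create the database there first)
    pg_restore --jobs=8 --verbose --dbname=mydb /backup/mydb.dir

Note that pg_dump dumps a single database; roles and tablespaces still need
pg_dumpall --globals-only.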
On 7/20/23 08:46, Jef Mortelle wrote:
> Hi,
>
> Many thanks for your answer.
>
> So: not possible to have very little downtime if you have a database with
> a lot of rows containing text as a datatype, as pg_upgrade needs 12 hr for
> 24 million rows in pg_largeobject.
>
> Testing now with pg_dumpall and pg_restore ....
>
>
> I think PostgreSQL should make resolving this problem a high priority.
>
> I have to make a choice in the near future: Postgres or Oracle, and that
> database would contain a lot of text data.
> The database would be 1 TB.
> It seems a bit tricky/dangerous to me to use Postgres if just being able
> to upgrade to a newer version is a problem.
>
> Kind regards.
>
> On 20/07/2023 13:43, Ilya Kosmodemiansky wrote:
>> Hi Jef,
>>
>>
>> On Thu, Jul 20, 2023 at 1:23 PM Jef Mortelle <jefmortelle(at)gmail(dot)com> wrote:
>>> Looking at the dump file: many lines like SELECT
>>> pg_catalog.lo_unlink('100000');
>>>
>>>
>>> I have the same issue with /usr/lib/postgresql15/bin/pg_upgrade -v -p
>>> 5431 -P 5432 -k
>>>
>>>
>>> What's going on?
>> pg_upgrade is known to be problematic with large objects.
>> Please take a look here to start with:
>> https://www.postgresql.org/message-id/20210309200819.GO2021%40telsasoft.com
>>
>>>
>>> Kind regards
>>>
>>>
>>>
>
>
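As a quick sanity check before committing to either path, you can gauge how
much of the database lives in large objects (a sketch; run it in each
database on the old cluster):

    -- Number of large objects (the factor that makes pg_dump/pg_upgrade slow,
    -- since each large object gets its own TOC entry in the dump)
    SELECT count(*) FROM pg_largeobject_metadata;

    -- On-disk size of the large-object data itself
    SELECT pg_size_pretty(pg_table_size('pg_largeobject'));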
--
Born in Arizona, moved to Babylonia.