From: Jef Mortelle <jefmortelle(at)gmail(dot)com>
To: pgsql-admin(at)lists(dot)postgresql(dot)org
Subject: Re: Upgrade from PG12 to PG
Date: 2023-07-20 13:46:19
Message-ID: dc88d14d-6d0f-2d67-ecfc-c7495bf1c22b@gmail.com
Lists: pgsql-admin
Hi,
Many thanks for your answer.
So: it is not possible to have very little downtime if you have a database
with a lot of rows containing text as datatype, as pg_upgrade needs 12 hours
for 24 million rows in pg_largeobject.
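For anyone reproducing this: the number of large objects driving a run like that (and the on-disk size of pg_largeobject) can be checked on the old cluster with a quick query; this is a generic sketch, not something from the thread:

```sql
-- one row per large object
SELECT count(*) AS large_objects FROM pg_largeobject_metadata;

-- size of the large-object data itself
SELECT pg_size_pretty(pg_table_size('pg_largeobject')) AS lo_size;
```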
Testing now with pg_dumpall and pg_restore ...
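For that test, a parallel directory-format dump with pg_dump is usually faster than a single pg_dumpall script. A command sketch, assuming a database named mydb, the PG15 binaries on the PATH, and the old/new clusters on ports 5431/5432 as in the pg_upgrade command quoted below; adjust names and paths to your setup:

```shell
# Roles and tablespaces are not in pg_dump output; dump them separately
pg_dumpall -p 5431 --globals-only > globals.sql

# Directory-format dump with 4 parallel workers; large objects are included by default
pg_dump -p 5431 -Fd -j 4 -f /backup/mydb.dir mydb

# Restore globals, then the database, also in parallel (-C creates mydb)
psql -p 5432 -f globals.sql postgres
pg_restore -p 5432 -j 4 -C -d postgres /backup/mydb.dir
```

Note that the -j parallelism mainly helps regular table data; large-object data has historically been dumped as a single archive entry, so pg_largeobject itself may remain the bottleneck.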
I think PostgreSQL should treat resolving this problem as a high
priority.
I have to make a choice in the near future between Postgres and Oracle,
and that database would have a lot of text columns.
The database would be about 1 TB.
It seems a bit tricky/dangerous to me to choose Postgres if just
upgrading to a newer version is this hard.
Kind regards.
On 20/07/2023 13:43, Ilya Kosmodemiansky wrote:
> Hi Jef,
>
>
> On Thu, Jul 20, 2023 at 1:23 PM Jef Mortelle <jefmortelle(at)gmail(dot)com> wrote:
>> Looking at the dump file: many, many lines like SELECT
>> pg_catalog.lo_unlink('100000');
>>
>>
>> I have the same issue with /usr/lib/postgresql15/bin/pg_upgrade -v -p
>> 5431 -P 5432 -k
>>
>>
>> What's going on?
> pg_upgrade is known to be problematic with large objects.
> Please take a look here to start with:
> https://www.postgresql.org/message-id/20210309200819.GO2021%40telsasoft.com
>
>>
>> Kind regards
>>
>>
>>