From: Alexander Korotkov <aekorotkov(at)gmail(dot)com>
To: Justin Pryzby <pryzby(at)telsasoft(dot)com>
Cc: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Nathan Bossart <nathandbossart(at)gmail(dot)com>, Michael Banck <mbanck(at)gmx(dot)net>, Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>, vignesh C <vignesh21(at)gmail(dot)com>, "Kumar, Sachin" <ssetiya(at)amazon(dot)com>, Robins Tharakan <tharakan(at)gmail(dot)com>, Jan Wieck <jan(at)wi3ck(dot)info>, Bruce Momjian <bruce(at)momjian(dot)us>, Andrew Dunstan <andrew(at)dunslane(dot)net>, Magnus Hagander <magnus(at)hagander(dot)net>, Peter Eisentraut <peter(dot)eisentraut(at)enterprisedb(dot)com>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: pg_upgrade failing for 200+ million Large Objects
Date: 2024-07-26 21:42:23
Message-ID: CAPpHfdsZ3L8gBggXmpDJN=TiPHmLohiszV1Bv-SQKYiJEFV8wA@mail.gmail.com
Lists: pgsql-hackers
On Fri, Jul 26, 2024 at 11:36 PM Justin Pryzby <pryzby(at)telsasoft(dot)com> wrote:
> On Wed, Jul 24, 2024 at 09:17:51AM -0500, Justin Pryzby wrote:
> > With partitioning, we have a lot of tables, some of them wide (126
> > partitioned tables, 8942 children, total 1019315 columns).
>
> On Fri, Jul 26, 2024 at 10:53:30PM +0300, Alexander Korotkov wrote:
> > It would be nice to identify such cases and check which memory contexts are
> > growing and why.
>
> I reproduced the problem with this schema:
>
> SELECT format('CREATE TABLE p(i int, %s) PARTITION BY RANGE(i)', array_to_string(a, ', ')) FROM (SELECT array_agg(format('i%s int', i))a FROM generate_series(1,999)i) \gexec
> SELECT format('CREATE TABLE t%s PARTITION OF p FOR VALUES FROM (%s)TO(%s)', i,i,i+1) FROM generate_series(1,999)i \gexec
>
> This used over 4 GB of RAM.
> 3114201 pryzbyj 20 0 5924520 4.2g 32476 T 0.0 53.8 0:27.35 postgres: pryzbyj postgres [local] UPDATE
>
> The large context is:
> 2024-07-26 15:22:19.280 CDT [3114201] LOG: level: 1; CacheMemoryContext: 5211209088 total in 50067 blocks; 420688 free (14 chunks); 5210788400 used
>
> Note that there seemed to be no issue when I created 999 tables without
> partitioning:
>
> SELECT format('CREATE TABLE t%s(LIKE p)', i) FROM generate_series(1,999)i \gexec
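(Not part of the original exchange: a minimal sketch of how the per-context numbers above can be observed, assuming PostgreSQL 14 or later, where the `pg_backend_memory_contexts` view and `pg_log_backend_memory_contexts()` are available. The PID is the one from the `top` output above.)

```sql
-- In the session being investigated: the largest memory contexts,
-- which is where CacheMemoryContext shows up when it bloats.
SELECT name, total_bytes, used_bytes
FROM pg_backend_memory_contexts
ORDER BY total_bytes DESC
LIMIT 5;

-- From another session: dump the target backend's full context tree
-- to the server log, producing lines like the LOG entry quoted below.
SELECT pg_log_backend_memory_contexts(3114201);
```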
Thank you! That was quick.
I'm looking into this.
------
Regards,
Alexander Korotkov
Supabase