From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: pgsql-hackers(at)lists(dot)postgresql(dot)org
Subject: Re: Overflow of attmissingval is not handled gracefully
Date: 2022-02-28 23:36:14
Message-ID: 8278b099-dc1b-234d-2ac2-39a7cc19b585@dunslane.net
Lists: pgsql-hackers
On 2/28/22 18:21, Tom Lane wrote:
> Consider this admittedly-rather-contrived example:
>
> regression=# create table foo(f1 int);
> CREATE TABLE
> regression=# alter table foo add column bar text default repeat('xyzzy', 1000000);
> ERROR: row is too big: size 57416, maximum size 8160
>
> Since the table contains no rows at all, this is a surprising
> failure. The reason for it of course is that pg_attribute
> has no TOAST table, so it can't store indefinitely large
> attmissingval fields.
>
> I think the simplest answer, and likely the only feasible one for
> the back branches, is to disable the attmissingval optimization
> if the proposed value is "too large". Not sure exactly where the
> threshold for that ought to be, but maybe BLCKSZ/8 could be a
> starting offer.
>
>
WFM. After all, it's taken several years for this to surface. Is this
report based on actual field experience?
cheers
andrew
--
Andrew Dunstan
EDB: https://www.enterprisedb.com