From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru>
Cc: pgsql-bugs(at)lists(dot)postgresql(dot)org
Subject: Re: Large objects and out-of-memory
Date: 2020-12-21 18:27:25
Message-ID: 543675.1608575245@sss.pgh.pa.us
Lists: pgsql-bugs
Konstantin Knizhnik <k(dot)knizhnik(at)postgrespro(dot)ru> writes:
> The following sequence of commands causes the backend's memory usage to exceed 10GB:
> INSERT INTO image1 SELECT lo_creat(-1) FROM generate_series(1,10000000);
> REASSIGN OWNED BY alice TO testlo;
[ shrug... ] You're asking to change the ownership of 10000000 objects.
This is not going to be a cheap operation. AFAIK it's not going to be
any more expensive than changing the ownership of 10000000 tables, or
any other kind of object.
The argument for allowing large objects to have per-object ownership and
permissions in the first place was that useful scenarios wouldn't have a
huge number of them (else you'd run out of disk space, if they're actually
"large"), so we needn't worry too much about the overhead.
We could possibly bound the amount of space used in the inval queue by
switching to an "invalidate all" approach once we got to an unreasonable
amount of space. But this will do nothing for the other costs involved,
and I'm not really sure it's worth adding complexity for.
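(As a client-side sketch of that idea: rather than one REASSIGN OWNED covering everything, the ownership changes could be issued in batches so that no single transaction accumulates an unbounded invalidation queue. This is only an illustration, assuming roles named alice and testlo as in your example; it relies on ALTER LARGE OBJECT and the pg_largeobject_metadata catalog, and on transaction control inside DO blocks, which requires v11 or later and must be run outside an explicit transaction block.)

```sql
-- Hypothetical batched alternative to a single REASSIGN OWNED:
-- reassign large objects owned by alice to testlo, committing
-- every 10000 objects so the inval queue for each transaction
-- stays bounded.  Role names are placeholders from the example.
DO $$
DECLARE
    lo_oid oid;
    n      bigint := 0;
BEGIN
    FOR lo_oid IN
        SELECT oid FROM pg_largeobject_metadata
        WHERE lomowner = 'alice'::regrole
    LOOP
        EXECUTE format('ALTER LARGE OBJECT %s OWNER TO testlo', lo_oid);
        n := n + 1;
        IF n % 10000 = 0 THEN
            COMMIT;   -- allowed in DO blocks since v11, outside a transaction block
        END IF;
    END LOOP;
END
$$;
```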
regards, tom lane