From: | "Daniel Verite" <daniel(at)manitou-mail(dot)org> |
---|---|
To: | "Andres Freund" <andres(at)anarazel(dot)de> |
Cc: | "Tom Lane" <tgl(at)sss(dot)pgh(dot)pa(dot)us>,"Adrian Klaver" <adrian(dot)klaver(at)aklaver(dot)com>,"Alexander Shutyaev" <shutyaev(at)gmail(dot)com>,"pgsql-general" <pgsql-general(at)postgresql(dot)org> |
Subject: | Re: pg_upgrade and wraparound |
Date: | 2018-06-12 11:32:05 |
Message-ID: | ed7d86a1-b907-4f53-9f6e-63482d2f2bac@manitou-mail.org |
Lists: | pgsql-general |
Andres Freund wrote:
> I'm not entirely clear why pg_restore appears to use a separate
> transaction for each large object, surely exacerbating the problem.
To make sure that per-object locks don't fill up the shared
lock table?
There can be hundreds of thousands of large objects.
If pg_restore had to restore N objects per transaction, how would
it compute an N that is large enough to be effective, yet small
enough not to exhaust the shared lock table?
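To illustrate the sizing problem: PostgreSQL allocates the shared lock
table as roughly max_locks_per_transaction * (max_connections +
max_prepared_transactions) slots, and a transaction restoring N large
objects holds at least N locks at once. A minimal sketch of that
arithmetic (the function names and the 50% headroom factor are
hypothetical, not anything pg_restore actually does):

```python
# Sketch: estimate how many large objects one transaction could restore
# before risking exhaustion of the shared lock table. The settings used
# below are PostgreSQL's defaults; real values come from the server.

def shared_lock_table_slots(max_locks_per_transaction,
                            max_connections,
                            max_prepared_transactions):
    # PostgreSQL sizes the shared lock table as approximately
    # max_locks_per_transaction * (max_connections + max_prepared_transactions).
    return max_locks_per_transaction * (max_connections + max_prepared_transactions)

def safe_batch_size(slots, headroom=0.5):
    # Hypothetical heuristic: leave headroom for locks taken by other
    # sessions, since the table is shared across the whole cluster.
    return int(slots * headroom)

slots = shared_lock_table_slots(64, 100, 0)  # defaults: 64, 100, 0
print(slots)                  # 6400 total slots
print(safe_batch_size(slots)) # 3200 objects per transaction, at most
```

The difficulty the paragraph above points at is visible here: a safe N
depends on server settings and on concurrent activity that pg_restore
cannot observe, which is one plausible reason for the conservative
one-object-per-transaction behavior.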
Best regards,
--
Daniel Vérité
PostgreSQL-powered mailer: http://www.manitou-mail.org
Twitter: @DanielVerite