| From: | tomohiro hiramitsu <hiramit(dot)tm(at)gmail(dot)com> |
|---|---|
| To: | Magnus Hagander <magnus(at)hagander(dot)net> |
| Cc: | Andres Freund <andres(at)anarazel(dot)de>, Fujii Masao <masao(dot)fujii(at)oss(dot)nttdata(dot)com>, skoposov(at)ed(dot)ac(dot)uk, PostgreSQL mailing lists <pgsql-bugs(at)lists(dot)postgresql(dot)org> |
| Subject: | Re: BUG #16722: PG hanging on COPY when table has close to 2^32 toasts in the table. |
| Date: | 2020-12-28 07:31:31 |
| Message-ID: | CAOR2cEYsErs93CKh02V63fBvF7A_tvYrHAv3FiQ6bVniJZkmbQ@mail.gmail.com |
| Lists: | pgsql-bugs |
Hi
> Oh meh. Yeah, I didn't think about the actual implementation of doing it :/
>
> That said, I'd really prefer to see *some* kind of counter that could
> get people noticing this a bit earlier, rather than just bugging them
> in the logfile once it's gone over a threshold. A statistics counter
> maybe, but I guess that'd have to be tracked at a per-table level to
> be really useful, which would make it potentially fairly expensive to
> keep around...
I think per-table statistics on OID assignment is a good idea, but how about
simply logging the event to notify users as a first step?
That would be easy to implement and helpful for users.
First of all, I would like to give users a way to know that OID generation is
taking a long time, so that they have a chance to deal with the problem with
statement_timeout etc.
This is only an interim measure, but I think it is better than the current behavior.
Since handling a wait event or per-table statistics is complicated, I first
wrote a patch that simply emits log messages, giving priority to avoiding a
state where the backend appears to hang without any log output.
* v01-0001-OID_log_output.patch
With this patch, GetNewOidWithIndex emits a log message when it has looped
too many times while searching for a new OID. In addition, if at least one
such message has been emitted, another message is logged once an OID is
finally assigned. (A rough sketch of the idea appears below, after these notes.)
* This patch does not raise an error no matter how many times the loop runs.
How long the OID search can be tolerated depends on the user, so I think
users should decide for themselves when to abort it, e.g. with
statement_timeout.
* The log output interval doubles each time ("*= 2", i.e. exponential
backoff), and the number of log messages is capped at 5.
The intent is to keep the number of messages small while still getting them
out before the statement is stopped by statement_timeout etc. The right
value for this threshold is certainly debatable.
Since there is no need to keep logging indefinitely, the upper limit is set
to 5 messages.
* The threshold for the first log message is set to 1 million iterations.
In my environment it takes about 4 seconds to loop 1 million times, so with
the doubling interval (1 + 2 + 4 + 8 + 16 million, roughly 31 million
iterations in total) the 5th message comes out after about 2 minutes. I
think this threshold is about right considering statement_timeout, but it
is debatable.
There is no need to keep logging for the whole duration of the OID search;
the first few messages and a final one are enough.
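
To make the intended behavior concrete, here is a minimal sketch of the kind
of change I have in mind in GetNewOidWithIndex() (src/backend/catalog/catalog.c).
This is not the attached patch itself: the constant names, message wording and
exact bookkeeping below are illustrative only.

```c
/*
 * Sketch only -- not the attached patch.  Constants and message wording are
 * illustrative.  Headers follow src/backend/catalog/catalog.c.
 */
#include "postgres.h"

#include "access/genam.h"
#include "access/htup.h"
#include "access/skey.h"
#include "access/stratnum.h"
#include "access/transam.h"
#include "catalog/catalog.h"
#include "miscadmin.h"
#include "utils/fmgroids.h"
#include "utils/rel.h"
#include "utils/snapmgr.h"

#define OID_LOG_FIRST_THRESHOLD	UINT64CONST(1000000)	/* first LOG after 1M probes */
#define OID_LOG_MAX_OUTPUTS		5	/* at most 5 "still searching" messages */

Oid
GetNewOidWithIndex(Relation relation, Oid indexId, AttrNumber oidcolumn)
{
	Oid			newOid;
	bool		collides;
	uint64		retries = 0;
	uint64		log_interval = OID_LOG_FIRST_THRESHOLD;
	uint64		next_log_at = OID_LOG_FIRST_THRESHOLD;
	int			logs_emitted = 0;

	/* Generate new OIDs until we find one not present in the relation. */
	do
	{
		SysScanDesc scan;
		ScanKeyData key;

		CHECK_FOR_INTERRUPTS();

		newOid = GetNewObjectId();

		/* Probe the OID index to see whether this candidate is already used. */
		ScanKeyInit(&key, oidcolumn,
					BTEqualStrategyNumber, F_OIDEQ,
					ObjectIdGetDatum(newOid));
		scan = systable_beginscan(relation, indexId, true,
								  SnapshotAny, 1, &key);
		collides = HeapTupleIsValid(systable_getnext(scan));
		systable_endscan(scan);

		retries++;

		/* Complain at most OID_LOG_MAX_OUTPUTS times, doubling the interval. */
		if (collides && logs_emitted < OID_LOG_MAX_OUTPUTS &&
			retries >= next_log_at)
		{
			ereport(LOG,
					(errmsg("still searching for an unused OID in relation \"%s\", %llu candidates checked",
							RelationGetRelationName(relation),
							(unsigned long long) retries)));
			logs_emitted++;
			log_interval *= 2;	/* exponential backoff of the log interval */
			next_log_at = retries + log_interval;
		}
	} while (collides);

	/* If we complained at least once, also report that we finally succeeded. */
	if (logs_emitted > 0)
		ereport(LOG,
				(errmsg("new OID assigned in relation \"%s\" after %llu retries",
						RelationGetRelationName(relation),
						(unsigned long long) retries)));

	return newOid;
}
```

Keeping the counter and the two constants local to the function keeps the
overhead in the common case to a couple of integer comparisons per iteration.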
I will add this patch to the commitfest. I look forward to your feedback
about the patch.
By the way, how about distributing OIDs so that GetNewOidWithIndex does not
take so long to find an available OID? For example, skip every other OID
when assigning one.
A side effect of this method is that the OID counter advances faster (and
the btree index may become fragmented sooner).
It may be better to apply the skipping only to TOAST chunk_ids, since the
method may have other side effects.
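
As a rough illustration only (no such helper exists today; toast_save_datum()
calls GetNewOidWithIndex() directly), the skipping could look something like
this hypothetical wrapper that burns one OID per TOAST value:

```c
/*
 * Hypothetical sketch of the "skip one OID" idea, limited to TOAST
 * chunk_ids.  This helper does not exist in PostgreSQL; it only shows the
 * shape of the idea discussed above.
 */
#include "postgres.h"

#include "access/transam.h"
#include "catalog/catalog.h"
#include "utils/rel.h"

static Oid
GetNewToastValueOid(Relation toastrel, Oid indexId, AttrNumber oidcolumn)
{
	/*
	 * Discard one OID before taking the next candidate, so assigned
	 * chunk_ids land on every other value.  The gaps make it cheaper to
	 * find a free OID after the counter wraps around, at the cost of
	 * consuming OIDs twice as fast and possibly fragmenting the toast
	 * index sooner, as noted above.
	 */
	(void) GetNewObjectId();

	return GetNewOidWithIndex(toastrel, indexId, oidcolumn);
}
```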
regards.
--
Tomohiro Hiramitsu
NTT Open Source Software Center
On Mon, Dec 21, 2020 at 16:44 Magnus Hagander <magnus(at)hagander(dot)net> wrote:
> On Wed, Nov 18, 2020 at 8:27 PM Andres Freund <andres(at)anarazel(dot)de> wrote:
> >
> > Hi,
> >
> > On 2020-11-18 11:57:05 +0100, Magnus Hagander wrote:
> > > This definitely sounds like something that's worth putting out as a
> > > wait event. Even before you start traversing millions of OIDs it might
> > > gradually start to show up, and being able to monitor that would
> > > definitely be useful.
> >
> > I don't think this is likely to work well as a wait event. All the index
> > traversals etc will do IO, acquire locks, etc, which will overwrite the
> > wait event and reset it to nothing once done.
>
> Oh meh. Yeah, I didn't think about the actual implementation of doing it :/
>
> That said, I'd really prefer to see *some* kind of counter that could
> get people noticing this a bit earlier, rather than just bugging them
> in the logfile once it's gone over a threshold. A statistics counter
> maybe, but I guess that'd have to be tracked at a per-table level to
> be really useful, which would make it potentially fairly expensive to
> keep around...
>
> --
> Magnus Hagander
> Me: https://www.hagander.net/
> Work: https://www.redpill-linpro.com/
>
| Attachment | Content-Type | Size |
|---|---|---|
| v1-0001-GetNewOidWithIndex_log_output.patch | application/octet-stream | 2.4 KB |