From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: Ashutosh Bapat <ashutosh(dot)bapat(dot)oss(at)gmail(dot)com>
Cc: Amit Langote <amitlangote09(at)gmail(dot)com>, Ashutosh Bapat <ashutosh(dot)bapat(at)2ndquadrant(dot)com>, PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: making update/delete of inheritance trees scale better
Date: 2020-05-12 23:51:54
Message-ID: CAApHDvr5AX-rdZAx5yFr1nUF6-ZABCgKe2wy0Gn36Exik1nBHw@mail.gmail.com
Lists: pgsql-hackers
On Wed, 13 May 2020 at 00:54, Ashutosh Bapat
<ashutosh(dot)bapat(dot)oss(at)gmail(dot)com> wrote:
>
> On Mon, May 11, 2020 at 8:11 PM Amit Langote <amitlangote09(at)gmail(dot)com> wrote:
> > > Per-row overhead would be incurred for every row, whereas the plan-time
> > > overhead is one-time, or in the case of a prepared statement, almost free.
> > > So we need to compare the two, especially when there are 2000 partitions
> > > and all of them are being updated.
> >
> > I assume that such UPDATEs would be uncommon.
>
> Yes, 2000 partitions being updated would be rare. But many rows from
> the same partition being updated may not be that uncommon. We have to
> know how much that per-row overhead is and how many rows it takes for
> it to beat the planning-time overhead. If that number of rows is very
> large, we are good.
Rows from a non-parallel Append should arrive in subplan order, i.e.
all of one child's rows before any of the next child's. If you're
worried about the cost of finding the correct ResultRelInfo for the
tuple we just got, then we could just cache the tableoid and
ResultRelInfo of the last row; if this row's tableoid matches, we reuse
the same ResultRelInfo as last time. That saves the hash table lookup
in all cases apart from when the Append moves on to the next child
subplan. I'm not sure exactly how that'll fit in with the foreign
table discussion that's going on here, though.
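
To make that concrete, here's a rough stand-alone sketch of the
one-entry cache. ResultRelInfoStub, lookup_resultrel_by_oid() and the
rest are names I've made up for illustration; they're not the real
executor structures:

    #include <stddef.h>

    typedef unsigned int Oid;

    typedef struct ResultRelInfoStub
    {
        Oid         relid;      /* pg_class OID of the result relation */
        /* ... per-relation executor state would live here ... */
    } ResultRelInfoStub;

    /* stand-in for the per-row hash table lookup we'd like to avoid */
    static ResultRelInfoStub *
    lookup_resultrel_by_oid(ResultRelInfoStub *rels, int nrels, Oid relid)
    {
        for (int i = 0; i < nrels; i++)
        {
            if (rels[i].relid == relid)
                return &rels[i];
        }
        return NULL;
    }

    static Oid                last_oid = 0;    /* i.e. InvalidOid */
    static ResultRelInfoStub *last_rri = NULL;

    static ResultRelInfoStub *
    get_resultrel_for_row(ResultRelInfoStub *rels, int nrels, Oid tableoid)
    {
        /*
         * Rows from a non-parallel Append arrive grouped by child, so this
         * one-entry cache hits for every row except the first from each child.
         */
        if (last_rri != NULL && tableoid == last_oid)
            return last_rri;

        last_rri = lookup_resultrel_by_oid(rels, nrels, tableoid);
        last_oid = tableoid;
        return last_rri;
    }
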
Another option would be to not use tableoid at all and instead inject
an INT4 Const (0 to nsubplans - 1) into each subplan's targetlist that
serves as the index into an array of ResultRelInfos.
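
Roughly along these lines, reusing the stub type from the sketch above
(again, every name here is invented for illustration, not an existing
or proposed API):

    /* ResultRelInfoStub is the same stand-in type as in the sketch above */
    typedef struct ModifyTableStateStub
    {
        int                nrels;           /* number of non-pruned child subplans */
        ResultRelInfoStub *resultRelInfos;  /* one entry per child, in subplan order */
    } ModifyTableStateStub;

    static ResultRelInfoStub *
    get_resultrel_by_index(ModifyTableStateStub *mtstate, int subplan_index)
    {
        /*
         * subplan_index is the value of the INT4 Const the planner injected
         * into this child's targetlist, so per-row dispatch is a plain array
         * index: no hashing and no tableoid comparison at all.
         */
        return &mtstate->resultRelInfos[subplan_index];
    }
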
As for which ResultRelInfos to initialize, couldn't we just have the
planner generate an OidList of all the ones we could need, i.e. all
the non-pruned partitions? Perhaps we could even be pretty lazy about
building those ResultRelInfos during execution too. We'd need to grab
the locks first, but, without staring at the code, I doubt there's a
reason we'd need to build them all upfront. That would help in cases
where pruning didn't prune much, but, due to something else in the
WHERE clause, the results only come from a small subset of partitions.
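
For illustration, a minimal sketch of that lazy approach, assuming the
planner hands the executor a list of candidate relation OIDs.
LazyResultRel, build_resultrel() and so on are made-up names:

    #include <stdlib.h>

    typedef unsigned int Oid;

    typedef struct LazyResultRel
    {
        Oid   relid;    /* from the planner's list of non-pruned partitions */
        void *rri;      /* NULL until the first row for this relation shows up */
    } LazyResultRel;

    /* stand-in for the real per-relation setup work */
    static void *
    build_resultrel(Oid relid)
    {
        (void) relid;
        return malloc(1);
    }

    static void *
    get_or_build_resultrel(LazyResultRel *rels, int idx)
    {
        /* locks on all candidate relations are assumed to be taken up front */
        if (rels[idx].rri == NULL)
            rels[idx].rri = build_resultrel(rels[idx].relid);
        return rels[idx].rri;
    }
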
David