From: Craig Ringer <craig(at)2ndquadrant(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Rajeev rastogi <rajeev(dot)rastogi(at)huawei(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Improving executor performance
Date: 2016-06-29 22:45:36
Message-ID: CAMsr+YHMOgZMas=Y1H9gKQ90rsLBgjeE8WcFKAHbYm-PtXmw-A@mail.gmail.com
Lists: pgsql-hackers
On 30 June 2016 at 02:32, Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,
>
> On 2016-06-28 10:01:28 +0000, Rajeev rastogi wrote:
> > > 3) Our 1-by-1 tuple flow in the executor has two major issues:
> >
> > Agreed. In order to tackle this, IMHO we should:
> > 1. Make the processing data-centric instead of operator-centric.
> > 2. Instead of each operator pulling tuples from its immediate child,
> > an operator can push tuples to its parent. It can keep pushing until
> > it reaches an operator that cannot proceed without results from
> > another operator.
> > More details in another thread:
> > https://www.postgresql.org/message-id/BF2827DCCE55594C8D7A8F7FFD3AB77159A9B904@szxeml521-mbs.china.huawei.com
>
> I doubt that that's going to be OK in the generic case (memory usage,
> materializing too much, "bushy plans", merge joins).
Yeah. You'd likely end up with Haskell-esque predictability of memory
use. Given how limited and flawed work_mem handling etc. already is,
that doesn't sound like an appealing direction to go in. Not without a
bunch of infrastructure to manage queue sizes and force work into
batches to limit memory use, anyway.
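
To make "force work into batches" concrete, here's a toy C sketch of
the general idea. This is not PostgreSQL executor code, and every name
in it (PushBatch, filter_consume, scan_push) is made up for
illustration: a scan pushes tuples into a fixed-size batch owned by its
parent and is forced to let the parent drain it whenever it fills, so
the footprint stays bounded no matter how big the input is.

/*
 * Toy push-model sketch (hypothetical, not PostgreSQL code): the child
 * pushes tuples into a fixed-size batch; the parent is forced to
 * consume the batch whenever it fills, keeping memory use bounded.
 */
#include <stdio.h>

#define BATCH_SIZE 4            /* stand-in for a work_mem-style cap */

typedef struct PushBatch
{
    int         values[BATCH_SIZE];
    int         nused;
} PushBatch;

/* Parent operator: a trivial filter that emits values above a threshold. */
static void
filter_consume(PushBatch *batch, int threshold)
{
    for (int i = 0; i < batch->nused; i++)
        if (batch->values[i] > threshold)
            printf("passed: %d\n", batch->values[i]);
    batch->nused = 0;           /* batch drained; child may refill it */
}

/* Child operator: a scan that pushes tuples upward, flushing on overflow. */
static void
scan_push(const int *input, int ninput, PushBatch *batch, int threshold)
{
    for (int i = 0; i < ninput; i++)
    {
        if (batch->nused == BATCH_SIZE)
            filter_consume(batch, threshold);   /* force work into batches */
        batch->values[batch->nused++] = input[i];
    }
    if (batch->nused > 0)
        filter_consume(batch, threshold);       /* final partial batch */
}

int
main(void)
{
    int         input[] = {5, 12, 7, 20, 3, 18, 9, 25, 1};
    PushBatch   batch = {.nused = 0};

    scan_push(input, (int) (sizeof(input) / sizeof(input[0])), &batch, 8);
    return 0;
}

Anything queue-like in a real push model would need some cap of that
sort per node, plus spill-to-disk behaviour, before it could honour
work_mem at all.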
--
Craig Ringer http://www.2ndQuadrant.com/
PostgreSQL Development, 24x7 Support, Training & Services