Re: BUG #14808: V10-beta4, backend abort

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com>
Cc: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>, Andrew Gierth <rhodiumtoad(at)postgresql(dot)org>, PostgreSQL mailing lists <pgsql-bugs(at)postgresql(dot)org>, Kevin Grittner <kgrittn(at)gmail(dot)com>
Subject: Re: BUG #14808: V10-beta4, backend abort
Date: 2017-09-13 23:54:48
Message-ID: 1683.1505346888@sss.pgh.pa.us
Lists: pgsql-bugs

Thomas Munro <thomas(dot)munro(at)enterprisedb(dot)com> writes:
> Incidentally, understanding that made me wonder why we don't have a
> binary chunk-oriented in-memory-up-to-some-size-then-spill-to-disk
> spooling mechanism that could be used for the trigger queue itself
> (which currently doesn't know how to spill to disk and therefore can
> take your server out), including holding these tuple images directly
> (instead of spilling just the tuples in synchronised order with the
> in-memory trigger queue).

The past discussions about spilling the trigger queue have generally
concluded that by the time your event list was long enough to cause
serious pain, you already had a query that was never gonna complete.
That may be getting less true as time goes on, but I'm not sure ---
seems like RAM capacity is growing faster than CPU speed. Anyway,
that's why it never got done.

Given the addition of transition tables, I suspect there will be
even less motivation to fix it: the right thing to do with mass
updates will be to use a transition table with an after-statement trigger, and
that fixes it by putting the bulk data into a spillable tuplestore.
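For concreteness, a minimal sketch of that pattern in v10 syntax (the
table, function, and trigger names here are hypothetical):

-- Statement-level AFTER trigger using a transition table; the bulk
-- row images land in a tuplestore, which can spill to disk, rather
-- than in the in-memory per-row trigger queue.
CREATE TABLE accounts (id int PRIMARY KEY, balance numeric);

CREATE FUNCTION audit_mass_update() RETURNS trigger
LANGUAGE plpgsql AS $$
BEGIN
    -- new_rows is the transition table declared below.
    RAISE NOTICE 'updated % rows',
        (SELECT count(*) FROM new_rows);
    RETURN NULL;  -- return value is ignored for statement triggers
END;
$$;

CREATE TRIGGER accounts_audit
    AFTER UPDATE ON accounts
    REFERENCING NEW TABLE AS new_rows
    FOR EACH STATEMENT
    EXECUTE PROCEDURE audit_mass_update();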

regards, tom lane
