From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Filip Rembiałkowski <filip(dot)rembialkowski(at)gmail(dot)com>, Pgsql Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: proposal: make NOTIFY list de-duplication optional
Date: 2016-02-06 01:49:52
Message-ID: 7647.1454723392@sss.pgh.pa.us
Lists: pgsql-hackers
Robert Haas <robertmhaas(at)gmail(dot)com> writes:
> On Fri, Feb 5, 2016 at 10:17 AM, Filip Rembiałkowski
> <filip(dot)rembialkowski(at)gmail(dot)com> wrote:
>> - new GUC in "Statement Behaviour" section, notify_duplicate_removal
> I agree with what Merlin said about this:
> http://www.postgresql.org/message-id/CAHyXU0yoHe8Qc=yC10AHU1nFiA1tbHsg+35Ds-oEueUapo7t4g@mail.gmail.com
Yeah, I agree that a GUC for this is quite unappetizing.

One idea would be to build a hashtable to aid with duplicate detection
(perhaps only once the pending-notify list gets long).
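Purely as an illustration of that idea (this is standalone C, not actual async.c code; the hash function, entry layout, and names below are all placeholders), it could look something like:

/*
 * Standalone illustration only: O(1) expected-time duplicate detection
 * using a small chained hash table keyed on channel + payload.
 * All names and sizes here are placeholders, not backend code.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define NBUCKETS 256

typedef struct HashEntry
{
	char	   *channel;
	char	   *payload;
	struct HashEntry *next;
} HashEntry;

static HashEntry *buckets[NBUCKETS];

static unsigned int
hash_notify(const char *channel, const char *payload)
{
	unsigned int h = 5381;
	const char *s;

	for (s = channel; *s; s++)
		h = h * 33 + (unsigned char) *s;
	for (s = payload; *s; s++)
		h = h * 33 + (unsigned char) *s;
	return h % NBUCKETS;
}

/* Return 1 if (channel, payload) was already queued; else remember it. */
static int
notify_seen_before(const char *channel, const char *payload)
{
	unsigned int h = hash_notify(channel, payload);
	HashEntry  *e;

	for (e = buckets[h]; e != NULL; e = e->next)
		if (strcmp(e->channel, channel) == 0 &&
			strcmp(e->payload, payload) == 0)
			return 1;

	e = malloc(sizeof(HashEntry));
	e->channel = strdup(channel);
	e->payload = strdup(payload);
	e->next = buckets[h];
	buckets[h] = e;
	return 0;
}

int
main(void)
{
	printf("%d\n", notify_seen_before("ch", "hello"));	/* 0: first time */
	printf("%d\n", notify_seen_before("ch", "hello"));	/* 1: duplicate */
	printf("%d\n", notify_seen_before("ch", "world"));	/* 0: new payload */
	return 0;
}

The real thing would of course allocate in the right memory context and get reset at (sub)transaction boundaries; the point is just that the lookup cost stops depending on how many notifies are already pending.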
Another thought is that it's already the case that duplicate detection is
something of a "best effort" activity; note for example the comment in
AsyncExistsPendingNotify pointing out that we don't collapse duplicates
across subtransactions. Would it be acceptable to relax the standards
a bit further? For example, if we only checked for duplicates among the
last N notification list entries (for N say around 100), we'd probably
cover just about all the useful cases, and the runtime would stay linear.
The data structure isn't tremendously conducive to that, but it could be
done.
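Again only as a sketch under stated assumptions (standalone C, illustrative names, and a newest-first linked list rather than whatever representation we'd actually settle on), the bounded check could look like:

/*
 * Standalone sketch, not backend code: bound duplicate detection to the
 * N most recent pending notifications so per-NOTIFY cost stays linear.
 * The newest-first list and all names here are illustrative.
 */
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

#define MAX_CHECK 100			/* the "N" from the proposal */

typedef struct Pending
{
	char	   *channel;
	char	   *payload;
	struct Pending *older;		/* next-older pending entry */
} Pending;

static Pending *newest = NULL;

/*
 * Queue a notification unless it duplicates one of the MAX_CHECK most
 * recent entries.  Older duplicates can slip through, consistent with
 * the "best effort" semantics discussed above.
 */
static void
add_notify(const char *channel, const char *payload)
{
	Pending    *p = newest;
	int			checked = 0;

	for (; p != NULL && checked < MAX_CHECK; p = p->older, checked++)
	{
		if (strcmp(p->channel, channel) == 0 &&
			strcmp(p->payload, payload) == 0)
			return;				/* duplicate within the window: drop it */
	}

	p = malloc(sizeof(Pending));
	p->channel = strdup(channel);
	p->payload = strdup(payload);
	p->older = newest;
	newest = p;
}

int
main(void)
{
	add_notify("ch", "a");
	add_notify("ch", "a");		/* dropped: duplicate within the window */
	add_notify("ch", "b");

	for (Pending *p = newest; p != NULL; p = p->older)
		printf("%s: %s\n", p->channel, p->payload);
	return 0;
}

That keeps each NOTIFY at worst O(N) no matter how long the pending list grows, and still catches the rapid-fire-duplicates pattern that de-duplication is really there for; duplicates further back than N would simply get delivered, which the current "best effort" wording already permits.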
regards, tom lane