From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Gianni Ciolli <gianni(dot)ciolli(at)2ndquadrant(dot)it>
Cc: pgsql-hackers(at)postgresql(dot)org
Subject: Re: Proposed fix for NOTIFY performance degradation
Date: 2011-04-23 19:44:13
Message-ID: 4306.1303587853@sss.pgh.pa.us
Lists: pgsql-hackers
Gianni Ciolli <gianni(dot)ciolli(at)2ndquadrant(dot)it> writes:
> [ proposes lobotomization of duplicate-elimination behavior in NOTIFY ]
I think this change is likely to be penny-wise and pound-foolish.
The reason the duplicate check is in there is that things like triggers
may just do "NOTIFY my_table_changed". If the trigger is fired N times
in a transaction, and you don't have duplicate-elimination in NOTIFY,
then you get N duplicate messages to no purpose. And the expense of
actually sending (and processing) those messages is a lot higher than
suppressing them would be.
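The trigger scenario described above can be sketched as follows; the table and function names are illustrative, not taken from the original thread:

```sql
-- A trigger function that sends the same parameterless notification
-- on every row change of my_table.
CREATE OR REPLACE FUNCTION notify_my_table_changed() RETURNS trigger AS $$
BEGIN
    NOTIFY my_table_changed;  -- identical message each time the trigger fires
    RETURN NULL;              -- return value is ignored for AFTER triggers
END;
$$ LANGUAGE plpgsql;

CREATE TRIGGER my_table_notify
    AFTER INSERT OR UPDATE OR DELETE ON my_table
    FOR EACH ROW EXECUTE PROCEDURE notify_my_table_changed();
```

With this setup, updating N rows in one transaction fires the trigger N times, but the duplicate check in NOTIFY means listeners receive a single my_table_changed event per transaction rather than N identical ones.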
With the proposed change, the simplest case of just one such trigger is
still covered, but not two or more. I don't think this is good enough.
It's basically throwing the responsibility on the application programmer
to avoid duplicates --- and in most scenarios, it will cost much more
to suppress duplicates in PL code than to do it here.
When I started to read this patch I was hoping to see some clever scheme
for detecting dups at lower cost than what we currently do, like perhaps
hashing. I'm not impressed with just abandoning the responsibility,
though.
One idea we might consider is to offer two forms of NOTIFY, one that
suppresses dups and one that doesn't, so that in cases where the app
programmer knows his code doesn't generate (many) dups he can tell us
not to bother checking.
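The two-form idea might look something like the sketch below; this syntax is purely hypothetical, since the mail only proposes the concept, not a grammar:

```sql
-- Hypothetical syntax sketch, not actual PostgreSQL grammar:
NOTIFY my_channel;      -- default form: duplicates within the current
                        -- transaction are suppressed, as today
NOTIFY ALL my_channel;  -- imagined no-dedup form: skip the duplicate check
                        -- when the caller knows dups are rare or absent
```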
regards, tom lane