From: Greg Stark <stark(at)mit(dot)edu>
To: Simon Riggs <simon(at)2ndquadrant(dot)com>
Cc: PostgreSQL-development <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Optimising Foreign Key checks
Date: 2013-06-05 09:37:51
Message-ID: CAM-w4HPsgKt9eUag8yB44sPnSwS2rUtyN_pF-212e5QxP-2j0A@mail.gmail.com
Lists: pgsql-hackers
On Sat, Jun 1, 2013 at 9:41 AM, Simon Riggs <simon(at)2ndquadrant(dot)com> wrote:
> COMMIT;
> The inserts into order_line repeatedly execute checks against the same
> ordid. Deferring and then de-duplicating the checks would optimise the
> transaction.
>
> Proposal: De-duplicate multiple checks against same value. This would
> be implemented by keeping a hash of rows that we had already either
> inserted and/or locked as the transaction progresses, so we can use
> the hash to avoid queuing up after triggers.
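
To make the quoted scenario concrete, here is a minimal sketch; the schema
is invented for illustration (only order_line and ordid come from Simon's
mail), and the constraint is declared deferrable so the checks pile up
until commit:

    CREATE TABLE orders (
        ordid int PRIMARY KEY
    );
    CREATE TABLE order_line (
        ordid  int REFERENCES orders (ordid) DEFERRABLE INITIALLY DEFERRED,
        lineno int
    );

    BEGIN;
    INSERT INTO orders VALUES (1);
    INSERT INTO order_line VALUES (1, 1);  -- queues an FK check against ordid = 1
    INSERT INTO order_line VALUES (1, 2);  -- queues an identical check
    INSERT INTO order_line VALUES (1, 3);  -- and another
    COMMIT;  -- three checks fire, each proving the same thing about ordid = 1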
FWIW, the reason we don't do that now is that the rows might later be
deleted within the same transaction (or even the same statement, I
think). If they are, the trigger needs to be skipped for that row but
still needs to fire for the other rows. So you need some kind of
book-keeping to keep track of that. The easiest way was just to do
the check independently for each row. I think there's a comment about
this in the code.
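
Here is a sketch of that corner case, reusing the invented schema above.
Today this transaction correctly fails at COMMIT because the surviving row
(99, 2) has no parent; a naive de-duplication scheme that skipped the
second check could let it wrongly succeed:

    BEGIN;
    INSERT INTO order_line VALUES (99, 1);  -- queues a deferred check for ordid = 99 (no such order)
    INSERT INTO order_line VALUES (99, 2);  -- naive de-duplication: "99 already queued", so queue nothing
    DELETE FROM order_line WHERE ordid = 99 AND lineno = 1;
        -- the row that carried the only queued check is gone, so its check
        -- is rightly skipped; but (99, 2) still violates the FK ...
    COMMIT;  -- ... and this must still fail, hence the per-row book-keeping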
I think you're right that this should be optimized: in the vast
majority of cases the rows never get deleted, so we're currently
doing lots of redundant checks. But you need to make sure you don't
break the unusual case entirely.
--
greg