From: "Jim C. Nasby" <jnasby(at)pervasive(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Enzo Daddario <enzo(at)pienetworks(dot)com>, PGADMIN <pgsql-admin(at)postgresql(dot)org>
Subject: Re: Constraint question
Date: 2006-05-15 15:53:03
Message-ID: 20060515155303.GL26212@pervasive.com
Lists: pgsql-admin
On Mon, May 15, 2006 at 11:17:31AM -0400, Tom Lane wrote:
> "Jim C. Nasby" <jnasby(at)pervasive(dot)com> writes:
> > On Sun, May 14, 2006 at 10:50:50PM -0400, Tom Lane wrote:
> >> Enzo Daddario <enzo(at)pienetworks(dot)com> writes:
> >>> I am now required to remove thousands of obsolete records from table
> >>> "X" and even though I have removed all related rows in all related
> >>> tables, the deletion of rows in table "X" is taking WAY TOO LONG.
> >>
> >> You need to put indexes on the referencing columns. Then start a fresh
> >> session for doing the actual deletes (FK check plans are usually cached
> >> per-session).
>
> > Would SET CONSTRAINTS ... DEFERRED not help, or does it still use the
> > same machinery to do the checking, regardless of how much data there is
> > to check?
>
> It's the same machinery. We've speculated about having the thing switch
> over to doing a full-table recheck (comparable to what ADD CONSTRAINT
> FOREIGN KEY does) once the number of pending individual row checks
> exceeds some threshold, but that's not done yet --- and it's not clear
> how to do it in a concurrent fashion, so don't hold your breath ...
I'm assuming that in order for deferred constraints to work we must have
a list of what's changed... couldn't that list be fed into an
appropriate query and then planned accordingly? That would let the
backend pick the best validation method available: merge join, hash
join, etc. If the number of changed rows is small, the planner would
probably pick a nested loop (I'm assuming that's roughly how the
IMMEDIATE case works); if the number of changed rows is large it'd favor
something else, but at least it wouldn't re-check the entire source
table.
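For the archives, Tom's advice upthread can be sketched in SQL. The
schema here is hypothetical (a child table "y" whose column "x_id"
references "x"); the point is that PostgreSQL indexes the referenced
(primary key) side of an FK automatically but not the referencing side,
so the per-row FK triggers fired by a mass DELETE do a seqscan of the
child table unless you add that index yourself:

```sql
-- Hypothetical schema: y.x_id REFERENCES x(id).
-- The PK on x(id) is indexed automatically; the referencing
-- column y.x_id is NOT -- create that index by hand:
CREATE INDEX y_x_id_idx ON y (x_id);

-- Then run the deletes from a *fresh* session, since the FK
-- check plans are usually cached per-session and an old session
-- may keep using the pre-index plan.
-- (is_obsolete is a stand-in for whatever identifies the rows.)
DELETE FROM x WHERE is_obsolete;
```

Deferring the constraint only changes *when* the checks fire, not how;
per Tom's reply it's the same per-row machinery at COMMIT:

```sql
BEGIN;
SET CONSTRAINTS ALL DEFERRED;  -- only affects DEFERRABLE constraints
DELETE FROM x WHERE is_obsolete;
COMMIT;  -- pending row checks run here, one by one
```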
--
Jim C. Nasby, Sr. Engineering Consultant jnasby(at)pervasive(dot)com
Pervasive Software http://pervasive.com work: 512-231-6117
vcard: http://jim.nasby.net/pervasive.vcf cell: 512-569-9461