From: Noah Misch <noah(at)leadboat(dot)com>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Dan Robinson <dan(at)drob(dot)us>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Validating CHECK constraints with SPI
Date: 2014-10-30 03:48:40
Message-ID: 20141030034840.GA407075@tornado.leadboat.com
Lists: pgsql-hackers
On Wed, Oct 29, 2014 at 10:24:26AM -0400, Tom Lane wrote:
> Dan Robinson <dan(at)drob(dot)us> writes:
> > Since the table is locked to updates while the constraint is validating,
> > this means you have to jump through hoops if you want to add a CHECK
> > constraint to a large table in a production setting. This validation could
> > be considerably faster if we enabled it to use relevant indexes or other
> > constraints. Is there a reason not to make an SPI call here, instead?
>
> This seems like a lot of work for a gain that would only occur sometimes,
> ie if the CHECK happened to correspond to a usable index condition.
> I realize your point is that a clever DBA might intentionally create
> such an index, but I don't think that people would bother in practice.
Consider the case of adding a NOT NULL constraint. Most single-column btree
indexes can quickly determine whether the column contains nulls, so the DBA
may well get the benefit on the strength of an already-present index.
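To illustrate what I mean, here is a minimal sketch; the table and column names ("accounts", "email") are made up for the example. A plain single-column btree index stores null entries, so a probe for nulls can stop at the first matching index entry instead of scanning the whole table:

    -- assuming an ordinary btree index on the column already exists
    CREATE INDEX accounts_email_idx ON accounts (email);

    -- an IS NULL probe like this can be satisfied from the index
    -- and stop as soon as one null is found
    SELECT 1 FROM accounts WHERE email IS NULL LIMIT 1;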
> It's not any simpler, nor faster, than using the existing approach with
> ALTER TABLE ADD CONSTRAINT NOT VALID followed by ALTER TABLE VALIDATE
> CONSTRAINT.
There will be no point in building a throwaway index for this, agreed.
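For reference, the two-step approach you mention looks like this (constraint and table names are again hypothetical):

    -- first step records the constraint without scanning existing rows
    ALTER TABLE accounts ADD CONSTRAINT accounts_email_not_null
        CHECK (email IS NOT NULL) NOT VALID;

    -- second step scans the table and validates existing rows later,
    -- outside the critical path of the schema change
    ALTER TABLE accounts VALIDATE CONSTRAINT accounts_email_not_null;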