From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Shaun Thomas <sthomas(at)townnews(dot)com>
Cc: pgsql-admin(at)postgresql(dot)org
Subject: Re: Cost limit.
Date: 2001-05-18 16:21:47
Message-ID: 19867.990202907@sss.pgh.pa.us
Lists: pgsql-admin
Shaun Thomas <sthomas(at)townnews(dot)com> writes:
> I can't seem to find it in the docs, so I'll ask here. Is there a
> way in postgres to impose a limit on query costs? I'd noticed some
> horribly disfigured queries hitting my poor database from one of our
> developers, and corrected him. But many of our users are not so
> easily contacted. What I want to know is if there's a configuration
> parameter or source patch that will allow me to disallow a query
> execution if the pre-execution cost estimation is too high.
Given the inherent inaccuracy of the cost estimates, I'd be real
hesitant to rely on them to suppress overly-expensive queries.
What might make sense is a time limit on execution (after x amount
of time, give up and cancel the query). No one's put such a thing
into the backend AFAIK. Note you could implement such a time limit
purely on the client side, which might be an easier and more flexible
way to go.
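The client-side time limit Tom suggests can be sketched in a few lines: arm a timer before issuing the blocking query call, and have the timer send a cancel request if the query outlives the limit. In the sketch below, `run_query` and `cancel_query` are hypothetical stand-ins for a driver's blocking execute call and its thread-safe cancel-request call (the role libpq's cancel request plays); the demo "query" just polls a flag instead of talking to a real backend.

```python
import threading
import time

def run_with_timeout(run_query, cancel_query, limit_seconds):
    # Arm a timer that fires cancel_query() if the query runs too long.
    # run_query() is the blocking call; cancel_query() must be safe to
    # invoke from another thread (as a driver's cancel request is).
    timer = threading.Timer(limit_seconds, cancel_query)
    timer.start()
    try:
        return run_query()
    finally:
        timer.cancel()  # harmless no-op if the timer already fired

# Demo: a stand-in "query" that polls a cancel flag.
cancelled = threading.Event()

def slow_query():
    for _ in range(100):          # ~2 s of "work" if never cancelled
        if cancelled.is_set():
            return "cancelled"
        time.sleep(0.02)
    return "done"

result = run_with_timeout(slow_query, cancelled.set, 0.1)
```

A nice property of doing this in the client is flexibility: each application (or even each query) can pick its own limit, with no backend changes required.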
regards, tom lane