From: decibel <decibel(at)decibel(dot)org>
To: Pg Hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Multi-pass planner
Date: 2009-08-20 15:15:11
Message-ID: AD81F44E-0BBE-4335-9CBC-1C8A6A7E81D4@decibel.org
Lists: pgsql-hackers
There have been a number of planner improvement ideas that have been
thrown out because of the overhead they would add to the planning
process, specifically for queries that would otherwise be quite fast.
Other databases seem to have dealt with this by creating plan caches
(which might be worth doing for Postgres), but what if we could
determine when we need a fast planning time vs. when it won't matter?
What I'm thinking is that on the first pass through the planner, we
only estimate things that we can do quickly. If the plan that falls
out of that is below a certain cost/row threshold, we just run with
that plan. If not, we go back and do a more detailed estimate.
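To make that control flow concrete, here is a toy C sketch of the
two-pass idea. It is not actual PostgreSQL planner code; every name in
it (QueryInfo, Plan, plan_quick, plan_exhaustive, the threshold value)
is a hypothetical stand-in used only to illustrate the shape of the
logic.

/*
 * Toy sketch of a two-pass planner: do a cheap first pass, and only
 * spend more planning effort if the resulting plan looks expensive.
 * All types, functions, and costs here are made up for illustration.
 */
#include <stdio.h>

typedef struct { int nrelations; } QueryInfo;           /* stand-in parsed query */
typedef struct { const char *kind; double cost; } Plan; /* stand-in plan tree */

/* Cost below which the quick plan is considered good enough (made-up value). */
#define QUICK_PLAN_COST_THRESHOLD 10000.0

/* First pass: only the estimates we can do quickly (toy cost model). */
static Plan plan_quick(const QueryInfo *q)
{
    Plan p = { "quick", q->nrelations * 1000.0 };
    return p;
}

/* Second pass: more detailed (and more expensive) estimation. */
static Plan plan_exhaustive(const QueryInfo *q)
{
    Plan p = { "exhaustive", q->nrelations * 800.0 };
    return p;
}

static Plan plan_query(const QueryInfo *q)
{
    Plan p = plan_quick(q);

    /* Cheap pass already produced a low-cost plan: just run with it. */
    if (p.cost < QUICK_PLAN_COST_THRESHOLD)
        return p;

    /* Otherwise go back and do the more detailed estimate. */
    return plan_exhaustive(q);
}

int main(void)
{
    QueryInfo small = { 2 }, large = { 20 };
    Plan a = plan_query(&small);
    Plan b = plan_query(&large);

    printf("small query -> %s plan (cost %.0f)\n", a.kind, a.cost);
    printf("large query -> %s plan (cost %.0f)\n", b.kind, b.cost);
    return 0;
}

In this sketch the small query stays with the quick plan while the
large one triggers the second pass; the open questions would be how to
pick the threshold and how much the detailed pass is allowed to cost.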
--
Decibel!, aka Jim C. Nasby, Database Architect decibel(at)decibel(dot)org
Give your computer some brain candy! www.distributed.net Team #1828