From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: David Rowley <dgrowleyml(at)gmail(dot)com>
Cc: gzh <gzhcoder(at)126(dot)com>, pgsql-general(at)lists(dot)postgresql(dot)org
Subject: Re: Different execution plan between PostgreSQL 8.4 and 12.11
Date: 2022-10-11 13:59:43
Message-ID: 299199.1665496783@sss.pgh.pa.us
Lists: pgsql-general
David Rowley <dgrowleyml(at)gmail(dot)com> writes:
> It feels like something is a bit lacking in our cost model here. I'm
> just not sure what that is.
The example you show is the same old problem that we've understood for
decades: for cost-estimation purposes, we assume that matching rows
are more or less evenly distributed in the table. Their actual
location doesn't matter that much if you're scanning the whole table;
but if you're hoping that a LIMIT will be able to stop after scanning
just a few rows, it does matter.
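
As a minimal sketch of that effect (the table, column names, and row counts below are hypothetical, not taken from this thread): if the matching rows happen to be clustered at the high end of the index order, a fast-start plan chosen on the even-distribution assumption has to wade through almost the whole table before the LIMIT is satisfied.

    -- Hypothetical setup: every row with flag = true sits at the high end of id.
    CREATE TABLE t AS
      SELECT g AS id, (g > 990000) AS flag
      FROM generate_series(1, 1000000) g;
    CREATE INDEX ON t (id);
    ANALYZE t;

    -- The planner estimates ~1% of rows match and assumes they're spread
    -- evenly, so it expects an ordered index scan to find a match after
    -- roughly 100 rows.  In reality the first match is at id = 990001,
    -- so the scan reads ~990000 rows before the LIMIT can stop it.
    EXPLAIN ANALYZE
    SELECT * FROM t WHERE flag ORDER BY id LIMIT 1;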
While it'd be pretty easy to insert some ad-hoc penalty into the
LIMIT estimation to reduce the chance of being fooled this way,
that would also discourage us from using fast-start plans when
they *do* help. So I don't see any easy fix.
regards, tom lane