From: Justin Pryzby <pryzby(at)telsasoft(dot)com>
To: Paulo Silva <paulojjs(at)gmail(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Strange behavior of limit clause in complex query
Date: 2022-06-08 14:32:47
Message-ID: 20220608143247.GO29853@telsasoft.com
Lists: pgsql-performance
On Wed, Jun 08, 2022 at 09:44:08AM +0100, Paulo Silva wrote:
> But if I add an ORDER BY and a LIMIT something goes very wrong (Q2):
A somewhat common problem.
A common workaround is to change "ORDER BY a" to something like "ORDER BY a+0"
(if your framework will allow it).
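For illustration only (a sketch, with column names guessed from the plan below, not your actual query): wrapping the sort key in a trivial expression hides its match with the index's sort order, so the planner no longer considers a backward index scan just to satisfy the ORDER BY + LIMIT.

```sql
-- As written, the planner may walk ix_ng_content_date backward,
-- hoping to hit 5 qualifying rows quickly -- very slow when
-- matching rows are rare:
SELECT id FROM ng_content
WHERE id_status = 2
ORDER BY date_from DESC
LIMIT 5;

-- Workaround: "date_from + interval '0'" is an expression, so it no
-- longer matches the index ordering, and the planner falls back to
-- filtering first and sorting the (small) result:
SELECT id FROM ng_content
WHERE id_status = 2
ORDER BY date_from + interval '0' DESC
LIMIT 5;
```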
> An EXPLAIN (ANALYZE, BUFFERS) for Q2 returns this:
...
> -> Index Scan Backward using ix_ng_content_date on ng_content "Extent1" (cost=0.43..40616715.85 rows=2231839 width=12) (actual time=11027.808..183839.289 rows=5 loops=1)
> Filter: ((2 = id_status) AND (date_from <= LOCALTIMESTAMP) AND (date_to >= LOCALTIMESTAMP) AND (SubPlan 1))
> Rows Removed by Filter: 4685618
> Buffers: shared hit=15414533 read=564480 written=504
I'm not sure if it would help your original issue, but the rowcount estimate
here is bad - overestimating 2231839 rows instead of 5.
Could you try to determine which of those conditions (id_status, date_from,
date_to, or SubPlan) causes the mis-estimate, or if the estimate is only wrong
when they're combined?
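For instance, something like this (table and column names taken from your plan) would show the estimated vs. actual row count for each condition on its own, and then for all of them together:

```sql
-- Each condition in isolation:
EXPLAIN ANALYZE SELECT * FROM ng_content WHERE id_status = 2;
EXPLAIN ANALYZE SELECT * FROM ng_content WHERE date_from <= LOCALTIMESTAMP;
EXPLAIN ANALYZE SELECT * FROM ng_content WHERE date_to >= LOCALTIMESTAMP;

-- All combined, to see whether cross-column correlation is what
-- throws the estimate off:
EXPLAIN ANALYZE SELECT * FROM ng_content
WHERE id_status = 2
  AND date_from <= LOCALTIMESTAMP
  AND date_to >= LOCALTIMESTAMP;
```

If the individual estimates are fine but the combined one is bad, extended statistics (CREATE STATISTICS ... ON the correlated columns, followed by ANALYZE) may help the planner.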
--
Justin