Re: Advice needed: query performance deteriorates by 2000% within 1 minute

From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Peter Adlersburg <peter(dot)adlersburg(at)gmail(dot)com>
Cc: pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Advice needed: query performance deteriorates by 2000% within 1 minute
Date: 2022-02-24 16:10:51
Message-ID: 1351442.1645719051@sss.pgh.pa.us
Lists: pgsql-performance

Peter Adlersburg <peter(dot)adlersburg(at)gmail(dot)com> writes:
> Limit  (cost=0.00..804.97 rows=10 width=22) (actual time=23970.845..25588.432 rows=1 loops=1)
>   ->  Seq Scan on "order"  (cost=0.00..3863.86 rows=48 width=22) (actual time=23970.843..25588.429 rows=1 loops=1)
>         Filter: (jsonb_to_tsvector('english'::regconfig, content, '["all"]'::jsonb) @@ '''1.20709841'''::tsquery)
>         Rows Removed by Filter: 9652
> Planning Time: 0.430 ms
> Execution Time: 25588.448 ms

I think the expense here comes from re-executing jsonb_to_tsvector
a lot of times. By default that's estimated as 100 times more expensive
than a simple function (such as addition), but these results make it
seem like that's an understatement. You might try something like

alter function jsonb_to_tsvector(regconfig, jsonb, jsonb) cost 1000;

to further discourage the planner from picking this plan shape.
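
You can see what the planner is currently assuming for that function by
looking at pg_proc, for instance:

select proname, procost from pg_proc where proname = 'jsonb_to_tsvector';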

Possibly the cost estimate for ts_match_vq (the function underlying
this variant of @@) needs to be bumped up as well.
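
I haven't checked whether that's really needed here, but if you want to
experiment it's the same kind of tweak; the value below is only a
starting point to play with, not a recommendation:

alter function ts_match_vq(tsvector, tsquery) cost 100;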

(Bear in mind that pg_dump will not propagate such hacks on
system-defined objects, so you'll need a note to reapply
any such changes after dump/reload or pg_upgrade.)
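
One way to handle that is to keep the tweaks in a small script that you
re-run after each restore, along the lines of (with whatever cost values
you end up settling on):

-- reapply planner cost tweaks; not preserved by pg_dump/pg_upgrade
alter function jsonb_to_tsvector(regconfig, jsonb, jsonb) cost 1000;
alter function ts_match_vq(tsvector, tsquery) cost 100;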

regards, tom lane
