Re: increasing effective_cache_size slows down join queries by a factor of 4000x

From: Michael Lewis <mlewis(at)entrata(dot)com>
To: A Shaposhnikov <artyom(at)gmail(dot)com>
Cc: Vijaykumar Jain <vijaykumarjain(dot)github(at)gmail(dot)com>, Tomas Vondra <tomas(dot)vondra(at)enterprisedb(dot)com>, pgsql-general <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: increasing effective_cache_size slows down join queries by a factor of 4000x
Date: 2022-02-03 22:30:22
Message-ID: CAHOFxGpDDSP2kXazrRjuvTWO8r=KK+474uZFL=MrQCjpFwdfhQ@mail.gmail.com
Lists: pgsql-general

I would guess that autovacuum is analyzing the table and changing the
statistics, which produces different row estimates and therefore different
plans. Unless you can get the estimates much more accurate, you won't get
far with expecting a stable plan that performs well.
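One way to check that theory is to see whether the plan flips line up with
autoanalyze runs on the tables involved (table names below are the ones from
your query; adjust as needed):

select relname, last_analyze, last_autoanalyze, n_mod_since_analyze
from pg_stat_user_tables
where relname in ('data', 'data_class', 'class');

If last_autoanalyze moves right around the time the plan changes, that's a
strong hint the shifting stats are the cause.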

How is data_class_pkey defined? If you run a query like this, how far off
are the estimates?

explain analyze
select d.time as time, d.id as id, a.query_symbol as query_symbol
from data as d
join data_class as dc on dc.data_id = d.id
join class as a on dc.class_id = a.id
where d.id > 205284974
order by d.id
limit 1000;

If you run 'analyze data( id );' and then run the query again, do you get a
better estimate? Have you tried adjusting default_statistics_target? Are you
running the default value of random_page_cost with SSDs?
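For reference, something like the below is what I have in mind (the target
of 1000 is just an example value to test with, not a recommendation):

-- session-level, to test the effect on the estimate
set default_statistics_target = 1000;
analyze data( id );

-- or permanently, for just the column driving the estimate
alter table data alter column id set statistics 1000;
analyze data( id );

-- with SSDs, the default random_page_cost of 4 is usually too high;
-- values around 1.1 are common
set random_page_cost = 1.1;

Re-run the explain analyze after each change and compare the estimated vs
actual row counts.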

I'm seeing Index Only Scan nodes, but a high number of heap fetches, so it
seems like you would benefit from a vacuum to update the
pg_class.relallvisible value.
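That is, something along these lines, then check how much of the table the
visibility map covers:

vacuum (analyze) data;

select relname, relpages, relallvisible
from pg_class
where relname in ('data', 'data_class');

When relallvisible is close to relpages, index-only scans can skip most
heap fetches.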
