Re: Performance loss after upgrading from 12.15 to 17.2

From: Laurenz Albe <laurenz(dot)albe(at)cybertec(dot)at>
To: Tobias Orlamünde <postgres(at)linux-guru(dot)com>, pgsql-performance(at)lists(dot)postgresql(dot)org
Subject: Re: Performance loss after upgrading from 12.15 to 17.2
Date: 2025-02-03 15:59:21
Message-ID: 43ed5a72fdfdd44fa046d8911c496364a3eb9231.camel@cybertec.at
Lists: pgsql-performance

On Mon, 2025-02-03 at 15:11 +0100, Tobias Orlamünde wrote:
> We are currently in the process of upgrading from 12.15 with Timescale
> 2.11 to 17.2 with Timescale 2.17.2.
> We have already upgraded our pre-prod environment and noticed a remarkable
> performance regression compared to the not yet upgraded production
> environment. If we run the exact same query in our pre-prod env, the
> execution time increases from ~ 250 ms to over 377k ms.
> Changing random_page_cost from 1.1 to 4 does not help, but changing
> work_mem from 64 MB to 256 MB does, whereas any value lower than 256 MB
> does not. The prod server is configured with work_mem = 50 MB and
> executes the query in 13 ms.
>
> The hypertable has been created on the column received_time with Time as the dimension.
>
> In general, we see that if we lower the filter period on
> tick.received_time to ~ 14 days, the query is performant (on pre-prod).
> In prod, neither increasing nor decreasing this period significantly
> changes the execution time.
>
> My first assumption is that accessing the compressed chunks is eating up
> all the time (which can also be seen in the EXPLAIN output below). I tend
> to suspect the compression methods, which, IIRC, changed significantly
> from 12 to 17.
>
> Maybe someone could have a look at this and point me to the right spot
> for further examination, or even to a solution?

The difference is here (there are two such instances in each query plan):

Bad:

-> Seq Scan on compress_hyper_6_106_chunk (cost=0.00..6428297.17 rows=2518 width=321) (actual time=196292.784..196292.784 rows=0 loops=1)
Filter: ((_ts_meta_min_1 <= '2025-01-29 14:31:36'::timestamp without time zone) AND (_ts_meta_max_1 > '2025-01-01 14:31:36'::timestamp without time zone) AND ((xx_id)::text = 'XS2991917530'::text))
Rows Removed by Filter: 30492771
Buffers: shared read=5894720

Good:

-> Index Scan using compress_hyper_6_106_chunk_xx_id_xx_feed_id__ts_meta_min_1_idx on compress_hyper_6_106_chunk (cost=0.56..1571.33 rows=2518 width=321) (actual time=0.010..0.010 rows=0 loops=1)
Index Cond: (((xx_id)::text = 'XS2991917530'::text) AND (_ts_meta_min_1 <= '2025-01-29 14:31:36'::timestamp without time zone) AND (_ts_meta_max_1 > '2025-01-01 14:31:36'::timestamp without time zone))
Buffers: shared hit=4
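
Incidentally, a quick way to verify that this index also exists on the
pre-prod chunk (a plain catalog query; adjust the chunk name if it differs
on your side):

SELECT indexname, indexdef
FROM pg_indexes
WHERE tablename = 'compress_hyper_6_106_chunk';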

As a first measure, I would run

ANALYZE compress_hyper_6_106_chunk;

or analyze the partitioned table. It might well be that the statistics are off.
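
For example (I am assuming the hypertable is called "tick", going by the
column reference in your mail; substitute the real name):

ANALYZE tick;  -- with compression, this may not reach the compressed
               -- chunks, so analyzing the chunk directly, as above, is safest

-- verify that the statistics are now fresh:
SELECT relname, last_analyze, last_autoanalyze
FROM pg_stat_user_tables
WHERE relname = 'compress_hyper_6_106_chunk';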

If that doesn't help, it would be interesting to run the query with the low
"work_mem" setting, but with "enable_seqscan = off" (see the sketch below).

- Does PostgreSQL choose the correct index then?
- What are the cost estimates for the index scan?
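
Something along these lines (a sketch only; the SELECT stands for your
original query):

SET enable_seqscan = off;
SET work_mem = '64MB';
EXPLAIN (ANALYZE, BUFFERS)
SELECT ...;  -- your original query here
RESET enable_seqscan;
RESET work_mem;

Comparing the estimated cost of the index scan in that plan with the cost
of the sequential scan above should show how far off the planner's
estimates are.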

Yours,
Laurenz Albe
--
Cybertec | https://www.cybertec-postgresql.com
