From: David Rowley <dgrowleyml(at)gmail(dot)com>
To: agharta82(at)gmail(dot)com
Cc: PostgreSQL General <pgsql-general(at)lists(dot)postgresql(dot)org>
Subject: Re: A way to optimize sql about the last temporary-related row
Date: 2024-06-27 16:16:24
Message-ID: CAApHDvpvnw2FKbPxrk58F6vo2-S9d_ooTkwgsRoWTVjmjkc2dg@mail.gmail.com
Lists: pgsql-general
On Fri, 28 Jun 2024, 3:20 am agharta82(at)gmail(dot)com, <agharta82(at)gmail(dot)com>
wrote:
>
> Now the query:
> explain (verbose, buffers, analyze)
> with last_table_ids as materialized(
> select xx from (
> select LAST_VALUE(pk_id) over (partition by integer_field_2 order by
> datetime_field_1 RANGE BETWEEN UNBOUNDED PRECEDING AND UNBOUNDED
> FOLLOWING) xx
> from test_table
> where integer_field_1 = 1
> and datetime_field_1 <= CURRENT_TIMESTAMP
> ) ww group by ww.xx
>
> ),
> last_row_per_ids as (
> select tt.* from last_table_ids lt
> inner join test_table tt on (tt.pk_id = lt.xx)
>
> )
>
> select * /* or count(*) */ from last_row_per_ids;
>
>
> This query, on my PC, takes 46 seconds!!!
>
(Away from laptop and using my phone)
Something like:
select distinct on (integer_field_2) *
from test_table
where integer_field_1 = 1
  and datetime_field_1 <= CURRENT_TIMESTAMP
order by integer_field_2, datetime_field_1 desc;
Might run a bit faster. However, if it's slow due to I/O, then it may not
help much. Your version took about 5 seconds on my phone and my version
ran in 1.5 seconds.
It's difficult for me to check from my phone that the two queries return
matching results. A quick scan of the first 10 or so records looked good.
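When you're back at a proper machine, something like the following (an
untested sketch typed from memory, reusing your column names; the group by
is dropped because EXCEPT already treats its inputs as sets) should return
zero rows if the two queries produce the same rows:

with orig as (
    -- your window-function version
    select tt.*
    from (
        select last_value(pk_id) over (partition by integer_field_2
                                       order by datetime_field_1
                                       range between unbounded preceding
                                             and unbounded following) as xx
        from test_table
        where integer_field_1 = 1
          and datetime_field_1 <= CURRENT_TIMESTAMP
    ) w
    inner join test_table tt on tt.pk_id = w.xx
),
rewritten as (
    -- the distinct on version from above
    select distinct on (integer_field_2) *
    from test_table
    where integer_field_1 = 1
      and datetime_field_1 <= CURRENT_TIMESTAMP
    order by integer_field_2, datetime_field_1 desc
)
(select * from orig except select * from rewritten)
union all
(select * from rewritten except select * from orig);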
If the updated query is still too slow with a cold cache, then faster
disks might be needed.
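If it does turn out to be I/O bound on this one table, an index shaped for
the distinct on scan could also be worth a try before looking at hardware.
This is only a guess without seeing your schema or the explain output, and
the index name below is just a placeholder:

-- partial index matching the integer_field_1 = 1 filter, with rows
-- already grouped by integer_field_2 and newest datetime_field_1 first
create index concurrently test_table_if2_dt1_idx
    on test_table (integer_field_2, datetime_field_1 desc)
    where integer_field_1 = 1;

If you also query other integer_field_1 values, a plain index on
(integer_field_1, integer_field_2, datetime_field_1 desc) may be the more
general choice.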
David
>