Re: SELECT is faster on SQL Server

From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: Frank Millman <frank(at)chagford(dot)com>
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: SELECT is faster on SQL Server
Date: 2021-03-19 11:10:43
Message-ID: CAFj8pRCFtpejSPYmoSAiCtGV3vUExV6gAU89XkFHSYw-dENirQ@mail.gmail.com
Lists: pgsql-general

On Fri, 19 Mar 2021 at 11:58, Frank Millman <frank(at)chagford(dot)com> wrote:

>
> On 2021-03-19 12:00 PM, Pavel Stehule wrote:
>
>
> In this query, the slowest operation is query planning. You are testing
> on almost empty tables, which has no practical value. You should test
> queries on tables of a size similar to production.
>
> Sorry about that. I hope this one is better. Same query, different data
> set.
>
>
>                                                                          QUERY PLAN
> ------------------------------------------------------------------------------------------------------------------------------------------------------------
>  Merge Left Join  (cost=1401.00..1401.12 rows=1 width=132) (actual time=3.595..3.611 rows=5 loops=1)
>    Merge Cond: (a.source_code_id = a_1.source_code_id)
>    ->  GroupAggregate  (cost=673.16..673.18 rows=1 width=36) (actual time=1.101..1.108 rows=5 loops=1)
>          Group Key: a.source_code_id
>          ->  Sort  (cost=673.16..673.16 rows=1 width=12) (actual time=1.092..1.093 rows=5 loops=1)
>                Sort Key: a.source_code_id
>                Sort Method: quicksort  Memory: 25kB
>                ->  Subquery Scan on a  (cost=670.67..673.15 rows=1 width=12) (actual time=1.008..1.086 rows=5 loops=1)
>                      Filter: (a.row_num = 1)
>                      Rows Removed by Filter: 59
>                      ->  WindowAgg  (cost=670.67..672.37 rows=62 width=36) (actual time=1.006..1.076 rows=64 loops=1)
>                            ->  Sort  (cost=670.67..670.82 rows=62 width=28) (actual time=0.996..1.004 rows=64 loops=1)
>                                  Sort Key: ar_totals.location_row_id, ar_totals.function_row_id, ar_totals.source_code_id, ar_totals.tran_date DESC
>                                  Sort Method: quicksort  Memory: 30kB
>                                  ->  Seq Scan on ar_totals  (cost=0.00..668.82 rows=62 width=28) (actual time=0.012..0.933 rows=64 loops=1)
>                                        Filter: ((tran_date <= '2015-04-30'::date) AND (deleted_id = 0) AND (ledger_row_id = 1))
>                                        Rows Removed by Filter: 840
>    ->  GroupAggregate  (cost=727.85..727.89 rows=2 width=36) (actual time=2.490..2.495 rows=5 loops=1)
>          Group Key: a_1.source_code_id
>          ->  Sort  (cost=727.85..727.85 rows=3 width=12) (actual time=2.485..2.485 rows=5 loops=1)
>                Sort Key: a_1.source_code_id
>                Sort Method: quicksort  Memory: 25kB
>                ->  Subquery Scan on a_1  (cost=700.70..727.82 rows=3 width=12) (actual time=1.684..2.479 rows=5 loops=1)
>                      Filter: (a_1.row_num = 1)
>                      Rows Removed by Filter: 674
>                      ->  WindowAgg  (cost=700.70..719.35 rows=678 width=36) (actual time=1.682..2.397 rows=679 loops=1)
>                            ->  Sort  (cost=700.70..702.40 rows=678 width=28) (actual time=1.676..1.758 rows=679 loops=1)
>                                  Sort Key: ar_totals_1.location_row_id, ar_totals_1.function_row_id, ar_totals_1.source_code_id, ar_totals_1.tran_date DESC
>                                  Sort Method: quicksort  Memory: 78kB
>                                  ->  Seq Scan on ar_totals ar_totals_1  (cost=0.00..668.82 rows=678 width=28) (actual time=0.007..0.836 rows=679 loops=1)
>                                        Filter: ((tran_date < '2015-09-01'::date) AND (deleted_id = 0) AND (ledger_row_id = 1))
>                                        Rows Removed by Filter: 225
>  Planning Time: 0.496 ms
>  Execution Time: 3.695 ms
> (34 rows)
>
>
The slowest operations here are the seq scan and the sort of ar_totals, but
at ~4 ms the query is still pretty fast. Maybe SQL Server can read the data
faster. Did you run VACUUM on your table?
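If not, a manual pass plus a quick look at the table's statistics can rule out bloat. A sketch using the standard stats view (the table name is taken from your plan):

```sql
-- Reclaim dead tuples and refresh planner statistics
VACUUM (ANALYZE, VERBOSE) ar_totals;

-- Check how many dead tuples remain and when vacuum last ran
SELECT relname, n_live_tup, n_dead_tup, last_vacuum, last_autovacuum
FROM pg_stat_user_tables
WHERE relname = 'ar_totals';
```

If n_dead_tup is large relative to n_live_tup, the seq scan is reading far more pages than the live row count suggests.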

MSSQL has a simpler on-disk data format, so maybe its seq scan can be faster.
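If the seq scan itself is the cost, a partial index matching the query's filters might let PostgreSQL avoid it entirely. An untested sketch (the index name is illustrative; the columns and predicate come from the filters in both branches of your plan):

```sql
-- Untested sketch: covers the ledger_row_id/tran_date filters,
-- restricted to non-deleted rows as in both branches of the plan
CREATE INDEX ar_totals_ledger_date_idx
    ON ar_totals (ledger_row_id, tran_date)
    WHERE deleted_id = 0;
```

With only ~900 rows the planner may still prefer a seq scan, but at production size this is worth comparing with EXPLAIN ANALYZE.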
