From: hubert depesz lubaczewski <depesz(at)depesz(dot)com>
To: Frank Millman <frank(at)chagford(dot)com>
Cc: Postgres General <pgsql-general(at)postgresql(dot)org>
Subject: Re: SELECT is faster on SQL Server
Date: 2021-03-19 11:04:38
Message-ID: 20210319110438.GA21117@depesz.com
Lists: pgsql-general
On Fri, Mar 19, 2021 at 12:58:10PM +0200, Frank Millman wrote:
> On 2021-03-19 12:00 PM, Pavel Stehule wrote:
>
> In this query the most slow operation is query planning. You try to do tests on almost empty tables. This has no practical sense.
> You should test queries on tables with size similar to production size.
>
> Sorry about that. I hope this one is better. Same query, different data set.
For starters, I'm not really sure it makes sense to optimize a query
that runs in 3.5 milliseconds!
Having said that, after putting the plan on explain.depesz.com, I got:
https://explain.depesz.com/s/xZel
Which shows that ~ 50% of time was spent in scan on ar_totals and
sorting it.
You seem to have some really weird indexes created on ar_totals (a mix
of NULLS FIRST/LAST orderings).
Why don't you start with something simple:
create index q on ar_totals (ledger_row_id, tran_date) where deleted_id = 0;
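Concretely, that suggestion could be tried like this (the index name and the sample query below are illustrative, not from Frank's actual workload; the column names come from the plan above):

```sql
-- A partial index covering only live rows (deleted_id = 0),
-- matching the columns the query filters and sorts on.
CREATE INDEX ar_totals_ledger_date
    ON ar_totals (ledger_row_id, tran_date)
    WHERE deleted_id = 0;

-- Re-run the query under EXPLAIN (ANALYZE, BUFFERS) to confirm the
-- planner now uses an index scan instead of a seq scan plus sort.
-- (The WHERE clause here is a sketch; substitute the real query.)
EXPLAIN (ANALYZE, BUFFERS)
SELECT *
  FROM ar_totals
 WHERE deleted_id = 0
   AND ledger_row_id = 1
 ORDER BY tran_date;
```

Because the index's WHERE clause matches the query's filter, the planner can use the smaller partial index and read rows already ordered by (ledger_row_id, tran_date), avoiding the separate sort step that the plan showed.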
But, again - either you're overthinking performance of a query that can
run over 200 times per second on a single core, or you're testing it
with different data than the one that is really a problem.
Best regards,
depesz