From: Pavel Stehule <pavel(dot)stehule(at)gmail(dot)com>
To: ar(at)zeit(dot)io
Cc: "pgsql-general(at)postgresql(dot)org >> PG-General Mailing List" <pgsql-general(at)postgresql(dot)org>
Subject: Re: How to improve sql query to achieve the better plan
Date: 2018-09-30 16:45:22
Message-ID: CAFj8pRC2ie6FhKLcFMKurtV6337535gepNARnBgCj9J-O5WWYA@mail.gmail.com
Lists: pgsql-general
Hi
On Sun, Sep 30, 2018 at 18:23, Arup Rakshit <ar(at)zeit(dot)io> wrote:
> I have the below query which is taking 1873 ms. How can I improve this?
>
> explain analyze
> select
>     sum(coalesce(price_cents, 0)::bigint * coalesce(quantity, 0)
>         * (1 - coalesce(workitems.discount, 0) / 100)) as total_budget_cents,
>     sum(coalesce(price_cents, 0)::bigint * coalesce(quantity, 0)
>         * (1 - coalesce(workitems.discount, 0) / 100)
>         + coalesce(additional_cost_cents, 0)
>         - coalesce(cost_reduction_cents, 0)) as final_budget_cents,
>     projects.id as project_id
> from projects
> left join workitems on workitems.project_id = projects.id
> where workitems.deleted_at is null
> group by projects.id
> order by project_id asc;
>
>
> And the EXPLAIN output is:
>
> Sort  (cost=62851.33..62856.07 rows=1897 width=35) (actual time=1872.867..1873.003 rows=1229 loops=1)
>   Sort Key: projects.id
>   Sort Method: quicksort  Memory: 145kB
>   ->  HashAggregate  (cost=62719.59..62748.04 rows=1897 width=35) (actual time=1871.281..1872.104 rows=1229 loops=1)
>         Group Key: projects.id
>         ->  Hash Right Join  (cost=159.68..45386.32 rows=364911 width=35) (actual time=2.226..637.936 rows=365784 loops=1)
>               Hash Cond: (workitems.project_id = projects.id)
>               Filter: (workitems.deleted_at IS NULL)
>               Rows Removed by Filter: 257457
>               ->  Seq Scan on workitems  (cost=0.00..36655.53 rows=623353 width=43) (actual time=0.020..220.215 rows=623175 loops=1)
>               ->  Hash  (cost=135.97..135.97 rows=1897 width=16) (actual time=2.177..2.177 rows=1897 loops=1)
>                     Buckets: 2048  Batches: 1  Memory Usage: 105kB
>                     ->  Seq Scan on projects  (cost=0.00..135.97 rows=1897 width=16) (actual time=0.013..1.451 rows=1897 loops=1)
> Planning time: 2.775 ms
> Execution time: 1873.308 ms
>
>
Maybe a partial (conditional) index can help:

CREATE INDEX ON workitems (project_id) WHERE deleted_at IS NULL;
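If the planner still prefers a sequential scan, a covering variant of the same partial index might allow an index-only scan. This is only a sketch: the index name and the INCLUDE column list are assumptions based on the columns your aggregates read, and INCLUDE requires PostgreSQL 11 or later.

```sql
-- Sketch, untested against your schema: a partial index matching the
-- query's filter, with the aggregates' input columns included so an
-- index-only scan can avoid heap fetches (INCLUDE needs PostgreSQL 11+).
CREATE INDEX workitems_project_not_deleted_idx
    ON workitems (project_id)
    INCLUDE (price_cents, quantity, discount,
             additional_cost_cents, cost_reduction_cents)
    WHERE deleted_at IS NULL;
```

After creating it, re-run EXPLAIN ANALYZE and check whether the Seq Scan on workitems is replaced by an index scan on the new index.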
Regards
Pavel
> Projects table has the index:
>
> Indexes:
> "projects_pkey" PRIMARY KEY, btree (id)
> "index_projects_on_company_id" btree (company_id)
> "index_projects_on_deleted_at" btree (deleted_at)
> "index_projects_on_inspector_id" btree (inspector_id)
> "index_projects_on_managed_offline_by_user_id" btree
> (managed_offline_by_user_id)
> "index_projects_on_project_status_id" btree (project_status_id)
> "index_projects_on_shipyard_id" btree (shipyard_id)
> "index_projects_on_vessel_id" btree (vessel_id)
>
> Workitems table has the index:
>
> Indexes:
> "workitems_pkey" PRIMARY KEY, btree (id)
> "index_workitems_on_company_id" btree (company_id)
> "index_workitems_on_deleted_at" btree (deleted_at)
> "index_workitems_on_parent_workitem_id" btree (parent_workitem_id)
> "index_workitems_on_project_id" btree (project_id)
> "index_workitems_on_standard_workitem_id" btree (standard_workitem_id)
> "index_workitems_on_workitem_category_id" btree (workitem_category_id)
>
>
> Thanks,
>
> Arup Rakshit
> ar(at)zeit(dot)io
>