From: Job <Job(at)colliniconsulting(dot)it>
To: "pgsql-general(at)postgresql(dot)org" <pgsql-general(at)postgresql(dot)org>
Subject: Aggregate query on large tables
Date: 2017-04-09 15:05:56
Message-ID: 88EF58F000EC4B4684700C2AA3A73D7A0817DC96CB17@W2008DC01.ColliniConsulting.lan
Lists: pgsql-general
Hi,
I have a table with about 400 million rows and I need to build some aggregate queries for reporting.
I noticed that query performance is slowing down, even though indexes are present.
The query is simple (this is just an example; my real table and columns have Italian names):
select a, sum(b) from table where a = x and c = y group by a
a is a varchar
b is an integer
x and y are the two values I use to filter the results (they match columns a and c; a concrete sketch follows below).
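For concreteness, here is a minimal sketch with made-up names (my real schema is different):

-- Hypothetical schema, only to show the shape of the data.
CREATE TABLE sales (
    product varchar NOT NULL,   -- plays the role of "a"
    amount  integer NOT NULL,   -- plays the role of "b"
    region  varchar NOT NULL    -- plays the role of "c"
);

-- The reporting query: filter on two columns, aggregate the integer one.
SELECT product, sum(amount)
FROM sales
WHERE product = 'widget'
  AND region = 'north'
GROUP BY product;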
I tried creating different indexes to speed up the query (the equivalent DDL is sketched just after the list):
index1 (a)
index2 (c)
index3 (a,c)
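In terms of the hypothetical schema above, the indexes would look like this:

CREATE INDEX index1 ON sales (product);           -- on "a" alone
CREATE INDEX index2 ON sales (region);            -- on "c" alone
CREATE INDEX index3 ON sales (product, region);   -- multi-column index on (a, c)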
I noticed, looking at the query plan, that the multi-column index is not used.
PostgreSQL 9.6.1 still uses a sequential scan instead of the indexes.
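This is roughly how I checked, again with the hypothetical names (not my real plan output):

EXPLAIN (ANALYZE, BUFFERS)
SELECT product, sum(amount)
FROM sales
WHERE product = 'widget'
  AND region = 'north'
GROUP BY product;
-- In my case the plan reports a sequential scan on the table
-- instead of an index scan on the multi-column index.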
I obtain significant improvements only if I create a materialized view with the aggregated data.
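Roughly this kind of view, with the hypothetical names once more:

CREATE MATERIALIZED VIEW sales_summary AS
SELECT product, region, sum(amount) AS total_amount
FROM sales
GROUP BY product, region;

-- It then has to be refreshed by hand (or by a scheduled job) after the data changes:
REFRESH MATERIALIZED VIEW sales_summary;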
But I would like to avoid, if possible, creating (and maintaining) the materialized view.
Thank you!
/F