improving speed of query that uses a multi-column "filter" ?

From: Jonathan Vanasco <postgres(at)2xlp(dot)com>
To: "pgsql-general(at)postgresql(dot)org general" <pgsql-general(at)postgresql(dot)org>
Subject: improving speed of query that uses a multi-column "filter" ?
Date: 2014-09-30 23:50:20
Message-ID: 6F5A841A-F077-4F55-8386-25EBD2B0ADE4@2xlp.com


I'm trying to improve the speed of a suite of queries that go across a few million rows.

They use two main "filters" across a variety of columns:

	WHERE (col_1 IS NULL) AND (col_2 IS NULL) AND ((col_3 IS NULL) OR (col_3 = col_1))
	WHERE (col_1 IS TRUE) AND (col_2 IS TRUE) AND ((col_3 IS TRUE) OR (col_4 IS NULL))

I created a dedicated multi-column index for each query to speed them up. That was great.
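For concreteness, the multi-column indexes described here would look something like this (table and column names are placeholders, since the real schema isn't shown):

```sql
-- Sketch only: one composite index per query shape, assuming a table "t"
-- whose columns match the two filters above.
CREATE INDEX idx_t_filter_a ON t (col_1, col_2, col_3);
CREATE INDEX idx_t_filter_b ON t (col_1, col_2, col_3, col_4);
```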

I still don't have the performance where I want it to be - the size of the index seems to be the issue. If the index were on one column instead of 4, I think the scans would complete in time.

I looked online and in the archives, but couldn't find much information on good strategies for dealing with this.

It looks like my best option is to somehow index the "interpretation" of these criteria, rather than the criteria themselves.

The two ways that come to mind are:

1. Alter the table: add a boolean column for each filter-test, index it, then query on that column.
2. Leave the table as-is: write a custom function for each filter, then build an expression (functional) index on it.
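A sketch of both options against the first filter (table name `t`, column types, and function names are all hypothetical):

```sql
-- Option 1: materialize the filter result as an indexed boolean column.
ALTER TABLE t ADD COLUMN filter_a boolean;
UPDATE t SET filter_a = (col_1 IS NULL) AND (col_2 IS NULL)
                        AND ((col_3 IS NULL) OR (col_3 = col_1));
CREATE INDEX idx_t_filter_a ON t (filter_a);
-- (a trigger or application logic is needed to keep filter_a current)

-- Option 2: wrap the filter in an IMMUTABLE function and index the expression.
CREATE FUNCTION is_filter_a(col_1 boolean, col_2 boolean, col_3 boolean)
RETURNS boolean AS $$
  SELECT (col_1 IS NULL) AND (col_2 IS NULL)
         AND ((col_3 IS NULL) OR (col_3 = col_1))
$$ LANGUAGE sql IMMUTABLE;

CREATE INDEX idx_t_expr_a ON t (is_filter_a(col_1, col_2, col_3));

-- The query must repeat the indexed expression for the planner to use it:
-- SELECT ... FROM t WHERE is_filter_a(col_1, col_2, col_3);
```

A related PostgreSQL feature worth noting: a partial index (`CREATE INDEX ... WHERE <filter>`) can index only the rows matching the filter, which keeps the index small when the filter is selective.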

Has anyone else encountered a need like this?

Are there any tips / tricks / things I should look out for? Are there better ways to handle this?
