Re: Large data and slow queries

From: Scott Marlowe <scott(dot)marlowe(at)gmail(dot)com>
To: Vick Khera <vivek(at)khera(dot)org>
Cc: pgsql-general <pgsql-general(at)postgresql(dot)org>
Subject: Re: Large data and slow queries
Date: 2017-04-20 15:01:12
Message-ID: CAOR=d=2DAjyAtRaWnLQBD1TK=0PxXKByoMcv8dVBZp+rs-V9QQ@mail.gmail.com
Lists: pgsql-general

On Thu, Apr 20, 2017 at 6:30 AM, Vick Khera <vivek(at)khera(dot)org> wrote:
> I'm curious why you have so many partial indexes. Are you trying to make
> custom indexes per query? It seems to me you might want to consider making
> the indexes general and removing the redundant ones (those that share the
> same prefix of indexed fields).
>
> Secondly, your table is 102GB. Clearly there's a lot of data here. How many
> rows does that come to? I would further suggest that you partition this table
> so that there are no more than about 10 million rows per partition (I've
> done this by using an id % 100 computation). Maybe in your case it makes
> sense to partition it based on the "what" field, since it appears you are
> already trying to do that with your partial indexes.
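
A rough sketch of what that id % 100 scheme could look like with
inheritance-style partitioning (the table and column names here are just
assumptions for illustration, not taken from your schema):

    -- Hypothetical parent table; adjust names and types to the real schema.
    CREATE TABLE events (
        id   bigint      NOT NULL,
        what text        NOT NULL,
        ts   timestamptz NOT NULL
    );

    -- One child table per id % 100 bucket.
    CREATE TABLE events_p00 ( CHECK ( id % 100 = 0 ) ) INHERITS (events);
    CREATE TABLE events_p01 ( CHECK ( id % 100 = 1 ) ) INHERITS (events);
    -- ... and so on up to events_p99, plus an insert trigger or
    -- application-side routing into the right child.

One caveat: constraint exclusion only prunes children when the query itself
repeats the id % 100 = N predicate, which is part of why partitioning on the
"what" field may line up better with the partial indexes already in place.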

I would think a two-field index might be just as effective and wouldn't
require nearly as much maintenance.
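
For example, sticking with the single big table and the assumed column names
from the sketch above, something like:

    -- A single composite index can often replace a whole family of
    -- per-value partial indexes.
    CREATE INDEX events_what_ts_idx ON events (what, ts);

    -- The planner can use the same index for any value of "what",
    -- e.g. (the value 'login' is made up for illustration):
    SELECT *
      FROM events
     WHERE what = 'login'
       AND ts >= now() - interval '1 day';

The single index will be bigger than any one of the partial ones, but it
replaces all of them, and there's no index set to keep in sync with the
queries as they change.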

--
To understand recursion, one must first understand recursion.
