Re: Alternatives to very large tables with many performance-killing indices?

From: Merlin Moncure <mmoncure(at)gmail(dot)com>
To: Wells Oliver <wellsoliver(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Alternatives to very large tables with many performance-killing indices?
Date: 2012-08-16 21:00:41
Message-ID: CAHyXU0xN1obQtpWgnXkfMHaAjt+6UvvGhMxEPBz58NboTXi0Rg@mail.gmail.com
Lists: pgsql-general

On Thu, Aug 16, 2012 at 3:54 PM, Wells Oliver <wellsoliver(at)gmail(dot)com> wrote:
> Hey folks, a question. We have a table that's getting large (6 million rows
> right now, but hey, no end in sight). It's wide-ish, too: 98 columns.
>
> The problem is that each of these columns needs to be searchable quickly at
> an application level, and I'm far too responsible an individual to put 98
> indexes on a table. Wondering what you folks have come across in terms of
> creative solutions that might be native to Postgres. I can build something
> that indexes the data and caches it and runs separately from PG, but I
> wanted to exhaust all native options first.

Well, you could explore normalizing your table, particularly if many
of your 98 columns are NULL most of the time. Another option would be
to store the attributes in an hstore column and index it with GIN/GiST --
especially if you need to filter on multiple columns at once. Organizing
big data for fast searching is a complicated topic, and it requires
thinking carefully about which queries you actually need to optimize for.
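
As a rough sketch of the hstore route (the table and column names here
are hypothetical, not from your schema): the sparse attributes collapse
into a single hstore column, and one GIN index then covers
equality/containment lookups on any key or combination of keys.

    CREATE EXTENSION IF NOT EXISTS hstore;

    -- fixed, always-present columns stay relational; the sparse
    -- searchable attributes go into one hstore column
    CREATE TABLE wide_stats (
        id       bigserial PRIMARY KEY,
        recorded timestamptz NOT NULL,
        attrs    hstore NOT NULL DEFAULT ''
    );

    -- one GIN index instead of 98 btrees
    CREATE INDEX wide_stats_attrs_gin ON wide_stats USING gin (attrs);

    -- containment (@>) queries use the index, including multi-key filters:
    SELECT * FROM wide_stats WHERE attrs @> 'color => red';
    SELECT * FROM wide_stats WHERE attrs @> 'color => red, size => large';

One caveat: hstore values are text, so the GIN index buys you equality
and containment lookups; range predicates on numeric attributes would
still need expression indexes (or a normalized table).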

merlin
