From: "Isak Hansen" <isak(dot)hansen(at)gmail(dot)com>
To: "Kevin Galligan" <kgalligan(at)gmail(dot)com>
Cc: pgsql-general(at)postgresql(dot)org
Subject: Re: Slow query performance
Date: 2008-10-31 10:40:37
Message-ID: 6b9e1eb20810310340u24f29a1fx7c708b3ec1754848@mail.gmail.com
Lists: pgsql-general
On Wed, Oct 29, 2008 at 9:18 PM, Kevin Galligan <kgalligan(at)gmail(dot)com> wrote:
> I'm approaching the end of my rope here. I have a large database.
> 250 million rows (ish). Each row has potentially about 500 pieces of
> data, although most of the columns are sparsely populated.
>
*snip*
>
> So, went the other direction completely. I rebuilt the database with
> a much larger main table. Any columns populated in 5% or more of the
> rows were added to this table. Maybe 130 columns. Indexes applied to
> most of these. Some limited testing with a smaller table seemed to
> indicate that queries on a single table without a join would work much
> faster.
>
> So, built that huge table. Now query time is terrible. Maybe a
> minute or more for simple queries.
Are indexes on sparsely populated columns already handled efficiently,
or could partial indexes with only non-null values improve things?
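Something along these lines, perhaps; a minimal sketch, with made-up table and column names standing in for yours:

```sql
-- Index only the rows where the sparse column has a value;
-- NULL rows are excluded, keeping the index much smaller.
CREATE INDEX bigtable_colx_notnull_idx
    ON bigtable (colx)
    WHERE colx IS NOT NULL;

-- The planner can use this index for any query whose WHERE
-- clause implies the predicate, e.g.:
SELECT count(*) FROM bigtable WHERE colx = 42;
```

With only ~5% of rows populated, such an index should be roughly a twentieth the size of a full one, which may also help keep more of it in cache.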
Isak