Re: DB Performance decreases due to often written/accessed

From: "Jim C(dot) Nasby" <jim(at)nasby(dot)net>
To: Richard Huxton <dev(at)archonet(dot)com>
Cc: Jens Schipkowski <jens(dot)schipkowski(at)apus(dot)co(dot)at>, pgsql-performance(at)postgresql(dot)org
Subject: Re: DB Performance decreases due to often written/accessed
Date: 2006-10-19 17:22:50
Message-ID: 20061019172249.GT71084@nasby.net
Lists: pgsql-performance

On Thu, Oct 19, 2006 at 06:19:16PM +0100, Richard Huxton wrote:
> OK - these plans look about the same, but the time is greatly different.
> Both have rows=140247 as the estimated number of rows in tbl_reg. Either
> you have many more rows in the second case (in which case you're not
> running ANALYSE enough) or you have lots of gaps in the table (you're
> not running VACUUM enough).

Look closer... the actual stats show that the sorts in the second case
are returning far more rows. And yes, analyze probably needs to happen.
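
For example, a rough sketch against the tbl_reg table from the plans above
(the WHERE clause and literal are purely illustrative, since the original
query isn't shown here):

    -- refresh planner statistics, and reclaim dead-row space while at it
    VACUUM ANALYZE tbl_reg;

    -- then compare the planner's estimate (rows=...) against the actual
    -- counts (actual time=... rows=...) in the EXPLAIN ANALYZE output
    EXPLAIN ANALYZE SELECT * FROM tbl_reg WHERE attr1 = 'foo';

If estimated and actual row counts still differ wildly after that, ANALYZE
isn't being run often enough for how fast the table is changing.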

> I'd then try putting an index on (attr1,attr2,attr3...attr6) and see if
> that helps reduce time.

With bitmap index scans, I think it'd be much better to create six
single-column indexes (one per attr column) and see which ones actually
get used (and then drop the others).
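
Something along these lines (just a sketch; the index names are made up,
and this assumes the attr columns live in tbl_reg as in the plans above):

    CREATE INDEX idx_tbl_reg_attr1 ON tbl_reg (attr1);
    CREATE INDEX idx_tbl_reg_attr2 ON tbl_reg (attr2);
    CREATE INDEX idx_tbl_reg_attr3 ON tbl_reg (attr3);
    CREATE INDEX idx_tbl_reg_attr4 ON tbl_reg (attr4);
    CREATE INDEX idx_tbl_reg_attr5 ON tbl_reg (attr5);
    CREATE INDEX idx_tbl_reg_attr6 ON tbl_reg (attr6);

    -- after the workload has run for a while, check which indexes the
    -- planner actually scans
    SELECT indexrelname, idx_scan
      FROM pg_stat_user_indexes
     WHERE relname = 'tbl_reg';

    -- DROP INDEX any whose idx_scan stays at zero

The planner can combine several single-column indexes in one query via a
BitmapAnd, which is why separate indexes are worth trying here before
committing to one wide multicolumn index.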
--
Jim Nasby jim@nasby.net
EnterpriseDB http://enterprisedb.com 512.569.9461 (cell)
