Re: DB Performance decreases due to often written/accessed table

From: Richard Huxton <dev(at)archonet(dot)com>
To: "Jim C(dot) Nasby" <jim(at)nasby(dot)net>
Cc: Jens Schipkowski <jens(dot)schipkowski(at)apus(dot)co(dot)at>, pgsql-performance(at)postgresql(dot)org
Subject: Re: DB Performance decreases due to often written/accessed table
Date: 2006-10-19 18:00:28
Message-ID: 4537BD3C.7090402@archonet.com
Lists: pgsql-performance

Jim C. Nasby wrote:
> On Thu, Oct 19, 2006 at 06:19:16PM +0100, Richard Huxton wrote:
>> OK - these plans look about the same, but the time is greatly different.
>> Both have rows=140247 as the estimated number of rows in tbl_reg. Either
>> you have many more rows in the second case (in which case you're not
>> running ANALYSE enough) or you have lots of gaps in the table (you're
>> not running VACUUM enough).
>
> Look closer... the actual stats show that the sorts in the second case
> are returning far more rows. And yes, analyze probably needs to happen.

The results are different, I agree, but the plans (and estimates) are
the same. Given the deletes and inserts I wasn't sure whether this was
just lots more rows or a shift in values.
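
For what it's worth, a quick way to check would be something along these
lines (table and column names taken from the plans upthread; adjust to
taste):

    -- refresh the planner statistics and reclaim space from the deletes
    VACUUM ANALYZE tbl_reg;

    -- then compare estimated vs actual row counts again
    EXPLAIN ANALYZE SELECT count(*) FROM tbl_reg;

If the estimates are still way off the actual rows after that, raising
the statistics target on the attr columns might be the next thing to try.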

>> I'd then try putting an index on (attr1,attr2,attr3...attr6) and see if
>> that helps reduce time.
>
> With bitmap index scans, I think it'd be much better to create 6 indexes
> and see which ones actually get used (and then drop the others).

Good idea.
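
Something like the following sketch, perhaps (index names are just
placeholders, and the DROP at the end is only an example):

    -- one single-column index per attribute; the planner can combine
    -- them with bitmap AND/OR as needed
    CREATE INDEX idx_reg_attr1 ON tbl_reg (attr1);
    CREATE INDEX idx_reg_attr2 ON tbl_reg (attr2);
    CREATE INDEX idx_reg_attr3 ON tbl_reg (attr3);
    CREATE INDEX idx_reg_attr4 ON tbl_reg (attr4);
    CREATE INDEX idx_reg_attr5 ON tbl_reg (attr5);
    CREATE INDEX idx_reg_attr6 ON tbl_reg (attr6);

    -- after running the workload for a while, see which ones get used
    SELECT indexrelname, idx_scan
      FROM pg_stat_user_indexes
     WHERE relname = 'tbl_reg'
     ORDER BY idx_scan;

    -- and drop the ones that never get scanned, e.g.
    DROP INDEX idx_reg_attr3;

Remember the unused ones still cost you on every insert/delete, so it's
worth dropping them rather than leaving them around.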

--
Richard Huxton
Archonet Ltd
