Re: Horrific time for getting 1 record from an index?

From: Jim Nasby <jnasby(at)enova(dot)com>
To: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
Cc: Daniel Farina <daniel(at)heroku(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Horrific time for getting 1 record from an index?
Date: 2013-11-11 23:28:37
Message-ID: 52816825.3020605@enova.com
Lists: pgsql-performance

On 11/11/13 4:57 PM, Jeff Janes wrote:
> On Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <jnasby(at)enova(dot)com> wrote:
> Btree indexes have special code that kills index tuples when the table tuple is dead-to-all, so only the first such query after the mass-deleted rows become vacuum-eligible should be slow, even if a vacuum is not done. But if there are long-running transactions that prevent the dead rows from going out of scope, nothing can be done until those transactions go away.

There is? I didn't know that, can you point me at code?

BTW, I originally had this, even after multiple queries:

Buffers: shared hit=1 read=9476

Then vacuum:
INFO: index "page_hits_raw_pkey" now contains 50343572 row versions in 182800 pages
DETAIL: 3466871 index row versions were removed.
44728 index pages have been deleted, 35256 are currently reusable.

Then...

Buffers: shared hit=1 read=4

So I suspect a vacuum is actually needed...
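The before/after numbers above can be reproduced with a session along these lines. This is a hypothetical sketch: the index name `page_hits_raw_pkey` comes from the VACUUM output above, but the column names (`hit_time`, `id`) and the delete predicate are assumptions, not taken from the original thread.

```sql
-- Hypothetical reproduction; column names and predicate are assumed.
-- 1. Mass-delete rows, leaving dead entries behind in the index:
DELETE FROM page_hits_raw WHERE hit_time < now() - interval '90 days';

-- 2. An index lookup after the delete still wades through dead entries:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM page_hits_raw ORDER BY id DESC LIMIT 1;
-- Buffers: shared hit=1 read=9476   (observed above, before vacuum)

-- 3. VACUUM removes the dead index entries:
VACUUM VERBOSE page_hits_raw;

-- 4. The same lookup now touches only a handful of pages:
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM page_hits_raw ORDER BY id DESC LIMIT 1;
-- Buffers: shared hit=1 read=4     (observed above, after vacuum)
```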
--
Jim Nasby, Lead Data Architect (512) 569-9461
