From: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>
To: Jim Nasby <jnasby(at)enova(dot)com>
Cc: Daniel Farina <daniel(at)heroku(dot)com>, "pgsql-performance(at)postgresql(dot)org" <pgsql-performance(at)postgresql(dot)org>
Subject: Re: Horrific time for getting 1 record from an index?
Date: 2013-11-12 00:30:59
Message-ID: CAMkU=1zQg2jxbeOomydgFwRGcwkKTf7Xj+dg7Cwxz9NiUSAzjA@mail.gmail.com
Lists: pgsql-performance
On Mon, Nov 11, 2013 at 3:28 PM, Jim Nasby <jnasby(at)enova(dot)com> wrote:
> On 11/11/13 4:57 PM, Jeff Janes wrote:
>
>> On Mon, Nov 11, 2013 at 1:57 PM, Jim Nasby <jnasby(at)enova(dot)com> wrote:
>> Btree indexes have special code that kills index tuples once the
>> corresponding table tuple is dead-to-all, so only the first such query
>> after the mass deletion becomes vacuum-eligible should be slow, even if
>> a vacuum is never done. But if long-running transactions prevent the
>> dead rows from going out of scope, nothing can be done until those
>> transactions go away.
>>
>
> There is? I didn't know that; can you point me at the code?
>
git grep "kill_prior_tuple"
>
> BTW, I originally had this, even after multiple queries:
>
> Buffers: shared hit=1 read=9476
>
> Then vacuum:
> INFO: index "page_hits_raw_pkey" now contains 50343572 row versions in
> 182800 pages
> DETAIL: 3466871 index row versions were removed.
> 44728 index pages have been deleted, 35256 are currently reusable.
>
> Then...
>
> Buffers: shared hit=1 read=4
>
> So I suspect a vacuum is actually needed...
Hmm. Maybe the kill method doesn't unlink the empty pages from the tree? It only hints the index entries as dead within their pages; actually removing them and recycling empty pages is left to vacuum.
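
For anyone following along, the effect described above can be reproduced with EXPLAIN (ANALYZE, BUFFERS). This is only a sketch against a throwaway table (names here are made up, not from Jim's schema), and it assumes no long-running transaction is holding the deleted rows visible:

```sql
-- Hypothetical setup: a mass delete followed by repeated index scans.
CREATE TABLE t (id bigint PRIMARY KEY, payload text);
INSERT INTO t SELECT g, 'x' FROM generate_series(1, 1000000) g;
DELETE FROM t WHERE id <= 900000;

-- First scan after the deleted rows become dead-to-all: expect a large
-- "Buffers: shared read=..." count, since the scan must visit the heap for
-- each dead index entry (marking those entries dead as a side effect).
EXPLAIN (ANALYZE, BUFFERS) SELECT id FROM t ORDER BY id LIMIT 1;

-- Repeating the same scan should touch far fewer buffers, because the
-- killed index entries are now skipped without heap visits. Note this does
-- not shrink the index: only VACUUM removes the entries and recycles
-- empty index pages.
EXPLAIN (ANALYZE, BUFFERS) SELECT id FROM t ORDER BY id LIMIT 1;
```

Whether the second scan improves can depend on plan choice (an index-only scan behaves differently from a plain index scan), so treat the buffer counts as illustrative rather than guaranteed.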
Cheers,
Jeff