From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Michael Paquier <michael(dot)paquier(at)gmail(dot)com>
Cc: Jim Nasby <Jim(dot)Nasby(at)bluetreble(dot)com>, Pavan Deolasee <pavan(dot)deolasee(at)gmail(dot)com>, Heikki Linnakangas <hlinnaka(at)iki(dot)fi>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: FSM corruption leading to errors
Date: 2016-10-24 16:17:07
Message-ID: 19347.1477325827@sss.pgh.pa.us
Lists: pgsql-hackers
I wrote:
> It looks to me like this is approximating the highest block number that
> could possibly have an FSM entry as size of the FSM fork (in bytes)
> divided by 2. But the FSM stores one byte per block. There is overhead
> for the FSM search tree, but in a large relation it's not going to be as
> much as a factor of 2. So I think that to be conservative we need to
> drop the "/ 2". Am I missing something?
Ah, scratch that, after rereading the FSM README I see it's correct,
because there's a binary tree within each page; I'd only remembered
that there was a search tree of pages.
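(To spell out the arithmetic behind the "/ 2", per the layout the FSM
README describes: within each FSM page the slots form a complete binary
tree of one-byte nodes, so a page holding N leaf bytes contains about
2N - 1 nodes in total, i.e. the leaves are roughly half the page. With
one leaf byte per heap block, the fork comes to about two bytes per heap
block, which is why dividing the fork size by 2 bounds the highest block
number that can have an entry.)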
Also, we could at least discount the FSM root page and first intermediate
page, no? That is, the upper limit could be
    pg_relation_size(oid::regclass, 'fsm') / 2 - 2*current_setting('block_size')::BIGINT
I think this is a worthwhile improvement because it reduces the time spent
on small relations. For me, the query as given takes 9 seconds to examine
the regression database, which seems like a lot. Discounting two pages
reduces that to 20 ms.
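For anyone following along, here is a sketch of the kind of whole-database
check being discussed, with the tightened upper limit folded in. This is a
reconstruction along the lines of the query posted earlier in the thread,
not a quote of it, and it assumes the pg_freespacemap contrib module:

    CREATE EXTENSION IF NOT EXISTS pg_freespacemap;

    -- Report FSM entries that claim free space in blocks past the end
    -- of the relation, scanning only up to the tightened upper limit.
    SELECT oid::regclass AS relname, blkno
    FROM pg_class,
         generate_series(
           pg_relation_size(oid) / current_setting('block_size')::bigint,
           pg_relation_size(oid, 'fsm') / 2
             - 2 * current_setting('block_size')::bigint) AS blkno
    WHERE relkind IN ('r', 't', 'm')
      AND pg_relation_size(oid, 'fsm') > 0
      AND pg_freespace(oid::regclass, blkno) > 0;

Any row this returns is an FSM entry pointing past the relation's last
block, which is the corruption at issue in this thread.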
regards, tom lane