From: | Greg Stark <stark(at)mit(dot)edu> |
---|---|
To: | Robert Haas <robertmhaas(at)gmail(dot)com> |
Cc: | Andres Freund <andres(at)anarazel(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org> |
Subject: | Re: mdnblocks() sabotages error checking in _mdfd_getseg() |
Date: | 2015-12-10 17:55:37 |
Message-ID: | CAM-w4HOSX2gCFiZ-L0cTp+9JAjeVLm3SJm6roXKrVLvsbAkHAQ@mail.gmail.com |
Lists: | pgsql-hackers |
On Thu, Dec 10, 2015 at 4:47 PM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
>
> It's not straightforward, but I don't think that's the reason. What
> we could do is look at the call sites that use
> RelationGetNumberOfBlocks() and change some of them to get the
> information some other way instead. I believe get_relation_info() and
> initscan() are the primary culprits, accounting for some enormous
> percentage of the system calls we do on a read-only pgbench workload.
> Those functions certainly know enough to consult a metapage if we had
> such a thing.
Would this not run into a chicken-and-egg problem with recovery?
Unless you're going to fsync the meta page whenever the file is
extended, you'll have to xlog any updates to it and treat the
in-memory values as authoritative. But when replaying xlog you'll see
obsolete, inconsistent versions on disk and won't have the correct
values in memory either.
It seems to me that fixing the linked lists of files is orthogonal to
whether the file lengths on disk are authoritative. You could always
keep the lengths, or at least the number of files, cached and updated
in shared memory in a more efficient storage format.
--
greg