Re: pg_dump versus ancient server versions

From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: Andres Freund <andres(at)anarazel(dot)de>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Alvaro Herrera <alvherre(at)alvh(dot)no-ip(dot)org>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: pg_dump versus ancient server versions
Date: 2021-10-25 20:29:11
Message-ID: c51ede27-3b6d-1d27-9d1a-227526858db2@dunslane.net
Lists: pgsql-hackers


On 10/25/21 13:06, Andres Freund wrote:
> Hi,
>
> On 2021-10-25 10:23:40 -0400, Tom Lane wrote:
>> Also, I concur with Andrew's point that we'd really have to have
>> buildfarm support. However, this might not be as bad as it seems.
>> In principle we might just need to add resurrected branches back to
>> the branches_to_build list. Given my view of what the back-patching
>> policy ought to be, a new build in an old branch might only be
>> required a couple of times a year, which would not be an undue
>> investment of buildfarm resources.
> FWIW, if helpful I could easily specify a few additional branches to some of
> my buildfarm animals. Perhaps serinus/flaviventris (snapshot gcc wo/w
> optimizations) so we'd see problems coming early? I could also add a
> recent-clang one.
>
> I think doing this to a few designated animals is a better idea than wasting
> cycles and space on a lot of animals.

Right now the server will only accept results for something in
branches_of_interest.txt. So we would need to modify that.
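For illustration, the gate being described might look something like the sketch below. Only the file name `branches_of_interest.txt` comes from this thread; the check itself, the branch names, and the paths are illustrative assumptions, not the server's actual code:

```shell
# Hypothetical sketch of the server-side filter: a submitted result is
# accepted only if its branch appears in branches_of_interest.txt.
# Everything here except the file name is an illustrative assumption.
set -e
cd "$(mktemp -d)"
printf 'HEAD\nREL_14_STABLE\nREL_13_STABLE\n' > branches_of_interest.txt

reported_branch=REL_13_STABLE
if grep -qxF "$reported_branch" branches_of_interest.txt; then
    echo "accepted"
else
    echo "rejected"
fi
```

Under that model, resurrecting an old branch would amount to appending its name to the list on the server.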

I tend to agree that we don't need a whole lot of cross platform testing
here.

>
>
>> It seems like a fresh checkout from the repo would be little more expensive
>> than the current copy-a-checkout process.)
> I haven't looked in detail, but from what I've seen in the logs the
> is-there-anything-new check is already not cheap, and does a checkout / update
> of the git directory.
>
>

If you have removed the work tree (with the "rm_worktrees => 1" setting)
then the client restores it by doing a checkout. It then does a "git
fetch", and then, as you say, looks to see if there is anything new. If
you know of a better way to manage this, please let me know. On crake
(which is actually checking out four different repos) the checkout step
typically takes one or two seconds.
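A minimal, self-contained sketch of that per-run sequence, with a pair of throwaway local repos standing in for the community mirror (branch and path names are illustrative, not the client's actual code):

```shell
# Sketch of the per-run "is there anything new" check: restore the work
# tree, fetch, then compare the local and remote-tracking heads.
set -e
scratch=$(mktemp -d)
git init -q "$scratch/upstream"
git -C "$scratch/upstream" -c user.name=bf -c user.email=bf@example.org \
    commit -q --allow-empty -m "initial"
git clone -q "$scratch/upstream" "$scratch/mirror"
cd "$scratch/mirror"
branch=$(git rev-parse --abbrev-ref HEAD)

# rm_worktrees => 1 removes the checked-out files between runs; a forced
# checkout restores them from the local object store.
git checkout -qf "$branch"

# Fetch, then compare local HEAD against the remote-tracking branch.
git fetch -q origin
if [ "$(git rev-parse HEAD)" != "$(git rev-parse "origin/$branch")" ]; then
    echo "new commits: build needed"
else
    echo "nothing new: skip this run"
fi
```

The checkout and the comparison are cheap local operations; only the fetch touches the network, which matches the observation that the freshness check dominates.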

Copying the work tree can take a few seconds; to avoid that on
Unix/msys, use vpath builds.

cheers

andrew

--
Andrew Dunstan
EDB: https://www.enterprisedb.com
