From: "Bossart, Nathan" <bossartn(at)amazon(dot)com>
To: Dipesh Pandit <dipesh(dot)pandit(at)gmail(dot)com>
Cc: Robert Haas <robertmhaas(at)gmail(dot)com>, Kyotaro Horiguchi <horikyota(dot)ntt(at)gmail(dot)com>, Jeevan Ladhe <jeevan(dot)ladhe(at)enterprisedb(dot)com>, Stephen Frost <sfrost(at)snowman(dot)net>, Andres Freund <andres(at)anarazel(dot)de>, Hannu Krosing <hannuk(at)google(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: .ready and .done files considered harmful
Date: 2021-08-22 04:28:51
Message-ID: 620F3CE1-0255-4D66-9D87-0EADE866985A@amazon.com
Lists: pgsql-hackers
On 5/4/21, 7:07 AM, "Robert Haas" <robertmhaas(at)gmail(dot)com> wrote:
> On Tue, May 4, 2021 at 12:27 AM Andres Freund <andres(at)anarazel(dot)de> wrote:
>> On 2021-05-03 16:49:16 -0400, Robert Haas wrote:
>> > I have two possible ideas for addressing this; perhaps other people
>> > will have further suggestions. A relatively non-invasive fix would be
>> > to teach pgarch.c how to increment a WAL file name. After archiving
>> > segment N, check using stat() whether there's an .ready file for
>> > segment N+1. If so, do that one next. If not, then fall back to
>> > performing a full directory scan.
>>
>> Hm. I wonder if it'd not be better to determine multiple files to be
>> archived in one readdir() pass?
>
> I think both methods have some merit. If we had a way to pass a range
> of files to archive_command instead of just one, then your way is
> distinctly better, and perhaps we should just go ahead and invent such
> a thing. If not, your way doesn't entirely solve the O(n^2) problem,
> since you have to choose some upper bound on the number of file names
> you're willing to buffer in memory, but it may lower it enough that it
> makes no practical difference. I am somewhat inclined to think that it
> would be good to start with the method I'm proposing, since it is a
> clear-cut improvement over what we have today and can be done with a
> relatively limited amount of code change and no redesign, and then
> perhaps do something more ambitious afterward.
I was curious about this, so I wrote a patch (attached) to store
multiple files per directory scan and tested it against the latest
patch in this thread (v9) [0]. Specifically, I set archive_command to
'false', created ~20K WAL segments, then restarted the server with
archive_command set to 'true'. Both the v9 patch and the attached
patch completed archiving all segments in just under a minute. (I
tested the attached patch with NUM_FILES_PER_DIRECTORY_SCAN set to 64,
128, and 256 and didn't observe any significant difference.) The
existing logic took over 4 minutes to complete.
I'm hoping to do this test again with many more (100K+) status files,
as I believe that the v9 patch will be faster at that scale, but I'm
not sure how much faster it will be.
Nathan
Attachment: v1-0001-Improve-performance-of-pgarch_readyXlog-with-many.patch (application/octet-stream, 10.3 KB)