From: David Steele <david(at)pgmasters(dot)net>
To: Andrey Borodin <x4mmm(at)yandex-team(dot)ru>, Stephen Frost <sfrost(at)snowman(dot)net>
Cc: hubert depesz lubaczewski <depesz(at)depesz(dot)com>, pgsql-hackers mailing list <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: Would it be possible to have parallel archiving?
Date: 2018-08-28 21:15:54
Message-ID: dbd2184b-987b-7a49-a341-f8c44940bf3a@pgmasters.net
Lists: pgsql-hackers
On 8/28/18 4:34 PM, Andrey Borodin wrote:
>>
>> I still don't think it's a good idea and I specifically recommend
>> against making changes to the archive status files- those are clearly
>> owned and managed by PG and should not be whacked around by external
>> processes.
> If you do not write to archive_status, you basically have two options:
> 1. On every archive_command invocation, recheck that the file being archived is identical to the file already in the archive. This hurts performance.
> 2. Hope that the files match. This adds no safety compared to whacking archive_status, and is just as prone to core changes as writes are.
Another option is to maintain the state of what has been safely archived
(and what has errored) locally. This allows pgBackRest to rapidly return
the status to Postgres without rechecking against the repository, which,
as you note, would be very slow.
This allows more than one archive_command to be safely run since all
archive commands must succeed before Postgres will mark the segment as done.
It's true that reading archive_status is susceptible to core changes,
but the less interaction the better, I think.
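To illustrate the local-state idea (a minimal sketch only, not pgBackRest's actual implementation; the function and marker-file naming here are hypothetical): an archive_command helper can record per-segment success in a local state directory, so a repeated invocation for an already-archived segment returns success immediately instead of re-verifying against the repository.

```python
import os
import shutil

def archive_segment(wal_path, repo_dir, state_dir):
    """Copy a WAL segment to repo_dir, tracking status in state_dir.

    Returns 0 (success, as archive_command expects). If a prior
    invocation already archived this segment, the local ".ok" marker
    lets us skip the slow copy/verify against the repository.
    """
    seg = os.path.basename(wal_path)
    ok_marker = os.path.join(state_dir, seg + ".ok")
    if os.path.exists(ok_marker):
        # Fast path: already archived, report success to Postgres.
        return 0
    # Slow path: push the segment to the repository.
    shutil.copy2(wal_path, os.path.join(repo_dir, seg))
    # Record success locally only after the copy completed.
    with open(ok_marker, "w"):
        pass
    return 0
```

A real implementation would also record errors, fsync the marker, and verify checksums on the slow path; the sketch only shows why the fast path makes repeated archive_command calls cheap.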
Regards,
--
-David
david(at)pgmasters(dot)net