Hi,
I'm in a situation where we quite often generate more WAL than we can
archive. The thing is - archiving takes a long(ish) time, because it's
a multi-step process and includes talking to remote servers over the
network. I've tested that simply by running the archiving in parallel
I can easily get 2-3 times higher throughput.
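
To give an idea of what I mean, here is a rough sketch of the kind of
test I ran - paths and the archive script name are made up, and the
real command does the compress/encrypt/push-over-network steps:

    import subprocess
    from concurrent.futures import ThreadPoolExecutor
    from pathlib import Path

    WAL_DIR = Path("/var/lib/postgresql/pg_wal")          # made-up path
    ARCHIVE_SCRIPT = "/usr/local/bin/archive_one_wal.sh"  # placeholder for our real script

    def archive_one(segment: Path) -> int:
        # the real step talks to remote servers, so it is mostly waiting on the network
        return subprocess.call([ARCHIVE_SCRIPT, str(segment)])

    def archive_batch(segments, workers=4):
        # running several archive commands at once is where the
        # 2-3x throughput improvement came from in my tests
        with ThreadPoolExecutor(max_workers=workers) as pool:
            return list(pool.map(archive_one, segments))

    if __name__ == "__main__":
        # grab a handful of segments; real selection logic omitted
        ready = sorted(WAL_DIR.glob("00000001*"))[:16]
        archive_batch(ready)
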
But - I'd prefer to keep PostgreSQL aware of what is archived and what
is not, so I can't do the parallelization on my own.
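
For context, the bookkeeping I don't want to lose is the .ready/.done
files in pg_wal/archive_status: PostgreSQL creates SEGMENT.ready when a
segment needs archiving and renames it to SEGMENT.done once
archive_command succeeds. Something like this (path made up) shows the
backlog I'm fighting:

    from pathlib import Path

    STATUS_DIR = Path("/var/lib/postgresql/pg_wal/archive_status")  # made-up path

    # .ready = waiting for archive_command, .done = already archived
    pending = sorted(p.stem for p in STATUS_DIR.glob("*.ready"))
    done = len(list(STATUS_DIR.glob("*.done")))
    print(f"{len(pending)} segments waiting, {done} already archived")
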
So, the question is: is it technically possible to have parallel
archiving, and would anyone be willing to work on it? (Sorry, my
C skills are basically nonexistent, so I can't realistically hack it
myself.)
Best regards,
depesz