From: Robert Haas <robertmhaas(at)gmail(dot)com>
To: Jeevan Ladhe <jeevan(dot)ladhe(at)enterprisedb(dot)com>
Cc: Dilip Kumar <dilipbalaut(at)gmail(dot)com>, Mark Dilger <mark(dot)dilger(at)enterprisedb(dot)com>, "pgsql-hackers(at)postgresql(dot)org" <pgsql-hackers(at)postgresql(dot)org>, tushar <tushar(dot)ahuja(at)enterprisedb(dot)com>
Subject: Re: refactoring basebackup.c
Date: 2021-09-21 17:20:04
Message-ID: CA+TgmoYSaC25B9dNsSX-qaw-jHBDip-Xq+tVcqfs__WJsd2Nvg@mail.gmail.com
Lists: pgsql-hackers
On Tue, Sep 21, 2021 at 9:35 AM Jeevan Ladhe
<jeevan(dot)ladhe(at)enterprisedb(dot)com> wrote:
> Here is a patch for lz4 based on the v5 set of patches. The patch adapts to the
> bbsink changes and now makes provision for the required output buffer length
> using the new callback function bbsink_lz4_begin_backup().
>
> Sample command to take backup:
> pg_basebackup -t server:/tmp/data_lz4 -Xnone --server-compression=lz4
>
> Please let me know your thoughts.
This pretty much looks right, with the exception of the autoFlush
thing about which I sent a separate email. I need to write docs for
all of this, and ideally test cases. It might also be good if
pg_basebackup had an option to un-gzip or un-lz4 archives, but I
haven't thought too hard about what would be required to make that
work.
+ if (opt->compression == BACKUP_COMPRESSION_LZ4)
I think this should be "else if".
+ /* First of all write the frame header to destination buffer. */
+ Assert(CHUNK_SIZE >= LZ4F_HEADER_SIZE_MAX);
+ headerSize = LZ4F_compressBegin(mysink->ctx,
+ mysink->base.bbs_next->bbs_buffer,
+ CHUNK_SIZE,
+ prefs);
I think this is wrong. I think you should be passing bbs_buffer_length
instead of CHUNK_SIZE, and I think you can just delete CHUNK_SIZE. If
you think otherwise, why?
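To make that concrete, here is a sketch of the call I have in mind, using the
field names from the quoted hunk plus bbs_buffer_length, the buffer-length
field added by the bbsink changes:

    /*
     * Sketch: size the destination by the next sink's actual buffer length
     * rather than a separate CHUNK_SIZE constant.
     */
    headerSize = LZ4F_compressBegin(mysink->ctx,
                                    mysink->base.bbs_next->bbs_buffer,
                                    mysink->base.bbs_next->bbs_buffer_length,
                                    prefs);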
+ * sink's bbs_buffer of length that can accomodate the compressed input
Spelling.
+ * Make it next multiple of BLCKSZ since the buffer length is expected so.
The buffer length is expected to be a multiple of BLCKSZ, so round up.
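Something like this, say (the variable name here is just for illustration):

    /*
     * Round the required output buffer size up to the next multiple of
     * BLCKSZ, since the bbsink buffer length is expected to be one.
     */
    output_buffer_bound = ((output_buffer_bound + BLCKSZ - 1) / BLCKSZ) * BLCKSZ;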
+ * If we are falling short of available bytes needed by
+ * LZ4F_compressUpdate() per the upper bound that is decided by
+ * LZ4F_compressBound(), send the archived contents to the next sink to
+ * process it further.
If the number of available bytes has fallen below the value computed
by LZ4F_compressBound(), ask the next sink to process the data so that
we can empty the buffer.
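That is, something along these lines (treat the bytes_written field as a
placeholder for however you track how much of the next sink's buffer is
already filled):

    /*
     * If the worst-case compressed output for this input chunk no longer
     * fits in the remaining space, have the next sink process what has
     * been buffered so far and start over at the beginning of the buffer.
     */
    size_t compressed_bound = LZ4F_compressBound(avail_in, prefs);

    if (mysink->bytes_written + compressed_bound >
        mysink->base.bbs_next->bbs_buffer_length)
    {
        bbsink_archive_contents(mysink->base.bbs_next, mysink->bytes_written);
        mysink->bytes_written = 0;
    }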
--
Robert Haas
EDB: http://www.enterprisedb.com