From: Melih Mutlu <m(dot)melihmutlu(at)gmail(dot)com>
To: Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Cc: pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>
Subject: Re: [PATCH] Reuse Workers and Replication Slots during Logical Replication
Date: 2022-12-20 14:44:36
Message-ID: CAGPVpCRTdiL5CzQo5FBZw2O1isudinEkkjg6ZLSK_chdkgjHrw@mail.gmail.com
Lists: pgsql-hackers
Hi Amit,
Amit Kapila <amit(dot)kapila16(at)gmail(dot)com> wrote on Fri, 16 Dec 2022 at 05:46:
> Right, but when the size is 100MB, it seems to be taking a bit more
> time. Do we want to evaluate with different sizes to see how it looks?
> Other than that all the numbers are good.
>
I ran a similar test, this time with both 100 MB and 1 GB tables.
        |    100 MB    |     1 GB
--------+--------------+---------------
master  | 14761.425 ms | 160932.982 ms
--------+--------------+---------------
patch   | 14398.408 ms | 160593.078 ms
This time, the patch appears slightly faster than master. I'm not sure we can say
the patch slows things down (or speeds up) as table size increases; the difference
may be arbitrary or caused by other factors. What do you think?
I also wondered what happens when "max_sync_workers_per_subscription" is set
to 1. In that case, tablesync is done sequentially in both cases, but the
patch uses only one worker and one replication slot during the whole
tablesync process.
Here are the numbers for that case:
        |    100 MB    |     1 GB
--------+--------------+---------------
master  | 27751.463 ms | 312424.999 ms
--------+--------------+---------------
patch   | 27342.760 ms | 310021.767 ms
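For anyone who wants to reproduce the sequential case above, the subscriber can
be limited to a single sync worker like this (a sketch of the setup only; the
exact benchmark commands are not part of this mail):

```sql
-- On the subscriber: allow at most one tablesync worker per subscription,
-- so tables are synchronized one after another rather than in parallel.
ALTER SYSTEM SET max_sync_workers_per_subscription = 1;

-- Reload the configuration; this GUC does not require a server restart.
SELECT pg_reload_conf();
```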
Best,
--
Melih Mutlu
Microsoft