From: Nitin Jadhav <nitinjadhavpostgres(at)gmail(dot)com>
To: Andres Freund <andres(at)anarazel(dot)de>
Cc: Bharath Rupireddy <bharath(dot)rupireddyforpostgres(at)gmail(dot)com>, Matthias van de Meent <boekewurm+postgres(at)gmail(dot)com>, Ashutosh Sharma <ashu(dot)coek88(at)gmail(dot)com>, Julien Rouhaud <rjuju123(at)gmail(dot)com>, Bruce Momjian <bruce(at)momjian(dot)us>, Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>, Magnus Hagander <magnus(at)hagander(dot)net>, PostgreSQL Hackers <pgsql-hackers(at)lists(dot)postgresql(dot)org>
Subject: Re: Report checkpoint progress with pg_stat_progress_checkpoint (was: Report checkpoint progress in server logs)
Date: 2022-07-28 09:38:38
Message-ID: CAMm1aWbfFwT+Cf1eSrXtF2R8ffxXTbH=un8unQ8WbcpcbymnYw@mail.gmail.com
Lists: pgsql-hackers
> > To understand the performance effects of the above, I have taken the
> > average of five checkpoints with the patch and without the patch in my
> > environment. Here are the results.
> > With patch: 269.65 s
> > Without patch: 269.60 s
>
> Those look like timed checkpoints - if the checkpoints are sleeping a
> part of the time, you're not going to see any potential overhead.
Yes. The above data was collected from timed checkpoints.

create table t1(a int);
insert into t1 select * from generate_series(1,10000000);

I generated a lot of data using the above queries, which in turn
triggered checkpoints (WAL-based).
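As a sanity check (a suggestion, not part of the original test), the
checkpoint counters can confirm whether the checkpoints observed during
the run were timed or requested:

```sql
-- On PostgreSQL 15 and earlier these counters live in pg_stat_bgwriter
-- (in later releases they moved to pg_stat_checkpointer).
SELECT checkpoints_timed, -- started by checkpoint_timeout expiring
       checkpoints_req    -- forced, e.g. by WAL volume or a manual CHECKPOINT
FROM pg_stat_bgwriter;
```

Comparing the counters before and after the run shows which kind of
checkpoint dominated the measurement.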
---
> To see whether this has an effect you'd have to make sure there's a
> certain number of dirty buffers (e.g. by doing CREATE TABLE AS
> some_query) and then do a manual checkpoint and time how long that
> times.
For this case I generated data using the queries below.

create table t1(a int);
insert into t1 select * from generate_series(1,8000000);

This does not trigger a checkpoint automatically. I issued CHECKPOINT
manually and measured the performance, taking the average of 5
checkpoints. Here are the details.
With patch: 2.457 s
Without patch: 2.334 s
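For reference, a sketch of the measurement procedure described above
(an illustration of the method in psql, not a transcript of the
original session; the table name and row count match the queries
shown):

```sql
-- Run in psql. \timing prints the elapsed time of each statement,
-- which is how the per-checkpoint duration is read off.
\timing on

-- Dirty a large number of shared buffers, then time a manual checkpoint.
DROP TABLE IF EXISTS t1;
CREATE TABLE t1(a int);
INSERT INTO t1 SELECT * FROM generate_series(1, 8000000);

CHECKPOINT;  -- repeat the dirty-then-checkpoint cycle and average, e.g. over 5 runs
```

Each repetition needs to re-dirty the buffers first; an immediately
repeated CHECKPOINT with no dirty pages would finish almost instantly
and skew the average.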
Please share your thoughts.
Thanks & Regards,
Nitin Jadhav
On Thu, Jul 7, 2022 at 5:34 AM Andres Freund <andres(at)anarazel(dot)de> wrote:
>
> Hi,
>
> On 2022-06-13 19:08:35 +0530, Nitin Jadhav wrote:
> > > Have you measured the performance effects of this? On fast storage with large
> > > shared_buffers I've seen these loops in profiles. It's probably fine, but it'd
> > > be good to verify that.
> >
> > To understand the performance effects of the above, I have taken the
> > average of five checkpoints with the patch and without the patch in my
> > environment. Here are the results.
> > With patch: 269.65 s
> > Without patch: 269.60 s
>
> Those look like timed checkpoints - if the checkpoints are sleeping a
> part of the time, you're not going to see any potential overhead.
>
> To see whether this has an effect you'd have to make sure there's a
> certain number of dirty buffers (e.g. by doing CREATE TABLE AS
> some_query) and then do a manual checkpoint and time how long that
> times.
>
> Greetings,
>
> Andres Freund