From: Ashutosh Sharma <ashu(dot)coek88(at)gmail(dot)com>
To: Robert Haas <robertmhaas(at)gmail(dot)com>
Cc: Andres Freund <andres(at)anarazel(dot)de>, Mithun Cy <mithun(dot)cy(at)enterprisedb(dot)com>, pgsql-hackers <pgsql-hackers(at)postgresql(dot)org>, Amit Kapila <amit(dot)kapila16(at)gmail(dot)com>
Subject: Re: Perf Benchmarking and regression.
Date: 2016-05-12 12:39:07
Message-ID: CAE9k0PkFEhVq-Zg4MH0bZ-zt_oE5PAS6dAuxRCXwX9kEVWceag@mail.gmail.com
Lists: pgsql-hackers
Hi,
Please find the test results for the following set of combinations taken at
128 client counts:
1) Unpatched master, default *_flush_after           : TPS = 10925.882396
2) Unpatched master, *_flush_after=0                 : TPS = 18613.343529
3) That line removed with #if 0, default *_flush_after : TPS = 9856.809278
4) That line removed with #if 0, *_flush_after=0     : TPS = 18158.648023
Here, "that line" refers to the call "AddWaitEventToSet(FeBeWaitSet,
WL_POSTMASTER_DEATH, -1, NULL, NULL);" in pq_init().
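
For the *_flush_after=0 runs, the three GUCs (bgwriter_flush_after,
checkpointer_flush_after and backend_flush_after) were set to zero, as in my
earlier mail. Roughly, that just means adding the following options to the
server command line shown further below (setting them in postgresql.conf
works equally well):

    -c bgwriter_flush_after=0 -c checkpointer_flush_after=0 -c backend_flush_after=0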
Please note that I had earlier taken the readings with the data directory and
pg_xlog on the same HDD. This time I moved pg_xlog to an SSD before taking
the readings. With pg_xlog and the data directory on the same HDD I was
seeing considerably lower performance; for the "That line removed with #if 0,
*_flush_after=0" case, for example, I was getting 7367.709378 TPS.
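
(In case anyone wants to reproduce that part of the setup: one common way to
put pg_xlog on a separate device is to stop the server, move the directory and
symlink it back into the data directory. The mount point below is just a
placeholder, not necessarily the exact path used here:

    pg_ctl stop -D $PGDATA
    mv $PGDATA/pg_xlog /mnt/ssd/pg_xlog        # /mnt/ssd is a placeholder
    ln -s /mnt/ssd/pg_xlog $PGDATA/pg_xlog
    pg_ctl start -D $PGDATA
)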
The commit on which the above readings were taken, along with the pgbench
commands used, is given below:
commit 8a13d5e6d1bb9ff9460c72992657077e57e30c32
Author: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Date: Wed May 11 17:06:53 2016 -0400
Fix infer_arbiter_indexes() to not barf on system columns.
Non-default settings and test:
./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c
max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c
checkpoint_completion_target=0.9 &
./pgbench -i -s 1000 postgres
./pgbench -c 128 -j 128 -T 1800 -M prepared postgres
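
Since Robert asked for all four sets of numbers to be taken with the same
script, here is a rough sketch of how the four combinations can be driven from
one shell script. The build directories, data directory and results file below
are placeholders rather than my actual paths; the second build is the tree with
the WL_POSTMASTER_DEATH call under #if 0:

    #!/bin/sh
    PGDATA=/path/to/data
    # Two builds: unpatched master, and master with the
    # AddWaitEventToSet(..., WL_POSTMASTER_DEATH, ...) call under #if 0.
    for BIN in /path/to/master/bin /path/to/if0/bin; do
        for FLUSH in "" "-c bgwriter_flush_after=0 -c checkpointer_flush_after=0 -c backend_flush_after=0"; do
            $BIN/postgres -D "$PGDATA" -c shared_buffers=8GB -N 200 \
                -c min_wal_size=15GB -c max_wal_size=20GB \
                -c checkpoint_timeout=900 -c maintenance_work_mem=1GB \
                -c checkpoint_completion_target=0.9 $FLUSH &
            sleep 10
            $BIN/pgbench -i -s 1000 postgres
            $BIN/pgbench -c 128 -j 128 -T 1800 -M prepared postgres >> results.txt
            $BIN/pg_ctl stop -D "$PGDATA" -m fast
            sleep 5
        done
    done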
With Regards,
Ashutosh Sharma
EnterpriseDB: http://www.enterprisedb.com
On Thu, May 12, 2016 at 9:22 AM, Robert Haas <robertmhaas(at)gmail(dot)com> wrote:
> On Wed, May 11, 2016 at 12:51 AM, Ashutosh Sharma <ashu(dot)coek88(at)gmail(dot)com>
> wrote:
> > I am extremely sorry for the delayed response. As suggested by you, I have
> > taken the performance readings at 128 client counts after making the
> > following two changes:
> >
> > 1). Removed AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL,
> > NULL); from pq_init(). Below is the git diff for the same.
> >
> > diff --git a/src/backend/libpq/pqcomm.c b/src/backend/libpq/pqcomm.c
> > index 8d6eb0b..399d54b 100644
> > --- a/src/backend/libpq/pqcomm.c
> > +++ b/src/backend/libpq/pqcomm.c
> > @@ -206,7 +206,9 @@ pq_init(void)
> > AddWaitEventToSet(FeBeWaitSet, WL_SOCKET_WRITEABLE,
> > MyProcPort->sock,
> > NULL, NULL);
> > AddWaitEventToSet(FeBeWaitSet, WL_LATCH_SET, -1, MyLatch, NULL);
> > +#if 0
> > AddWaitEventToSet(FeBeWaitSet, WL_POSTMASTER_DEATH, -1, NULL, NULL);
> > +#endif
> >
> > 2). Disabled the guc vars "bgwriter_flush_after", "checkpointer_flush_after"
> > and "backend_flush_after" by setting them to zero.
> >
> > After doing the above two changes below are the readings i got for 128
> > client counts:
> >
> > CASE : Read-Write Tests when data exceeds shared buffers.
> >
> > Non Default settings and test
> > ./postgres -c shared_buffers=8GB -N 200 -c min_wal_size=15GB -c
> > max_wal_size=20GB -c checkpoint_timeout=900 -c maintenance_work_mem=1GB -c
> > checkpoint_completion_target=0.9 &
> >
> > ./pgbench -i -s 1000 postgres
> >
> > ./pgbench -c 128 -j 128 -T 1800 -M prepared postgres
> >
> > Run1 : tps = 9690.678225
> > Run2 : tps = 9904.320645
> > Run3 : tps = 9943.547176
> >
> > Please let me know if i need to take readings with other client counts as
> > well.
>
> Can you please take four new sets of readings, like this:
>
> - Unpatched master, default *_flush_after
> - Unpatched master, *_flush_after=0
> - That line removed with #if 0, default *_flush_after
> - That line removed with #if 0, *_flush_after=0
>
> 128 clients is fine. But I want to see four sets of numbers that were
> all taken by the same person at the same time using the same script.
>
> Thanks,
>
> --
> Robert Haas
> EnterpriseDB: http://www.enterprisedb.com
> The Enterprise PostgreSQL Company
>