From: | "Erik Rijkers" <er(at)xs4all(dot)nl> |
---|---|
To: | pgsql-hackers(at)postgresql(dot)org |
Subject: | testing HS/SR - 1 vs 2 performance |
Date: | 2010-04-09 23:23:15 |
Message-ID: | 8319df0a5f4c59cea55459dbc76e40c1.squirrel@webmail.xs4all.nl |
Lists: pgsql-hackers
Using 9.0devel, CVS HEAD of 2010.04.08.
I am trying to understand the performance difference
between the primary and the standby under a standard
pgbench read-only test.
The server has 32 GB of RAM and two quad-core CPUs.
primary:
tps = 34606.747930 (including connections establishing)
tps = 34527.078068 (including connections establishing)
tps = 34654.297319 (including connections establishing)
standby:
tps = 700.346283 (including connections establishing)
tps = 717.576886 (including connections establishing)
tps = 740.522472 (including connections establishing)
transaction type: SELECT only
scaling factor: 1000
query mode: simple
number of clients: 20
number of threads: 1
duration: 900 s
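For reference, a minimal sketch of the invocation these numbers
imply; the database name "bench" and the host are placeholders,
not taken from my actual runs:

  # one-time initialization at scale 1000 (roughly 15 GB of data)
  pgbench -i -s 1000 bench

  # read-only run: -S select-only, 20 clients, 1 thread, 900 s
  pgbench -S -c 20 -j 1 -T 900 -h <host> bench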
Both instances have:
max_connections = 100
shared_buffers = 256MB
checkpoint_segments = 50
effective_cache_size = 16GB
See also:
http://archives.postgresql.org/pgsql-testers/2010-04/msg00005.php
(the same comparison with scale 10_000)
I understand that in the scale=1000 case there is a huge
cache effect (the ~15 GB dataset fits comfortably in the
32 GB of RAM), but why doesn't that apply to the pgbench
runs against the standby? (And in the scale=10_000 case
the differences are still rather large.)
Maybe these differences are as expected, but I can't find
any explanation in the documentation.
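For what it's worth, one way to check whether the standby is
actually hitting cache would be to compare the block-hit counters
on both nodes; pg_stat_database should be readable on a hot
standby as well. A diagnostic sketch (database name again a
placeholder):

  psql -d bench -c "SELECT datname, blks_hit, blks_read
                      FROM pg_stat_database
                     WHERE datname = current_database();"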
thanks,
Erik Rijkers