From: Greg Smith <greg(at)2ndQuadrant(dot)com>
To: Costin Oproiu <costin(dot)oproiu(at)gmail(dot)com>
Cc: pgsql-performance(at)postgresql(dot)org
Subject: Re: pgbench intriguing results: better tps figures for larger scale factor
Date: 2013-03-04 22:46:29
Message-ID: 51352445.4050305@2ndQuadrant.com
Lists: pgsql-performance
On 2/26/13 4:45 PM, Costin Oproiu wrote:
> First, I've got no good explanation for this and it would be nice to
> have one. As far as I can understand this issue, the heaviest update
> traffic should be on the branches table and should affect all tests.
From http://www.postgresql.org/docs/current/static/pgbench.html :
"For the default TPC-B-like test scenario, the initialization scale
factor (-s) should be at least as large as the largest number of clients
you intend to test (-c); else you'll mostly be measuring update
contention. There are only -s rows in the pgbench_branches table, and
every transaction wants to update one of them, so -c values in excess of
-s will undoubtedly result in lots of transactions blocked waiting for
other transactions."
I normally see peak TPS at a scale of around 100 on current-generation
hardware, stuff in the 4 to 24 core range. Nowadays there really is no
reason to consider running pgbench on a system with a smaller scale than
that. I usually get a rough idea of things by running with scales of 100,
250, 500, 1000, and 2000.
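
A minimal sketch of that kind of sweep (client count, thread count, duration, and the "bench" database name are arbitrary placeholders here, not a recommendation for any particular hardware):

    for s in 100 250 500 1000 2000; do
        pgbench -i -s $s bench              # rebuild the tables at this scale
        pgbench -c 32 -j 4 -T 300 bench     # run the same client load against each scale
    done
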
--
Greg Smith 2ndQuadrant US greg(at)2ndQuadrant(dot)com Baltimore, MD
PostgreSQL Training, Services, and 24x7 Support www.2ndQuadrant.com