| From: | Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us> |
|---|---|
| To: | Simon Riggs <simon(at)2ndQuadrant(dot)com> |
| Cc: | David Kerr <dmk(at)mr-paradox(dot)net>, pgsql-performance(at)postgresql(dot)org |
| Subject: | Re: Question on pgbench output |
| Date: | 2009-04-05 15:46:52 |
| Message-ID: | 4665.1238946412@sss.pgh.pa.us |
| Lists: | pgsql-performance |
Simon Riggs <simon(at)2ndQuadrant(dot)com> writes:
> On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote:
>> 400 concurrent users doesn't mean that they're pulling 1.5 megs /
>> second every second.
> There's a world of difference between 400 connected and 400 concurrent
> users. You've been testing 400 concurrent users, yet without measuring
> data transfer. The think time will bring the number of users right down
> again, but you really need to include the much higher than normal data
> transfer into your measurements and pgbench won't help there.
Actually pgbench can simulate think time perfectly well: use its \sleep
command in your script. I think you can even set it up to randomize the
sleep time.
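(A minimal custom-script sketch of what Tom describes, using pgbench's \setrandom and \sleep meta-commands; the variable name, sleep range, and query are illustrative, not from the thread:)

```
-- think.sql: one transaction with a randomized think time
\setrandom think 500 2000
\setrandom aid 1 100000
SELECT abalance FROM accounts WHERE aid = :aid;
\sleep :think ms
```

Run with something like `pgbench -c 400 -f think.sql ...` so each simulated user pauses between 500 ms and 2 s after its query.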
I agree that David seems to have been measuring a case far more extreme than
his real problem.
regards, tom lane