From: David Kerr <dmk(at)mr-paradox(dot)net>
To: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
Cc: Simon Riggs <simon(at)2ndQuadrant(dot)com>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Question on pgbench output
Date: 2009-04-05 17:12:34
Message-ID: 49D8E682.9020601@mr-paradox.net
Lists: pgsql-performance
Tom Lane wrote:
> Simon Riggs <simon(at)2ndQuadrant(dot)com> writes:
>> On Fri, 2009-04-03 at 16:34 -0700, David Kerr wrote:
>>> 400 concurrent users doesn't mean that they're pulling 1.5 megs /
>>> second every second.
>
>> There's a world of difference between 400 connected and 400 concurrent
>> users. You've been testing 400 concurrent users, yet without measuring
>> data transfer. The think time will bring the number of users right down
>> again, but you really need to include the much higher than normal data
>> transfer into your measurements and pgbench won't help there.
>
> Actually pgbench can simulate think time perfectly well: use its \sleep
> command in your script. I think you can even set it up to randomize the
> sleep time.
>
> I agree that it seems David has been measuring a case far beyond what
> his real problem is.
>
> regards, tom lane
>
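For reference, Tom's suggestion can be sketched as a custom pgbench script like the following. This is an illustration, not something posted in the thread: the file name, sleep bounds, and query are made up, and the syntax shown is for recent pgbench releases (`random()` in `\set`; older releases used a separate `\setrandom` meta-command, and the table was named `accounts` rather than `pgbench_accounts` before 8.4):

```
-- think.sql: hypothetical custom script with a randomized think time
\set aid random(1, 100000 * :scale)
-- sleep between 1 and 5 seconds before issuing the query
\set think random(1000, 5000)
\sleep :think ms
SELECT abalance FROM pgbench_accounts WHERE aid = :aid;
```

This would be run with something like `pgbench -f think.sql -c 400 -T 60 dbname`, so each simulated client pauses a random interval between transactions instead of issuing them back to back.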
Fortunately, the network throughput issue is not mine to solve.
Would it be fair to say, given the pgbench output I've posted so far,
that if all my users clicked "go" at the same time (i.e., the worst-case
scenario), I could expect about an 8-second response time from the database?
Thanks
Dave Kerr