From: Tom Lane <tgl(at)sss(dot)pgh(dot)pa(dot)us>
To: Greg Smith <gsmith(at)gregsmith(dot)com>
Cc: David Kerr <dmk(at)mr-paradox(dot)net>, pgsql-performance(at)postgresql(dot)org
Subject: Re: Question on pgbench output
Date: 2009-04-03 22:52:26
Message-ID: 24809.1238799146@sss.pgh.pa.us
Lists: pgsql-performance
Greg Smith <gsmith(at)gregsmith(dot)com> writes:
> pgbench is extremely bad at simulating large numbers of clients. The
> pgbench client operates as a single thread that handles parsing the
> input files, sending commands to the server, and processing the
> responses. It's very easy to end up in a situation where that
> bottlenecks at the pgbench client long before getting to 400 concurrent
> connections.
Yeah, good point.
> That said, if you're in the hundreds of transactions per second range,
> that probably isn't biting you yet. I've seen it become a problem more
> once you get to around 5000+ operations per second.
However, I don't think anyone else has been pgbench'ing transactions
where client-side libpq has to absorb (and then discard) a megabyte of
data per xact. I wouldn't be surprised if that eats enough CPU to
make it an issue. David, did you pay any attention to how busy the
pgbench process was?
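A quick back-of-the-envelope on that client-side load: even at only a few hundred transactions per second, a megabyte of result data per transaction means the single pgbench/libpq thread has to absorb hundreds of megabytes per second on its own. A rough sketch (the 300 tps figure is an assumption for illustration; the ~1 MB per transaction comes from the thread):

```python
# Rough estimate of the data rate the single-threaded pgbench client
# must absorb, under assumed workload numbers.
tps = 300                    # assumed transactions per second (illustrative)
bytes_per_xact = 1024 ** 2   # ~1 MB of result data per transaction

client_bytes_per_sec = tps * bytes_per_xact
client_mb_per_sec = client_bytes_per_sec / 1024 ** 2

print(f"pgbench client must absorb ~{client_mb_per_sec:.0f} MB/s")
```

All of that parsing and discarding happens in one process, which is why watching how busy the pgbench process itself is matters here.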
Another thing that strikes me as a bit questionable is that your stated
requirements involve being able to pump 400MB/sec from the database
server to your various client machines (presumably those 400 people
aren't running their client apps directly on the DB server). What's the
network fabric going to be, again? Gigabit Ethernet won't cut it...
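The arithmetic behind that 400MB/sec figure: 400 clients each pulling roughly 1 MB per transaction adds up fast, while raw gigabit Ethernet tops out around 125 MB/s before protocol overhead. A sanity check (the per-client rate of 1 xact/sec is an assumption; client count and per-xact size come from the thread):

```python
# Required aggregate bandwidth vs. what one gigabit Ethernet link carries.
clients = 400
mb_per_xact = 1.0            # ~1 MB result set per transaction
xacts_per_client_sec = 1.0   # assumed per-client rate (illustrative)

required_mb_s = clients * mb_per_xact * xacts_per_client_sec

# Raw gigabit Ethernet: 1e9 bits/sec -> MB/sec, ignoring protocol overhead.
gige_mb_s = 1_000_000_000 / 8 / 1_000_000

print(f"required: {required_mb_s:.0f} MB/s, one GigE link: {gige_mb_s:.0f} MB/s")
print(f"GigE links needed (ignoring overhead): {required_mb_s / gige_mb_s:.1f}")
```

Even ignoring TCP and PostgreSQL protocol overhead, that workload needs several gigabit links' worth of fabric.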
regards, tom lane