| From: | Richard Huxton <dev(at)archonet(dot)com> |
|---|---|
| To: | Wei Weng <wweng(at)kencast(dot)com>, pgsql-sql(at)postgresql(dot)org |
| Subject: | Re: concurrent connections is worse than serialization? |
| Date: | 2002-08-14 09:18:38 |
| Message-ID: | 200208141018.38875.dev@archonet.com |
| Lists: | pgsql-sql |
On Tuesday 13 Aug 2002 9:39 pm, Wei Weng wrote:
> I have a testing program that uses 30 concurrent connections
> (max_connections = 32 in my postgresql.conf) and each does 100
> insertions into a simple table with an index.
>
> It took me approximately 2 minutes to finish all of them.
>
> But under the same environment (after "delete From test_table, and vacuum
> analyze"), I then queued up all those 30 connections one after another
> (serialized) and it took only 30 seconds to finish.
>
> Why is it that the performance of concurrent connections is worse than
> serializing them into one?
What was the limiting factor during the test? Was it the CPU that was maxed
out, the memory, or disk I/O?
I take it the insert really *is* simple - no dependencies etc.
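For reference, the concurrent pattern you describe would look roughly like this
in plain libpq with pthreads (a sketch only - the connection string, table
layout and column names below are guesses, not taken from your setup):

    /* Sketch of the concurrent case: one PGconn per thread, 100 INSERTs each.
     * The conninfo string and the test_table schema are assumptions. */
    #include <libpq-fe.h>
    #include <pthread.h>
    #include <stdio.h>

    #define NTHREADS 30
    #define NINSERTS 100

    static void *worker(void *arg)
    {
        (void) arg;
        PGconn *conn = PQconnectdb("dbname=test");  /* hypothetical conninfo */
        if (PQstatus(conn) != CONNECTION_OK)
        {
            fprintf(stderr, "connect failed: %s", PQerrorMessage(conn));
            PQfinish(conn);
            return NULL;
        }
        for (int i = 0; i < NINSERTS; i++)
        {
            /* assumed table: test_table(val int) with an index on val */
            PGresult *res = PQexec(conn,
                                   "INSERT INTO test_table (val) VALUES (1)");
            if (PQresultStatus(res) != PGRES_COMMAND_OK)
                fprintf(stderr, "insert failed: %s", PQerrorMessage(conn));
            PQclear(res);
        }
        PQfinish(conn);
        return NULL;
    }

    int main(void)
    {
        pthread_t tid[NTHREADS];

        for (int i = 0; i < NTHREADS; i++)
            pthread_create(&tid[i], NULL, worker, NULL);
        for (int i = 0; i < NTHREADS; i++)
            pthread_join(tid[i], NULL);
        return 0;
    }

Timing something like that against the same loop run from a single thread would
at least tell you whether the slowdown is in the server or in the scripting
engine.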
> I was testing them using our own (proprietary) scripting engine and the
> extension library that supports postgresql serializes the queries by
> simply locking when a query manipulates a PGconn object and unlocking
> when it is done. (And similarly, it creates a PGconn object on the
> stack for each concurrent query.)
I assume you've ruled the application end of things out.
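One application-side thing worth checking: if that lock is a single global
mutex rather than one per PGconn, then all 30 clients end up taking turns
anyway and you pay the locking and per-connection overhead on top. A sketch of
that pattern, purely to illustrate what to rule out (again assuming C with
libpq and pthreads; the names are made up):

    /* Sketch only: a single global lock around every query would serialise
     * all 30 "concurrent" clients on the application side.  Connection
     * handling omitted; query_lock and locked_exec are illustrative names. */
    #include <libpq-fe.h>
    #include <pthread.h>
    #include <stdio.h>

    static pthread_mutex_t query_lock = PTHREAD_MUTEX_INITIALIZER;

    void locked_exec(PGconn *conn, const char *sql)
    {
        pthread_mutex_lock(&query_lock);    /* every thread queues up here */
        PGresult *res = PQexec(conn, sql);
        if (PQresultStatus(res) != PGRES_COMMAND_OK)
            fprintf(stderr, "query failed: %s", PQerrorMessage(conn));
        PQclear(res);
        pthread_mutex_unlock(&query_lock);
    }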
- Richard Huxton