From: Andrew Dunstan <andrew(at)dunslane(dot)net>
To: "David E(dot) Wheeler" <david(at)kineticode(dot)com>
Cc: Jeff Janes <jeff(dot)janes(at)gmail(dot)com>, David Fetter <david(at)fetter(dot)org>, pgsql-hackers(at)postgresql(dot)org
Subject: Re: Concurrency testing
Date: 2009-10-08 02:07:29
Message-ID: 4ACD4961.1050705@dunslane.net
Lists: pgsql-hackers
David E. Wheeler wrote:
> On Oct 7, 2009, at 5:18 PM, Jeff Janes wrote:
>
>> I'd much rather live without Test::More and use DBD::Pg than have
>> Test::More but need to open pipes to psql to talk to the database,
>> rather than using DBI to do it. But I guess we would need to worry
>> about whether we can make DBD::Pg work with the installation being
>> tested, rather than finding some other install.
>
> The test architecture depends on Perl, but not on the DBI. I don't
> think that Andrew wants to add any dependencies. Therefore we'd need
> to use file handles. That's not to say that we couldn't write a nice
> little interface for it such that the implementation could later change.
Well, that's true of the buildfarm. And there are reasons I don't want
to use DBI for the buildfarm, mainly to do with the buildfarm's intended
role of simulating what a human would do by hand.
What we do for the core testing framework is a different question.
Nevertheless, a requirement for DBI and DBD::Pg would be a significant
escalation of testing prerequisites. Test::More is a comparatively
modest requirement, and is fairly universal wherever Perl is installed.
And since we'd just be using it to drive psql, we wouldn't have to
decide whether a problem we saw was in Postgres or in DBD::Pg.
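For concreteness, here is a minimal sketch of that approach, assuming
only Test::More, a psql binary on the PATH, and PG* environment
variables pointing at the installation under test. The run_sql helper
is hypothetical, not part of any existing framework, and a real harness
would keep one psql session open rather than spawning one per query:

    use strict;
    use warnings;
    use Test::More tests => 2;
    use IPC::Open2;

    # Feed one statement to psql over a pipe and return its stdout.
    # -q quiet, -A unaligned, -t tuples only, -X skip psqlrc.
    sub run_sql {
        my ($sql) = @_;
        my ($out, $in);
        my $pid = open2($out, $in, 'psql', '-qAtX');
        print $in "$sql\n";
        close $in;
        my $result = do { local $/; <$out> };   # slurp child output
        waitpid $pid, 0;
        chomp $result if defined $result;
        return $result;
    }

    is(run_sql('SELECT 1 + 1'), '2', 'psql answers simple arithmetic');
    like(run_sql('SELECT version()'), qr/PostgreSQL/,
         'talking to a PostgreSQL server');

The point is just that the Test::More plumbing and the transport to the
server are separable, so the pipe-to-psql transport could be swapped out
later without rewriting the tests themselves.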
If we want something to drive a huge number of clients, I suspect
neither of these is a good way to go, and something more custom-built
would be required. The last time I built something to drive a huge
client load (many thousands of simultaneous connections to a web app),
I did it in highly threaded Java using HttpUnit from a number of
separate client machines. You wouldn't believe what that managed to do
to MySQL on the backend ;-)
cheers
andrew