On Fri, Dec 11, 2009 at 3:12 PM, Scott Carey <scott(at)richrelevance(dot)com> wrote:
>
> On 12/11/09 1:57 PM, "Scott Marlowe" <scott(dot)marlowe(at)gmail(dot)com> wrote:
>
>>
>> This is my big issue. Dropping / creating databases for unit tests is
>> overkill. Running any DDL at all for a unit test seems wrong to me
>> too. Insert a row if you need it, MAYBE. Unit tests should work with
>> a test database that HAS the structure and data already in place.
>>
>> What happens if your unit tests get loose in production and drop a
>> database, or a table? Not good.
>>
>
> Production should not have a db with the same username/pw combination as dev
> boxes and unit tests . . .
>
> Unfortunately, unit-like (often more than a 'unit') tests can't always rely
> on a test db already being set up. If one leaves any cruft around, it might
> break later tests non-deterministically. Automated tests that insert data
> are absolutely required somewhere if the application inserts data.
>
> The best way to do this in postgres is to create a template database from
> scratch with whatever DDL is needed at the start of the run, and then create
> and drop db's as copies of that template per test or test suite.
Debatable. At my last job we had 44k or so unit tests, and we gave each
dev their own db, made from the main qa / unit testing db, that they
could refresh at any time and run the unit tests against locally before
committing code. Actual failures like the one you mention were very
rare because of this approach. A simple "ant refresh-db" and they were
ready to test their code before committing it to the continuous
testing farm.
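
For what it's worth, the template approach Scott C. describes can be sketched roughly like this (the names app_template, schema.sql, and test_run_1 are placeholders, not anything from an actual setup):

```sql
-- Once per test run: build a template database containing all the DDL.
CREATE DATABASE app_template;
-- connect to app_template and load the schema, e.g. in psql:
--   \c app_template
--   \i schema.sql

-- Per test suite: clone the template, run the suite, drop the copy.
-- TEMPLATE clones are file-level copies, so this is fast and each
-- suite starts from a known-clean state.
CREATE DATABASE test_run_1 TEMPLATE app_template;
-- ... run the suite against test_run_1 ...
DROP DATABASE test_run_1;
```

One caveat: PostgreSQL refuses to clone a database while other sessions are connected to it, so the template has to be idle whenever a copy is created.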