http://weblogs.asp.net/nunitaddin/archive/2008/12/02/testdriven-net-2-18-nunit-2-5-beta.aspx

Jamie covers some of the new things in NUnit 2.5, which are pretty cool, but the one thing he omitted (and which I think is quite awesome) is the Assert.Throws<T>() assertion.

Previously (NUnit 2.4 and below), you would either have to use the [ExpectedException] attribute or implement the exception-handling logic yourself in your test. The issues with using the ExpectedException attribute are well known. I'm doing this without VS at hand, so forgive me if parts are wrong, but for example:


[Test]
[ExpectedException(typeof(ArgumentNullException))]
public void TestArgumentNullThrowsException()
{
    new MyObject(null);
}
  
  
[Test]
public void TestArgumentNullThrowsExceptionManually()
{
    bool thrown = false;

    try
    {
        new MyObject(null);
    }
    catch (ArgumentNullException)
    {
        // Optionally verify the exception's message here
        thrown = true;
    }
    // Any other exception type simply propagates and fails the test

    Assert.IsTrue(thrown, "Expected exception should be thrown, but was not");
}

Well, with NUnit 2.5 you can wrap the entire thing in a lambda, and the test case becomes so much simpler:


[Test]
public void TestArgumentNullExceptionIsThrown()
{
  Assert.Throws<ArgumentNullException>(() => new MyObject(null));
}
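One more nicety: unlike the attribute, Assert.Throws<T> returns the caught exception, so you can make further assertions on its details. Here's a minimal sketch; MyObject and its "name" parameter are made up for illustration, since I don't have the real class at hand:

```csharp
using System;
using NUnit.Framework;

// Hypothetical class under test: rejects a null constructor argument
public class MyObject
{
    public MyObject(string name)
    {
        if (name == null)
            throw new ArgumentNullException("name");
    }
}

[TestFixture]
public class MyObjectTests
{
    [Test]
    public void TestArgumentNullExceptionDetails()
    {
        // Assert.Throws<T> returns the exception it caught, so the
        // test can go on to assert on ParamName, Message, etc.
        var ex = Assert.Throws<ArgumentNullException>(() => new MyObject(null));
        Assert.AreEqual("name", ex.ParamName);
    }
}
```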

…And why is this good for TestDriven.NET? It means I can install NUnit 2.5 and actually use it with TD.NET, instead of using the GUI test runner.

I’ve been wondering recently whether it’s important to take a purist line, or whether to be practical about the way something is developed and the tools we use… In almost all cases I agree fundamentally with the purist point of view, but I’m a man of practicality, meaning practicality wins out most times… So why, then, do I second-guess myself this time around compared to any other time?

The situation I have is that I’m the lead maintainer of a legacy project. I can’t *really* call it legacy, because the client (who I know reads this blog, so I have to tip-toe around here ;)) invests money into updating their website and introducing new features. But as far as technologies go, ASP.NET 1.1 is well and truly considered legacy. The website was originally written with no unit testing… at all. In fact, I was lucky enough to introduce TDD, and unit testing in particular, to my company and at least forge a path in that direction.

After working on the project for nearly 18 months, and in that time introducing hundreds of unit tests, I’ve come to realise just how much of the test code I’d written was boilerplate Setup/Teardown of database content. Remember, this is a legacy project: although IoC/DI are great concepts, breaking dependencies on a large project such as this won’t happen at the snap of a finger, so some heavyweight unit testing is unavoidable if I want to do unit testing at all (which I do).

The kind of Setup/Teardown code I’m talking about is pretty intense:

  1. SETUP – Insert 3-4 records, plus their associated records
  2. SETUP – Update foreign keys where applicable
  3. SETUP – Precondition assertion that the data was inserted successfully
  4. TEARDOWN – Delete inserted records
  5. TEARDOWN – Postcondition assertion that the data used for testing was removed (i.e. maintaining the atomicity of tests)
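In fixture form, those steps look roughly like this. It's only a sketch: the real code talks to SQL, so here an in-memory FakeCustomerTable (an invented stand-in, as are the customer IDs) plays the part of the database to keep the example self-contained:

```csharp
using System.Collections.Generic;
using NUnit.Framework;

// Invented in-memory stand-in for a database table, purely to
// illustrate the Setup/Teardown shape; real code uses SQL inserts.
public class FakeCustomerTable
{
    private readonly HashSet<int> _rows = new HashSet<int>();
    public void Insert(int id) { _rows.Add(id); }
    public void Delete(int id) { _rows.Remove(id); }
    public bool Exists(int id) { return _rows.Contains(id); }
}

[TestFixture]
public class CustomerRepositoryTests
{
    private FakeCustomerTable _table;
    private int _customerId;

    [SetUp]
    public void SetUp()
    {
        _table = new FakeCustomerTable();

        // 1-2. Insert the records and wire up any foreign keys
        _customerId = 42;
        _table.Insert(_customerId);

        // 3. Precondition: the data really was inserted
        Assert.IsTrue(_table.Exists(_customerId));
    }

    [TearDown]
    public void TearDown()
    {
        // 4. Delete everything the test inserted
        _table.Delete(_customerId);

        // 5. Postcondition: the environment is pristine again
        Assert.IsFalse(_table.Exists(_customerId));
    }

    [Test]
    public void CustomerCanBeFound()
    {
        Assert.IsTrue(_table.Exists(_customerId));
    }
}
```

Multiply that ceremony across hundreds of tests and the maintenance cost adds up quickly.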

Yes, a lot of this common functionality is encapsulated in helper classes, but some unit tests are still turning out 30-40 lines long. The test code is becoming harder to maintain because there’s just so much of it, and although I’m not normally that anal about regions, I despise the thought of hiding code inside a method with them (here’s one thing the purist part of me won’t relent on :)). Worse yet, because of all the transactional overhead from inserting/rolling back and pre/post-checking, I’m sure our test-run times are suffering too (probably only on the order of 10-20 seconds, but the purist argument is that tests should be as fast as possible).

So my solution? Generate a dedicated database intended to be in a known state for the test fixtures. Then, if I need to prepare the database for some tests, I only need to insert data into the testing database and that’s that. My tests are no longer responsible for setting up a stack of test data and ensuring it’s perfectly clean afterward. I could hire a new junior developer tomorrow, and it wouldn’t have disastrous side-effects if his unit tests didn’t sweep up after themselves (because they wouldn’t necessarily need to!)

We did this at my last job, and the approach worked pretty well from a practical standpoint; however, I always felt that it violated the purist view that a unit test is:

  1. atomic
  2. responsible for leaving the test environment in the pristine condition it was given in
  3. free of assumptions about the operating environment of the machine it runs on

The first two points I’ve talked to death about. It’s the third one which I’ve either made up in my head to rob myself of sleep at night, or which is genuinely sound advice: that your test should be self-contained and not rely on anything being pre-configured within its environment in order to do whatever it needs to do. Which, quite frankly, is a right royal PITA.

So I’m not happy about it… I’ve concluded (again) that practicality will win out, and that having this database is not such a terrible thing, because the long-term benefits it provides, such as faster test execution times and (more importantly) cleaner test code, far outweigh any unjustified idealism I may or may not hold.

I’ll do it, but I’ll be arguing with myself about it for a while to come yet…