27. May 2009 13:36
The iterative nature of writing code inevitably involves adding code which is good enough for now, but should be refactored later. The problem is that unless you have some system in place, later, you will simply forget about it. Personally, I have tried three approaches to address this: bug-tracking systems; the good old-fashioned text to-do list and its variant, the task list built into Visual Studio; and finally, comments embedded in the code.
Each has its pros and cons. Bug-tracking systems are great for systematically managing work items in a team (prioritization, assignment to various members...), but work best for items at the level of a feature: in my experience, smaller code changes don't fit well. I am a big fan of the bare-bones text-file to-do list; I tried, but never took to, the Visual Studio task list (no clear reason there). I hardly ever embed comments in code anymore (like "To-do: change this later"): on the plus side, the comment is literally tacked to the code that needs changing, but the comments cannot all be displayed as one list, which makes them too easy to forget.
Today I found a cool alternative via Donn Felker’s blog: #warning. You use it essentially like a comment, but preface it with #warning, like this:
#warning The tag name should not be hardcoded
XmlNodeList atBatNodes = document.GetElementsByTagName("atbat");
Now, when you build, you will see something like this in Visual Studio:
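(The original post shows a screenshot of the Visual Studio Error List here. The compiler surfaces the directive's text as warning CS1030, roughly along these lines:)

```
warning CS1030: #warning: 'The tag name should not be hardcoded'
```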
It has all the benefits of the embedded comment – it’s close to the code that needs to be changed – but it also shows up as a list that is in your face every time you build. I’ll try that out, see how it goes, and see what stays in the todo.txt!
17. May 2009 02:25
Another nice improvement coming with NUnit 2.5 is the mechanism for data-driven tests. NUnit 2.4.7 included an extension by Andreas Schlapsi which made it possible to write row tests, using the [TestRow] attribute.
NUnit 2.5 eases the process with the [TestCase] attribute. Unlike [TestRow], the [TestCase] attribute is available within the NUnit.Framework namespace, and doesn’t require including additional references.
Why do data-driven tests matter? They are not technically necessary: you can write the same tests just as easily using the standard [Test] attribute. However, they come in handy when you are testing a feature where you want to verify the behavior for multiple combinations of input values. Using “classic” unit tests, you will end up duplicating test code, and you will have to find different names for test methods which are in essence the same test.
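To make the duplication concrete, here is a sketch of what the “classic” version of the division tests further below would look like – two nearly identical methods, differing only in their inputs and names:

```csharp
[Test]
public void DivideTwoAndAHalfByTwo()
{
    var myClass = new MyClass();
    Assert.AreEqual(1.25d, myClass.Divide(2.5d, 2d));
}

[Test]
public void DivideMinusTwoAndAHalfByOne()
{
    var myClass = new MyClass();
    Assert.AreEqual(-2.5d, myClass.Divide(-2.5d, 1d));
}
```

Each new input combination requires yet another method, and yet another invented name.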
Using [TestCase] instead, here is how it looks. Suppose your class MyClass has a method “Divide” like this one:
public class MyClass
{
    public double Divide(double numerator, double denominator)
    {
        if (denominator == 0)
        {
            throw new DivideByZeroException("Cannot divide by zero.");
        }

        return numerator / denominator;
    }
}
One way to test that feature would be with a test like this one:
[TestCase(2.5d, 2d, Result = 1.25d)]
[TestCase(-2.5d, 1d, Result = -2.5d)]
public double ValidateDivision(double numerator, double denominator)
{
    var myClass = new MyClass();
    return myClass.Divide(numerator, denominator);
}
14. May 2009 01:10
Update: here is the event page.
Microsoft is hosting a Windows Live Messenger Hackathon on the 27th May 2009 at Microsoft's offices on 835 Market St in San Francisco. The event starts at 5:30, there will be a discussion on social media, how to use the Windows Live API in your website, a coding contest, prizes, free beer and pizza… Sounds like fun!
10. May 2009 23:24
Compared to the recent releases, NUnit 2.5 contains quite a few significant changes. One notable change is in the area of exception testing – and it is for the better.
Until NUnit 2.4.8, testing for exceptions was done by decorating tests with the [ExpectedException] attribute. NUnit 2.5 introduces a new assertion instead, Assert.Throws.
Why is this better?
I see at least 2 reasons: it makes it much easier to catch the exception precisely where it is expected to happen, and working with the exception itself is now a breeze.
While the ExpectedException attribute is typically sufficient, it is a bit of a blunt tool. What it does verify is that the code under test throws the expected type of exception; what it doesn’t tell you is where. The tests are not fully explicit, and sometimes, they can result in unexpected “false positives”.
Consider the following situation: a class MyClass exposes a method that takes a positive int as argument, and throws an ArgumentException if a negative is passed. A test for this behavior would look something like this:
[Test]
[ExpectedException(typeof(ArgumentException))]
public void FalsePositive()
{
    MyClass myClass = new MyClass();
    // This call is test setup, but will
    // throw an unwanted ArgumentException.
    myClass.SetValue(-1); // SetValue: hypothetical method under discussion
    // We want to check that this call throws.
    myClass.SetValue(-1);
}
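With Assert.Throws, by contrast, the assertion wraps exactly the call that is expected to throw, so an exception thrown during setup fails the test instead of silently passing it; and the assertion returns the exception, so its contents can be inspected. A sketch, using the same hypothetical SetValue method (the message check is purely illustrative):

```csharp
[Test]
public void NoFalsePositive()
{
    MyClass myClass = new MyClass();
    // Only the call inside the lambda is allowed to throw;
    // an exception anywhere else fails the test.
    var exception = Assert.Throws<ArgumentException>(
        () => myClass.SetValue(-1));
    // The exception object is returned, so working
    // with it is straightforward.
    Assert.That(exception.Message, Is.Not.Empty);
}
```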