Mathias Brandewinder on .NET, F#, VSTO and Excel development, and quantitative analysis / machine learning.
by Mathias 22. March 2014 13:11

A couple of days ago, I got into the following Twitter exchange:

 

So why do I think FsCheck + XUnit = The Bomb?

I have a long history with Test-Driven Development; to this day, I consider Kent Beck’s “Test-Driven Development by Example” one of the biggest influences in the way I write code (any terrible code I might have written is, of course, to be blamed entirely on me, and not on the book).

In classic TDD style, you typically proceed by writing incremental test cases which match your requirements, progressively writing the code that satisfies them. Let’s illustrate with an example, a password strength validator. Suppose that my requirement is “a password must be at least 8 characters long to be valid”. Using XUnit, I would probably write something along these lines:

namespace FSharpTests

open Xunit
open CSharpCode

module ``Password validator tests`` =

    [<Fact>]
    let ``length above 8 should be valid`` () =
        let password = "12345678"
        let validator = Validator ()
        Assert.True(validator.IsValid(password))

… and in the CSharpCode project, I would then write the dumbest minimal implementation that could pass that requirement, that is:

public class Validator
{
    public bool IsValid(string password)
    {
        return true;
    }
}

Next, I would write a second test, to verify the obvious negative:

namespace FSharpTests

open Xunit
open CSharpCode

module ``Password validator tests`` =

    [<Fact>]
    let ``length above 8 should be valid`` () =
        let password = "12345678"
        let validator = Validator ()
        Assert.True(validator.IsValid(password))

    [<Fact>]
    let ``length under 8 should not be valid`` () =
        let password = "1234567"
        let validator = Validator ()
        Assert.False(validator.IsValid(password))

This fails, producing the following output in Visual Studio:

[Screenshot: the Visual Studio test runner, showing the new test failing]

… which forces me to fix my implementation, for instance like this:

public class Validator
{
    public bool IsValid(string password)
    {
        if (password.Length < 8)
        {
            return false;
        }

        return true;
    }
}

Let’s pause here for a couple of remarks. First, note that while my tests are written in F#, the code base I am testing against is in C#. Mixing the two languages in one solution is a non-issue. Then, after years of writing C# test cases with names like Length_Above_8_Should_Be_Valid, and arguing whether this was better or worse than LengthAbove8ShouldBeValid, I find that having the ability to simply write “length above 8 should be valid”, in plain old English (and seeing my tests show up that way in the test runner as well), is pleasantly refreshing. For that reason alone, I would encourage F#-curious C# developers to try out writing tests in F#; it’s a nice way to get your toes in the water, and has neat advantages.

But that’s not the main point I am interested in here. While this process works, it is not without issues. From a single requirement, “a password must be at least 8 characters long to be valid”, we ended up writing 2 test cases. First, the cases we ended up with are somewhat arbitrary, and don’t fully reflect the requirement itself: I only tested two instances, one 7 characters long and one 8 characters long. This really relies on my ability as a developer to identify “interesting cases” in a vast universe of possible passwords, hoping that I happened to cover sufficient ground.

This is where FsCheck comes in. FsCheck is a port of Haskell’s QuickCheck, a property-based testing framework. The term “property” is somewhat overloaded, so let’s clarify: what “Property” means in that context is a property of our program that should be true, in the same sense as mathematically, a property of any number x is “x * x is positive”. It should always be true, for any input x.

Install FsCheck via NuGet, as well as the FsCheck XUnit extension; you can now write tests that verify properties by marking them with the [<Property>] attribute instead of [<Fact>], and the XUnit test runner will pick them up as normal tests. For instance, taking our example from right above, we can write:

namespace FSharpTests

open Xunit
open FsCheck
open FsCheck.Xunit
open CSharpCode

module Specification =

    [<Property>]
    let ``square should be positive`` (x:float) = 
        x * x > 0.

Let’s run that – fail. If you click on the test results, here is what you’ll see:

[Screenshot: the test runner output for the failing property, showing the counter-example found by FsCheck]

FsCheck found a counter-example, 0.0. Oops! Our specification is incorrect here: the square doesn’t have to be strictly positive, it could be zero. This is an obvious mistake; let’s fix the test, and get on with our lives:

[<Property>]
let ``square should be positive`` (x:float) = 
    x * x >= 0.
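
This same mechanism addresses the concern with the password validator above: instead of hand-picking a 7-character and an 8-character example, the requirement itself can be stated as a property. Purely as a sketch (my illustration, not the post’s code; the module and test names are made up), reusing the Validator class from the C# project above, it could look along these lines:

namespace FSharpTests

open FsCheck.Xunit
open CSharpCode

module ``Password validator properties`` =

    [<Property>]
    let ``validity should match the length requirement`` (candidate:string) =
        // FsCheck will also generate null strings; the requirement only
        // concerns actual passwords, so we simply accept those cases.
        match candidate with
        | null -> true
        | _ ->
            let validator = Validator ()
            // valid exactly when the "at least 8 characters" rule says so
            validator.IsValid(candidate) = (candidate.Length >= 8)

Instead of the two cases I picked by hand, FsCheck will by default generate 100 random strings and report any counter-example it finds.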

More...

by Mathias 4. December 2011 15:47

I am in the middle of “Working Effectively with Legacy Code”, and am finding it every bit as great as it was said to be. In the book, Feathers introduces the concepts of Seams and Enabling Points:

a Seam is a place where you can alter behavior in your program without editing it in that place

every seam has an enabling point, a place where you can make the decision to use one behavior or another.

The idea - as I understand it - is that an enabling point is a hook for testability, a place where you can replace the behavior of a piece of code with your own controlled behavior, and validate that the results are as expected.

The reason I am bringing this up is that I have been writing lots of F# lately, and it made me realize that a functional style provides lots of enabling points, and can be much easier to test than object-oriented code.

Here is a simplified, but representative, example of the problem I was looking at: I needed to pick a random item in a list. In C#, a method along these lines would do the job:

public T PickFrom<T>(IList<T> list)
{
   var random = new Random();
   return list[random.Next(list.Count)];
}

However, this code is utterly untestable; it’s also probably a terrible idea to instantiate a new Random every time this is called, so we modify it this way:

public T PickFrom<T>(IList<T> list, Random random)
{
   return list[random.Next(list.Count)];
}

This is much better: now we have a decent Enabling Point, because the list of arguments of the method contains everything that is used inside the method. However, this is still untestable, for a different reason: by definition, Random.Next() will return different values every time PickFrom is called, and expecting a repeatable result from PickFrom is a bit of a desperate enterprise.
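
The rest of the post is behind the link below; as a rough illustration of where this is going (my sketch, not the post’s code, with made-up names), passing the picking behavior itself in as a function gives the test a natural enabling point:

// The picking behavior is injected as a function argument,
// so a test can substitute a deterministic picker.
let pickFrom (picker: int -> int) (list: 'a list) =
    list.[picker (List.length list)]

// Production code injects a genuinely random picker...
let random = System.Random()
let randomPick list = pickFrom (fun count -> random.Next(count)) list

// ... while a test injects a fixed one and can assert an exact result.
let picked = pickFrom (fun _ -> 2) [ "a"; "b"; "c"; "d" ] // "c"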

More...

by Mathias 13. November 2011 11:03

Petar and I were doing some pair-programming recently, Test-Driven Development style, and started talking about how figuring out where to begin with the tests is often the hardest part. Petar noticed that when writing a test, I was typically starting at the end, first writing an Assert and then coding my way backwards through the test – and that it helped get things started.

I hadn’t realized I was doing it, and suspected it was coming from Kent Beck’s “Test-Driven Development, by Example”. Sure enough, the Patterns section of the book lists the following:

Assert First. When should you write the asserts? Try writing them first.

So why would this be a good idea?

I think the reason it works well is that it helps focus the effort on a single goal at a time, and requires clarifying what that goal is. Starting with the Assert forces you to imagine the single fact that should be true once you have implemented the feature, and to think about how you are going to verify that the feature is indeed working.

Once the Assert is in place, you can write the story backwards: what is the method that was called to get the result being checked, and where does it belong? What classes and setup are required to make that method call? And, now that the story is written, what is it really saying, and what should the test method name be?

In other words, begin with the Assert, figure out the Act part, Arrange the actors, and (re)name the test method.
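
As a small illustration (my example, not the post’s; the Validator class and its IsValid method are stand-ins for whatever is under test), the finished test reads top to bottom as Arrange, Act, Assert, while the numbered comments show the order in which the lines were actually written:

[<Fact>]
let ``password of 8 characters should be valid`` () = // 4. name the test after the story it now tells
    // 3. Arrange the actors the call needs
    let validator = Validator ()
    let password = "12345678"
    // 2. Act: the call that produces the result being checked
    let result = validator.IsValid(password)
    // 1. Assert: the single fact that should be true, written first
    Assert.True(result)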

I think what trips some people up is that while a good test will read like a little story, progressing from a beginning to a logical end, the process leading to it runs in the completely opposite direction. Kent Beck points out the Self-Similarity in the entire process: write stories which describe what the application will do once done, write tests which describe what the feature does once the code is implemented, and write asserts which will pass once the test is complete. Always start with the end in mind, and do exactly what it takes to achieve your goal.


by Mathias 25. September 2011 05:31


It’s that time of the year again: on Saturday & Sunday, October 8 & 9, Silicon Valley CodeCamp is taking place at Foothill College in Los Altos Hills. There are currently over 200 sessions listed, and 2,000 people signed up already. I am expecting lots of fun - again.

I’ll be giving 3 talks this year:

  • Sat, 11:45: Beginning TDD for C# Developers
  • Sun, 1:15: For Those About to Mock
  • Sun, 2:45: An excursion in F#

Hope to see you there, and also that I will have some energy left to attend some of the other talks!

More information about my talks here.

by Mathias 13. February 2011 13:46

I have been a fan of Test-Driven Development for a long time; it has helped me write better code and keep my sanity more than once. However, until now, I haven’t really looked into Behavior-Driven Development, even though I have often heard it described as a natural next step from TDD. A recent piece in MSDN Magazine re-ignited my interest, and helped me figure out one point I had misunderstood, namely how BDD and TDD fit together, so I started looking into existing frameworks.

Most of them follow a similar approach: describe the feature in a plain-text file written in Gherkin, a human-readable “feature description” language, let the framework generate test stubs which map to the story, and progressively fill in the stubs as the feature gets implemented.

I am probably (too) used to writing tests as code, but something about the idea of starting from text files just doesn’t feel right to me. I understand the appeal of Gherkin as a platform-independent specification language, and of letting the product owner write specifications – but the thought of having to maintain two sets of files (the features and the actual tests) worries me. I may warm up to it in due time, but in the meantime I came across StoryQ, a framework which felt much easier for me to adopt.

StoryQ is a tiny dll which lets you write stories as tests in C#, using a fluent interface, with all the comfort and safety of strong typing and IntelliSense; Gherkin stories can be produced as an output of the tests, and a separate utility allows you to create code templates from Gherkin.

Rather than talk about it, let’s see a quick code example. I have a regular NUnit TestFixture with one Test, which represents a Story I am interested in: when I pay the check at the restaurant, I need to add a tip to the check. There are 2 scenarios I am interested in: when I am happy with the service, I’ll tip a nice 20%, but when I am not, there will be zero tip. This is how it could look in StoryQ:

using NUnit.Framework;
using StoryQ;

[TestFixture]
public class CalculateTip
{
   [Test]
   public void CalculatingTheTip()
   {
      new Story("Calculating the Tip")
      .InOrderTo("Pay the check")
      .AsA("Customer")
      .IWant("Add tip to check")

      .WithScenario("Unhappy with service")
      .Given(CheckTotalIs, 100d)
      .When(IAmHappyWithService, false)
      .Then(TipShouldBe, 0d)

      .WithScenario("Happy with service")
      .Given(CheckTotalIs, 100d)
      .When(IAmHappyWithService, true)
      .Then(TipShouldBe, 20d)

      .Execute();
   }

   public double CheckTotal { get; set; }

   public bool IsHappy { get; set; }

   public void CheckTotalIs(double total)
   {
      this.CheckTotal = total;
   }

   public void IAmHappyWithService(bool isHappy)
   {
      this.IsHappy = isHappy;
   }

   public void TipShouldBe(double expectedTip)
   {
      var tip = TipCalculator.Tip(CheckTotal, IsHappy);
      Assert.AreEqual(expectedTip, tip);
   }
}

(The TipCalculator class is a simple class I implemented on the side).

This test can now be run just like any other NUnit test; when I ran this with ReSharper within Visual Studio, I immediately saw the output below. Pretty nice, I say.

[Screenshot: the StoryQ test output in the ReSharper test runner]

What I liked so far

  • Painless transition for someone used to TDD. For someone like me, who is used to writing unit tests within Visual Studio, this is completely straightforward: no new language to learn, a process pretty similar to what I am used to – a breeze.
  • Completely smooth integration with NUnit and ReSharper: no plugin to install, no tweaks, it just worked.
  • Fluent interface: the fluent interface provides guidance as you write the story, and hints at what steps are expected next.
  • Passing arguments: I like the API for expressing the Given/When/Then steps. Passing arguments feels very natural.
  • XML report: I have not played much with it yet, but there is an option to produce an XML file with the results of the tests, which should work well with a continuous integration server.

What I didn’t like that much

  • Execute: at some point I inadvertently deleted the .Execute() call at the end of the Story, and it took me a while to figure out why all my tests were passing, but no output was produced. More generally, I would have preferred something like Verify(), which seems clearer, but that’s nitpicking.
  • Multiple scenarios in one test: once I figured out that I could chain multiple scenarios in one story, I was a happy camper, but all the examples I saw on the project page have one story / one scenario per test method. It’s only when I used the WPF story converter that I realized I could do this.
  • Crash of the WPF converter: the converter is awesome – but the first time I ran it, it crashed.

So where do I go from here? So far, I have really enjoyed playing with StoryQ – enough that I want to give it a go on a real project. I expect that the path to getting comfortable with BDD will be similar to TDD: writing lots of tests, some of them fairly bad, until over time a certain feeling for what’s right or very wrong develops… And in spite of my reservations, I remain skeptical but curious (after all, I have been known to be wrong sometimes…), so I also plan to give SpecFlow a try.
