Mathias Brandewinder on .NET, F#, VSTO and Excel development, and quantitative analysis / machine learning.
by Mathias 22. March 2014 13:11

A couple of days ago, I got into the following Twitter exchange:


So why do I think FsCheck + XUnit = The Bomb?

I have a long history with Test-Driven Development; to this day, I consider Kent Beck’s “Test-Driven Development by Example” one of the biggest influences in the way I write code (any terrible code I might have written is, of course, to be blamed entirely on me, and not on the book).

In classic TDD style, you typically proceed by writing incremental test cases which match your requirements, and progressively write the code that will satisfy the requirements. Let’s illustrate with an example: a password strength validator. Suppose my requirement is “a password must be at least 8 characters long to be valid”. Using XUnit, I would probably write something along these lines:

namespace FSharpTests

open Xunit
open CSharpCode

module ``Password validator tests`` =

    [<Fact>]
    let ``length above 8 should be valid`` () =
        let password = "12345678"
        let validator = Validator ()
        Assert.True(validator.IsValid(password))

… and in the CSharpCode project, I would then write the dumbest minimal implementation that could pass that requirement, that is:

public class Validator
{
    public bool IsValid(string password)
    {
        return true;
    }
}

Next, I would write a second test, to verify the obvious negative:

namespace FSharpTests

open Xunit
open CSharpCode

module ``Password validator tests`` =

    [<Fact>]
    let ``length above 8 should be valid`` () =
        let password = "12345678"
        let validator = Validator ()
        Assert.True(validator.IsValid(password))

    [<Fact>]
    let ``length under 8 should not be valid`` () =
        let password = "1234567"
        let validator = Validator ()
        Assert.False(validator.IsValid(password))

This fails, producing the following output in Visual Studio:

[Screenshot: the Visual Studio test runner, showing the new test failing]

… which forces me to fix my implementation, for instance like this:

public class Validator
{
    public bool IsValid(string password)
    {
        if (password.Length < 8)
        {
            return false;
        }

        return true;
    }
}

Let’s pause here for a couple of remarks. First, note that while my tests are written in F#, the code base I am testing against is in C#. Mixing the two languages in one solution is a non-issue. Then, after years of writing C# test cases with names like Length_Above_8_Should_Be_Valid, and arguing whether this was better or worse than LengthAbove8ShouldBeValid, I find that being able to simply write “length above 8 should be valid”, in plain English (and seeing my tests show up that way in the test runner as well), is pleasantly refreshing. For that reason alone, I would encourage F#-curious C# developers to try writing their tests in F#; it’s a nice way to get your toes in the water, and it has neat advantages.

But that’s not the main point I am interested in here. While this process works, it is not without issues. From a single requirement, “a password must be at least 8 characters long to be valid”, we ended up writing 2 test cases. And the cases we ended up with are somewhat arbitrary, and don’t fully reflect what the requirement says: I only tested two instances, one 7 characters long, one 8 characters long. This relies entirely on my ability as a developer to identify “interesting cases” in a vast universe of possible passwords, hoping that I happened to cover sufficient ground.

This is where FsCheck comes in. FsCheck is a port of Haskell’s QuickCheck, a property-based testing framework. The term “property” is somewhat overloaded, so let’s clarify: a “property” in this context is a statement about our program that should always be true, in the same sense that, mathematically, a property of any number x is “x * x is positive”: it should hold for any input x.

Install FsCheck via NuGet, as well as the FsCheck XUnit extension; you can now write tests that verify properties by marking them with the [<Property>] attribute instead of [<Fact>], and the XUnit test runner will pick them up as normal tests. For instance, taking our example from right above, we can write:

namespace FSharpTests

open Xunit
open FsCheck
open FsCheck.Xunit
open CSharpCode

module Specification =

    [<Property>]
    let ``square should be positive`` (x:float) = 
        x * x > 0.

Let’s run that – fail. If you click on the test results, here is what you’ll see:

[Screenshot: FsCheck test output, reporting the counter-example it found]

FsCheck found a counter-example, 0.0. Ooops! Our specification is incorrect here, the square value doesn’t have to be strictly positive, and could be zero. This is an obvious mistake, let’s fix the test, and get on with our lives:

[<Property>]
let ``square should be positive`` (x:float) = 
    x * x >= 0.
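
Coming back to the password validator, the same idea applies. Here is a rough sketch (not from the original post) of how the requirement could be expressed as properties against the Validator class from earlier; instead of hand-picking a 7- and an 8-character password, we let FsCheck generate arbitrary strings and shape them so they fall on either side of the 8-character boundary:

namespace FSharpTests

open FsCheck
open FsCheck.Xunit
open CSharpCode

module ``Password validator properties`` =

    [<Property>]
    let ``any password padded to at least 8 characters should be valid`` (s:string) =
        // pad whatever FsCheck generates so that it always meets the length requirement
        let password = (if s = null then "" else s).PadLeft(8, '*')
        let validator = Validator ()
        validator.IsValid(password)

    [<Property>]
    let ``any password truncated below 8 characters should not be valid`` (s:string) =
        // truncate whatever FsCheck generates so that it always falls short of 8 characters
        let text = if s = null then "" else s
        let password = text.Substring(0, min text.Length 7)
        let validator = Validator ()
        not (validator.IsValid(password))

The padding and truncation are just one way to massage generated inputs; the point is that the two properties restate the requirement itself, rather than a couple of hand-picked examples.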


by Mathias 19. October 2013 10:33

A couple of weeks ago, I had the pleasure to attend Progressive F# Tutorials in NYC. The conference was fantastic – two days of hands-on workshops, great organization by the good folks at SkillsMatter, Rickasaurus and Paul Blasucci, and a great opportunity to exchange with like-minded people, catch up with old friends and make new ones.

As an aside, if you missed NYC, fear not – you can still get tickets for Progressive F# Tutorials in London, coming up October 31 and November 1.

After some discussion with Phil Trelford, we decided it would be a lot of fun to organize a workshop around PacMan. Phil has a long history with game development, and a lot of wisdom to share on the topic. I am a total n00b as far as game programming goes, but I thought PacMan would make a fun theme to hack some AI, so I set out to refactor some of Phil’s old code and transform it into a “coding playground” where people could tinker with how PacMan and the Ghosts behave, and make them smarter.

Long story short, the refactoring exercise turned out to be a bit more involved than I had initially anticipated. First, games are written in a style which is pretty different from your run-of-the-mill business app, and getting familiar with a code base written in such an unfamiliar style wasn’t trivial.

So here I am, trying to refactor that unfamiliar and somewhat idiosyncratic code base, and I start hitting stuff like this:

let ghost_starts = 
     [
         "red", (16, 16), (1,0)
         "cyan", (14, 16), (1,0)
         "pink", (16, 14), (0,-1)
         "orange", (18, 16), (-1,0)
     ]
     |> List.map (fun (color,(x, y), v) -> 
         // some stuff happens here
         { … X = x * 8 - 7; Y = y * 8 - 3; V = v; … }
     )

This is where I begin to get nervous. I need to get this done quickly, and factor out functions, but I am really worried about touching any of this. What’s X and Y? Why 8, 7 or 3?
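
Purely as an illustration (this is not from the PacMan code base, and the names below are only my guesses at what the constants mean), the first timid step I would consider is giving those magic numbers names before extracting anything:

// hypothetical names: from context, 8 looks like a tile size in pixels,
// and 7 and 3 like offsets of a sprite within its tile
let tileSize = 8
let spriteOffsetX = 7
let spriteOffsetY = 3

let toScreen (x, y) =
    x * tileSize - spriteOffsetX, y * tileSize - spriteOffsetY

Even if the guesses turn out to be wrong, at least the assumed intent becomes something that can be discussed, tested, and corrected.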


by Mathias 26. January 2013 19:08

Phil Trelford recently released Foq, a small F# mocking library (with a very daring name). If most of your code is in F#, this is probably not a big deal for you, because the technique of mocking isn’t very useful in F# (at least in my experience). On the other hand, if your goal is to unit test some C# code in F#, then Foq comes in very handy.

So why would you want to write your unit tests in F# in the first place?

Let’s start with some plain old C# code, like this:

namespace CodeBase
{
    using System;

    public class Translator
    {
        public const string ErrorMessage = "Translation failure";

        private readonly ILogger logger;
        private readonly IService service;

        public Translator(ILogger logger, IService service)
        {
            this.logger = logger;
            this.service = service;
        }

        public string Translate(string input)
        {
            try
            {
                return this.service.Translate(input);
            }
            catch (Exception exception)
            {
                this.logger.Log(exception);
                return ErrorMessage;
            }
        }
    }

    public interface ILogger
    {
        void Log(Exception exception);
    }

    public interface IService
    {
        string Translate(string input);
    }
}

We have a class, Translator, which takes 2 dependencies, a logger and a service. The main purpose of the class is to Translate a string, by calling the service. If the call succeeds, we return the translation; otherwise we log the exception and return an arbitrary error message.

This piece of code is very simplistic, but illustrates well the need for Mocking. If I want to unit test that class, there are 3 things I need to verify:

  • when the translation service succeeds, I should receive whatever the service says is right,
  • when the translation service fails, I should receive the error message,
  • when the translation service fails, the exception should be logged.

In standard C#, I would typically resort to a mocking framework like Moq or NSubstitute to test this. What the framework buys me is the ability to cheaply create fake implementations of the interfaces and set up their behavior to match my scenario (“stubbing”), and, in the case of the logger, where I can’t observe through state whether the exception has been logged, to verify that the proper call has been made (“mocking”).

This is how my test suite would look:

namespace MoqTests
{
    using System;
    using CodeBase;
    using Moq;
    using NUnit.Framework;

    [TestFixture]
    public class TestsTranslator
    {
        [Test]
        public void Translate_Should_Return_Successful_Service_Response()
        {
            var input = "Hello";
            var output = "Kitty";

            var service = new Mock<IService>();
            service.Setup(s => s.Translate(input)).Returns(output);

            var logger = new Mock<ILogger>();

            var translator = new Translator(logger.Object, service.Object);

            var result = translator.Translate(input);

            Assert.That(result, Is.EqualTo(output));
        }

        [Test]
        public void When_Service_Fails_Translate_Should_Return_ErrorMessage()
        {
            var service = new Mock<IService>();
            service.Setup(s => s.Translate(It.IsAny<string>())).Throws<Exception>();

            var logger = new Mock<ILogger>();

            var translator = new Translator(logger.Object, service.Object);

            var result = translator.Translate("Hello");

            Assert.That(result, Is.EqualTo(Translator.ErrorMessage));
        }

        [Test]
        public void When_Service_Fails_Exception_Should_Be_Logged()
        {
            var error = new Exception();
            var service = new Mock<IService>();
            service.Setup(s => s.Translate(It.IsAny<string>())).Throws(error);

            var logger = new Mock<ILogger>();

            var translator = new Translator(logger.Object, service.Object);

            translator.Translate("Hello");

            logger.Verify(l => l.Log(error));
        }
    }
}
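
For comparison, here is a sketch of what the first scenario could look like in F# with Foq. This is not taken from the original post, and the exact fluent API (Setup / Returns / Create) is written from memory of Foq at the time, so details may differ slightly from the released version:

namespace FoqTests

open NUnit.Framework
open Foq
open CodeBase

module ``Translator tests`` =

    [<Test>]
    let ``translate should return the service response when it succeeds`` () =
        let input, output = "Hello", "Kitty"
        // stub the service: when asked to translate the input, return a known output
        let service =
            Mock<IService>()
                .Setup(fun s -> <@ s.Translate(input) @>)
                .Returns(output)
                .Create()
        // the logger is not expected to do anything in this scenario
        let logger = Mock<ILogger>().Create()
        let translator = Translator(logger, service)
        Assert.That(translator.Translate(input), Is.EqualTo(output))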


by Mathias 21. April 2012 05:34

I presented “For Those About to Mock”, an introduction to Mocking for C# developers, at the San Francisco chapter of Bay .NET last week, and promised I would make the material available.

Here it is: you can download the code “For Those About to Mock” here.

Thanks for everyone who made it, it was a great crowd with lots of good questions – I had a great time!

by Mathias 17. February 2012 06:27

I finally finished “Working Effectively with Legacy Code”, reading it a few pages at a time every morning on my way to work. Legacy code is one of those topics you know are important, but which you never really want to hear about, so the book has stayed on the backlog for a while. Recently, I helped someone establish tests on a legacy code base, began following Michael Feathers’ tweets with great enjoyment, and decided it was time to read it.

Who should read it?

The developer who is already familiar with unit testing, comfortable with his language, object-oriented concepts, and what makes code maintainable - and wants to expand his thoughts and tools on testing and testability.


3 things I liked about it

  • The chapter titles are awesome – just like good naming is a hallmark of Clean Code, the chapter titles convey very clearly what the intent is. “I need to change a Monster method and I can’t write tests for it”, “It takes forever to make a change”, “How do I know that I am not breaking anything”, “I am changing the same code all over the place” – they all evoke situations we have been through one time or another, and the corresponding chapters do address these questions head-on.
  • Clear concepts and vocabulary: if anything, the one sentence that will stay with me is “legacy code is simply code without tests”, a wonderfully clear and opinionated definition, which not everyone may agree with. Feathers defines a few concepts (like a Seam or a Pinch Point), which provide a helpful language to think about and discuss legacy code.
  • Multiple languages: I write primarily in C# and F#, so in principle, learning about specific issues of testing legacy C code isn’t high on my concerns list. Still, I found that going through examples in languages I am not familiar with was interesting, in that it provided both a broader perspective on testing and on the relative strengths and weaknesses of various languages. It also made me think of techniques I seldom (if ever) use in C#, like pre-processor directives.

3 things I didn’t like that much

  • Multiple languages: covering multiple languages provides a broader perspective, but it also comes at the expense of each individual language. If you are specifically interested in, say, C#-specific techniques, this book may disappoint you - it is fairly general.
  • A bit dated: for a book published in 2004, it aged remarkably well. Still, 8 years is a long time in computer-years. From a C# developer perspective, there have been a few major releases of both the language and the IDE, with implications on testing and refactoring. I would assume (hope) that today, most language/IDEs do support refactorings like Extract Method. On the language side, the book touches on using function pointers to achieve decoupling, but the context is mostly C. With the emergence of functional concepts (Func<T> in modern C# for instance), I think this would warrant a bigger discussion today.
  • A somewhat tedious read: this book is not exactly a page-turner. Reading legacy code examples (a good part of them probably not in a language you are comfortable with, unless you are a polyglot) and figuring out mechanical steps to disentangle it isn’t material that will be turned into a Hollywood movie any time soon.

Parting thoughts

I really enjoyed this book, but I would recommend it with an asterisk. Depending on how you want to look at it, a polyglot book will either lose specificity, or gain generality. Personally, I think in this case, the gain in generality easily compensates for the lack of depth in each individual language. Yes, I would like a C#-specific book which points to useful, up-to-date tools – but that book would be obsolete in 2 years at best. By covering a variety of languages, Feathers illustrates very different solutions or ideas, and because he uses only fairly simple features in each language, the ideas remain easy to understand and convert into other “coding dialects”.

My personal bent is for concepts and language, because they last longer than recipes and tools, which is why I really enjoyed this book: it helped me create / articulate a mental map. There aren’t many computer books published in 2004 that I still read for insight today – and this one feels like one of those “timeless classics”.

That being said, I think it takes a certain experience with unit testing and code maintenance to appreciate the book, and I wouldn’t recommend it to someone who is just starting with tests and wants to find quick solutions to their problems. It may work (the book is very clear on steps and methodology), but I suspect it could prove frustrating.

Totally unrelated note: this is the first technical book I read on Kindle, and I have mixed feelings about it. I was hoping that the Kindle could serve as a portable library for all these massive technical bricks. On one hand, it’s nice to have the possibility to carry around searchable books; on the other hand, clearly, it’s not the best way to read through code samples, where good old paper still has an edge.
