by Mathias 8. August 2015 12:42

A couple of weeks ago, I came across this blog post by Steve Shogren, which looks at various programming languages, and attempts to define a “language safety score”, by taking into account a list of language criteria (Are nulls allowed? Can variables be mutated? And so on), aggregating them into an overall safety score – and finally looking for whether the resulting score was a reasonable predictor for the observed bug rate across various projects.

I thought this was an interesting idea. However, I also had reservations on the methodology. Picking a somewhat arbitrary list of criteria, giving them indiscriminately the same weight, and summing them up, didn’t seem to me like the most effective approach – especially given that Steve had already collected a nice dataset. If the goal is to identify which language features best predict how buggy the code will be, why not start from there, and build a model which attempts to predict the bug rate based on language features?

So I decided to give it a shot, and build a quick-and-dirty logistic regression model. In a nutshell, logistic regression attempts to model the probability of observing an event, based on a set of criteria / features. A prototypical application would be in medicine, trying to predict, for instance, the chances of developing a disease, given patient characteristics. In our case, the disease is a bug, and the patient a code base. We’ll use the criteria listed by Steve as potential predictors, and, as a nice side-product of logistic regression, we will get a quantification of how important each of the criteria is in predicting the bug rate.
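To make the idea a bit more tangible, here is a minimal, illustrative sketch of what a trained logistic model computes – this is just the underlying math, not Accord's implementation:

// sigmoid: squashes any real number into the (0,1) interval
let sigmoid z = 1.0 / (1.0 + exp (-z))

// probability of a "success" (here, a buggy commit), modeled as the
// sigmoid of a weighted sum of the features (intercept omitted for brevity)
let probability (weights:float[]) (features:float[]) =
    Array.map2 (fun w f -> w * f) weights features
    |> Array.sum
    |> sigmoid

Training boils down to finding the weights that best fit the observed outcomes; those weights are what will tell us how much each criterion matters.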

I’ll discuss later some potential issues with the approach; for now, let’s build a model, and see where that leads us. I lifted the data from Steve’s post (hopefully without typos), with one minor modification: instead of scoring criteria as 1, 0 or –1, I just retained 1 or 0 (it’s there or it’s not there), and prepared an F# script, using the Accord framework to run my logistic regression.

Note: the entire script is here as a Gist.

#I @"../packages"
#r @"Accord.3.0.1-alpha\lib\net45\Accord.dll"
#r @"Accord.MachineLearning.3.0.1-alpha\lib\net45\Accord.MachineLearning.dll"
#r @"Accord.Math.3.0.1-alpha\lib\net45\Accord.Math.dll"
#r @"Accord.Statistics.3.0.1-alpha\lib\net45\Accord.Statistics.dll"

let language, bugrate, criteria = 
    [|  "F#",           0.023486288,    [|1.;1.;1.;0.;1.;1.;1.;0.;0.;0.;1.;1.;1.;0.|]
        "Haskell",      0.015551204,    [|1.;1.;1.;0.;1.;1.;1.;1.;1.;0.;1.;1.;0.;1.|]
        "Javascript",   0.039445132,    [|0.;0.;0.;0.;0.;0.;0.;0.;0.;0.;1.;0.;1.;0.|] 
        "CoffeeScript", 0.047242288,    [|0.;0.;0.;0.;0.;0.;0.;0.;0.;0.;1.;0.;1.;0.|] 
        "Clojure",      0.011503478,    [|0.;1.;0.;0.;0.;0.;1.;0.;1.;1.;1.;0.;0.;0.|] 
        "C#",           0.03261284,     [|0.;0.;1.;0.;0.;1.;1.;0.;0.;0.;1.;0.;1.;0.|] 
        "Python",       0.02531419,     [|0.;0.;0.;0.;0.;0.;0.;0.;0.;0.;1.;0.;1.;0.|] 
        "Java",         0.032567736,    [|0.;0.;0.;0.;0.;0.;0.;0.;0.;0.;1.;0.;1.;0.|] 
        "Ruby",         0.020303702,    [|0.;0.;0.;0.;0.;0.;0.;0.;0.;0.;1.;0.;1.;0.|] 
        "Scala",        0.01904762,     [|1.;1.;1.;0.;1.;1.;1.;0.;0.;0.;1.;0.;0.;0.|] 
        "Go",           0.024698375,    [|0.;0.;1.;0.;0.;1.;1.;0.;0.;0.;1.;0.;1.;0.|] 
        "PHP",          0.031669293,    [|0.;0.;0.;0.;0.;0.;0.;0.;0.;0.;1.;0.;1.;0.|] |]
    |> Array.unzip3

open Accord.Statistics.Models.Regression
open Accord.Statistics.Models.Regression.Fitting 

let features = 14
let model = LogisticRegression(features)
let learner = LogisticGradientDescent(model)

let rec learn () = 
    let delta = learner.Run(criteria, bugrate)
    // keep iterating until the improvement falls below a small threshold
    if delta > 0.0001
    then learn ()
    else ()

learn ()

And we are done – we have trained a model to predict the bug rate, based on our 14 criteria. How well is this working? Let’s find out:

for i in 0 .. (language.Length - 1) do    
    let lang = language.[i]
    let predicted = model.Compute(criteria.[i])
    let real = bugrate.[i]
    printfn "%16s Real: %.3f Pred: %.3f" lang real predicted

              F# Real: 0.023 Pred: 0.023
         Haskell Real: 0.016 Pred: 0.016
      Javascript Real: 0.039 Pred: 0.033
    CoffeeScript Real: 0.047 Pred: 0.033
         Clojure Real: 0.012 Pred: 0.011
              C# Real: 0.033 Pred: 0.029
          Python Real: 0.025 Pred: 0.033
            Java Real: 0.033 Pred: 0.033
            Ruby Real: 0.020 Pred: 0.033
           Scala Real: 0.019 Pred: 0.020
              Go Real: 0.025 Pred: 0.029
             PHP Real: 0.032 Pred: 0.033

Looks pretty good. Let’s confirm that with a chart, using FSharp.Charting:

#load "FSharp.Charting.0.90.12\FSharp.Charting.fsx"
open FSharp.Charting

let last = language.Length - 1

Chart.Combine [
    Chart.Line ([ for i in 0 .. last -> bugrate.[i]], "Real", Labels=language)
    Chart.Line ([ for i in 0 .. last -> model.Compute(criteria.[i])], "Pred") ] 
|> Chart.WithLegend()


What criteria did our model identify as predictors for bugs? Let’s find out.

let criteriaNames = [|
    "Null Variable Usage"
    "Null List Iteration"
    "Prevent Variable Reuse"
    "Ensure List Element Exists"
    "Safe Type Casting"
    "Passing Wrong Type"
    "Misspelled Method"
    "Missing Enum Value"
    "Variable Mutation"
    "Prevent Deadlocks"
    "Memory Deallocation"
    "Tail Call Optimization"
    "Guaranteed Code Evaluation"
    "Functional Purity" |]    
for i in 0 .. (features - 1) do 
    let name = criteriaNames.[i]
    let wald = model.GetWaldTest(i)
    let odds = model.GetOddsRatio(i)
    (printfn "%28s odds: %4.2f significant: %b" name odds wald.Significant)

         Null Variable Usage odds:  0.22 significant: true
         Null List Iteration odds:  0.86 significant: true
      Prevent Variable Reuse odds:  0.64 significant: true
  Ensure List Element Exists odds:  1.05 significant: true
           Safe Type Casting odds:  1.00 significant: false
          Passing Wrong Type odds:  0.86 significant: true
           Misspelled Method odds:  1.05 significant: true
          Missing Enum Value odds:  0.78 significant: true
           Variable Mutation odds:  0.86 significant: true
           Prevent Deadlocks odds:  0.64 significant: true
         Memory Deallocation odds:  0.74 significant: true
      Tail Call Optimization odds:  0.22 significant: true
  Guaranteed Code Evaluation odds:  1.71 significant: true
           Functional Purity odds:  0.69 significant: true

How should you read this? The first output, the odds ratio, describes how much more likely it is to observe success than failure when that criterion is active. In our case, “success” means “having a bug”, so for instance, if your language prevents using nulls, you’d expect roughly 1.0 / 0.22 = 4.5 times lower odds of writing a bug. In other words, if the odds ratio is close to 1.0, the criterion doesn’t make much of a difference; the closer to zero it is, the lower the predicted bug rate – and vice-versa.
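That conversion is trivial, but worth making explicit; as an illustration:

// turn an odds ratio below 1.0 into a "times lower odds" factor
let oddsToFactor (odds:float) = 1.0 / odds
oddsToFactor 0.22 // roughly 4.5, for the "Null Variable Usage" criterion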

Conclusions and caveats

The 3 most significant predictors of a low bug rate are, in order, no nulls, tail call optimization, and (to a much lesser degree) lazy evaluation. After that, we have honorable scores for avoiding variable reuse, preventing deadlocks, and functional purity.

So… what’s the bottom line? First off, based on the bug rates alone, it seems that using functional languages would be a safer bet than Javascript (and CoffeeScript) to avoid bugs.

Then, now would be a good time to reiterate that this is a quick-and-dirty analysis. Specifically, there are some clear issues with the dataset. First, we are fitting 12 languages on 14 criteria – that’s not much to go on. On top of that, there is some data redundancy. None of the languages in our sample has “ensure list element exists” (4th column is filled with zeroes), and all of them guarantee memory de-allocation (11th column filled with ones). I suspect there is some additional redundancy, because of the similarity between the columns.
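Both degenerate columns are easy to verify directly from the criteria matrix we loaded earlier; here is a small sketch that flags any criterion with the same value across all 12 languages:

// indexes of criteria whose value never varies across languages
let uninformative =
    [ 0 .. features - 1 ]
    |> List.filter (fun col ->
        criteria |> Array.forall (fun row -> row.[col] = criteria.[0].[col]))
// expected: [3; 10], i.e. "Ensure List Element Exists" and "Memory Deallocation"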

Note: another interesting discussion would be whether the selected criteria properly cover the differences between languages. I chose to not go into that, and focus strictly on using the data as-is.

I ran the model again, dropping the 2 columns that contain no information; while this doesn’t change the predictions of the model, it does impact the weight of each criterion a bit (a sketch of the column-dropping step follows the results below). The results, while similar, do show some differences:

("Null Variable Usage", 0.0743885639)
("Functional Purity", 0.4565632287)
("Prevent Variable Reuse", 0.5367456237)
("Prevent Deadlocks", 0.5374379877)
("Tail Call Optimization", 0.7028982809)
("Missing Enum Value", 0.7539575884)
("Null List Iteration", 0.7636177784)
("Passing Wrong Type", 0.7636177784)
("Variable Mutation", 0.7646027916)
("Safe Type Casting", 1.072641105)
("Misspelled Method", 1.072641105)
("Guaranteed Code Evaluation", 2.518831684)

Another piece of information I didn’t use is how many commits were taken into consideration. This matters, because the information gathered for PHP is across 10 times more commits than F#, for instance. It wouldn’t be very hard to do – instead of regressing against the bug rate, we could count the clean and buggy commits per language, and proceed along the lines of the last example described here.
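To sketch out what that reshaping could look like – with the caveat that the commit counts are hypothetical here, since I didn’t collect them – each language would be expanded into one observation per commit, labeled buggy or clean:

// hypothetical: expand an aggregate (commits, bug rate) into per-commit
// outcomes, so that languages with more commits weigh more in the fit
let expand (commits:int) (bugRate:float) (criteria:float[]) =
    let buggy = int (float commits * bugRate + 0.5)
    [| for i in 1 .. commits -> criteria, (if i <= buggy then 1.0 else 0.0) |]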

In spite of these issues, I think this constitutes a better base to construct a language score index. Rather than picking criteria by hand and giving them arbitrary weights, let the data speak. Measure how well each of them explains defects, and use that as a basis to determine their relative importance.

That’s it for today! Big thanks to Steve Shogren for a stimulating post, and for making the data available. And again, you can find the script here as a Gist. If you have comments or questions, ping me on Twitter!

by Mathias 12. May 2015 13:50

As much as we people who write code like to talk about code, the biggest challenge in a software project is not code. A project rarely fails because of technology – it usually fails because of miscommunications: the code that is delivered solves a problem (sometimes), but not the right one. One of the reasons we often deliver the wrong solution is that coding involves translating the world of the original problem into a different language. Translating one way is hard enough as it is, but then, rarely are users comfortable with reading and interpreting code – and as a result, confirming whether the code “does the right thing” is hard, and errors go unnoticed.

This is why the idea of Ubiquitous Language, coined by Eric Evans in his Domain Driven Design book, always appealed to me. The further apart the languages of the domain expert and the code are, the more likely it is that something will be lost in translation.

However, achieving this perfect situation, with “a language structured around the domain model and used by all team members to connect all the activities of the team with the software” [source], is hard. I have tried this in the past, mainly through tests. My idea at the time was that tests, especially BDD style, could perhaps provide domain experts with scenarios similar enough to their worldview that they could serve as a basis for an active dialogue. The experience wasn’t particularly successful: it helped some, but in the end, I never got to the point where tests would become a shared, common ground (which doesn’t mean it’s not possible – I just didn’t manage to do it).

Fast forward a bit to today – I just completed a project, and it’s the closest I have ever been to seeing Ubiquitous Language in action. It was one of the most satisfying experiences I had, and F# had a lot to do with why it worked.

The project involved some pretty complex modeling, and only two people – me and the client. The client is definitely a domain expert, and on the very high end of the “computer power user” spectrum: he is very comfortable with SQL, doesn’t write software applications, but has a license to Visual Studio and is not afraid of code.

The fact that F# worked well for me isn’t a surprise – I am the developer in that equation, and I love it, for all the usual technical reasons. It just makes my life writing code easier. The part that was interesting here is that F# worked well for the client, too, and became our basis for communication.

What ended up happening was the following: I created a GitHub private repository, and started coding in a script file, fleshing out a domain model with small, runnable pieces of code illustrating what it was doing. We would have regular Skype meetings, with a screen share so that I could walk him through the code in Visual Studio, and explain the changes I made - and we would discuss. Soon after, he started to run the code himself, and even to make small changes here and there, not necessarily the most complicated bits, but more domain-specific parts, such as adjusting parameters and seeing how the results would differ. And soon, I began receiving emails containing specific scenarios he had experimented with, using actual production data, and pointing at possible flaws in my approach, or questions that required clarifications.

So how did F# make a difference? I think it’s a combination of at least 2 things: succinctness, and static typing + scripts. Succinctness, because you can define a domain with very little code, without losing expressiveness. As a result, the core entities of the domain end up taking a couple of lines at the top of a single file, and it’s easy to get a full picture, without having to navigate around between files and folders, and keep information in your head. As an illustration, here is a snippet of code from the project:

type Window = { Early:DateTime; Target:DateTime; Late:DateTime }

type Trip = { 
    Dwell:TimeSpan }

type Action = 
    | Pickup of Trip 
    | Dropoff of Trip
    | CompleteRoute of Location

This is concise, and pretty straightforward – no functional programming guru credentials needed. This is readable code, which we can talk about without getting bogged down in extraneous details.

The second ingredient is static typing + scripts. What this creates is a safe environment for experimentation.  You can just change a couple of lines here and there, run the code, and see what happens. And when you break something, the compiler immediately barks at you – just undo or fix it. Give someone a running script, and they can start playing with it, and exploring ideas.

In over 10 years writing code professionally, I never had such a collaborative, fruitful, and productive interaction cycle with a client. Never. This was the best of both worlds – I could focus on the code and the algorithms, and he could immediately use it, try it out, and send me invaluable feedback, based on his domain knowledge. No noise, no UML diagrams, no slides, no ceremony – just write code, and directly communicate around it, making sure nothing was amiss. Which triggered this happy tweet a few weeks back:

We were looking at the code together, and my client spotted a domain modeling mistake, right there. This is priceless.

As a side-note, another thing that is priceless is “F# for Fun and Profit”. Scott Wlaschin has been doing incredible work with this website. It’s literally a gold mine, and I picked up a lot of ideas there. If you haven’t visited it yet, you probably should.

by Mathias 3. May 2015 16:34

I got curious the other day about how to measure the F# community growth, and thought it could be interesting to take a look at this through StackOverflow. As it turns out, it’s not too hard to get some data, because StackExchange exposes a nice API, which allows you to make all sorts of queries and get a JSON response back.

As a starting point, I figured I would just try to get the number of questions asked per month. The API allows you to retrieve questions on any site, by tag, between arbitrary dates. Responses are paged: you can get up to 100 items per page, and keep asking for next pages until there is nothing left to receive. That sounds like a perfect job for the FSharp.Data JSON Type Provider.

First things first, we create a type, Questions, by pointing the JSON Type Provider to a url that returns questions; based on the structure of the JSON document it receives, the Type Provider creates a type, which we will then be able to use to make queries:

#r @"FSharp.Data.2.2.0\lib\net40\FSharp.Data.dll"
open FSharp.Data
open System

let sampleUrl = "" // sample url elided in the original post; it pointed to a query returning a page of questions
type Questions = JsonProvider<sampleUrl>

Next, we’ll need to grab all the questions tagged F# between 2 given dates. As an example, the following would return the second page (questions 101 to 200) from all F# questions asked between January 1, 2014 and January 31, 2015:
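Reconstructed from the StackExchange API documentation, the query would look something along these lines (the dates are encoded as UNIX timestamps – more on that below):

https://api.stackexchange.com/2.2/questions?site=stackoverflow&tagged=f%23&pagesize=100&page=2&fromdate=1388534400&todate=1422662400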

There are a couple of quirks here. First, the dates are in UNIX time, that is, the number of seconds elapsed since January 1, 1970. Then, we need to keep pulling pages, until the response indicates that there are no more questions to receive, which is indicated by the HasMore property. That’s not too hard: let’s create a couple of functions, first to convert a .NET date to a UNIX timestamp, then to build up a proper query, appending the page and dates we are interested in to our base query – and finally, let’s build a request that recursively calls the API and appends results, until there is nothing left:

let fsharpQuery = "" // base query elided in the original post (a questions query against stackoverflow, tagged F#)

let unixEpoch = DateTime(1970,1,1)
let unixTime (date:DateTime) = 
    (date - unixEpoch).TotalSeconds |> int64

let page (page:int) (query:string) = 
    sprintf "%s&page=%i" query page
let between (from:DateTime) (``to``:DateTime) (query:string) = 
    sprintf "%s&&fromdate=%i&todate=%i" query (unixTime from) (unixTime ``to``)

let questionsBetween (from:DateTime) (``to``:DateTime) =
    let baseQuery = fsharpQuery |> between from ``to``
    let rec pull results p = 
        let nextPage = Questions.Load (baseQuery |> page p)
        let results = results |> Array.append nextPage.Items
        if (nextPage.HasMore)
        then pull results (p+1)
        else results
    pull Array.empty 1
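As a quick illustration of how the two little builders compose (starting from the base query defined above):

// page 1 of all F# questions asked during 2014
fsharpQuery
|> between (DateTime(2014,1,1)) (DateTime(2014,12,31))
|> page 1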

And we are pretty much done. At that point, we can for instance ask for all the questions asked in January 2015, and check what percentage were answered:

let january2015 = questionsBetween (DateTime(2015,1,1)) (DateTime(2015,1,31))

january2015
|> Seq.averageBy (fun x -> if x.IsAnswered then 1. else 0.)
|> printfn "Average answer rate: %.3f"

… which produces a fairly solid 78%.

If you play a bit more with this, and perhaps try to pull down more data, you might experience (as I did) the Big StackExchange BanHammer. As it turns out, the API has usage limits (which is totally fair). In particular, if you ask for too much data, too fast, you will get banned from making requests, for a dozen hours or so.

This is not pleasant. However, in their great kindness, the API designers have provided a way to avoid it. When you are making too many requests, the response you receive will include a field named “backoff”, which indicates for how many seconds you should back off until you make your next call.

This got me stumped for a bit, because that field doesn’t show up by default on the response – only when you are hitting the limit. As a result, I wasn’t sure how to pass that information to the JSON Type Provider, until Max Malook helped me out (thanks so much, Max!). The trick here is to supply not one sample response to the type provider, but a list of samples, in that case, one without the backoff field, and one with it.

I carved out an artisanal, hand-crafted sample for the occasion, along these lines:

let sample = """
   {"tags":["f#","units-of-measurement"],//SNIPPED FOR BREVITY}],
   {"tags":["f#","units-of-measurement"],//SNIPPED FOR BREVITY}],
   """

type Questions = JsonProvider<sample,SampleIsList=true>

… and everything is back in order – we can now modify the recursive request, causing it to sleep for a bit when it encounters a backoff. Not the cleanest solution ever, but hey, I just want to get data here:

let questionsBetween (from:DateTime) (``to``:DateTime) =
    let baseQuery = fsharpQuery |> between from ``to``
    let rec pull results p = 
        let nextPage = Questions.Load (baseQuery |> page p)
        let results = results |> Array.append nextPage.Items
        if (nextPage.HasMore)
        then
            // when the response includes a backoff, wait before the next request
            match nextPage.Backoff with
            | Some(seconds) -> System.Threading.Thread.Sleep (1000*seconds + 1000)
            | None -> ()
            pull results (p+1)
        else results
    pull Array.empty 1

So what were the results? I decided, quite arbitrarily, to count questions month by month since January 2010. Here is what the results look like:


Clearly, the trend is up – it doesn’t take an advanced degree in statistics to see that. It’s interesting also to see the slump around 2012-2013; I can see a similar pattern in the Meetup registration numbers in San Francisco. My sense is that after a spike in interest in 2010, when F# launched with Visual Studio, there hasn’t been much marketing push for the language, and interest eroded a bit, until serious community-driven efforts took place. However, I don’t really have data to back that up – this is speculation.

How this correlates to overall F# adoption is another question: while I think this curve indicates growth, the number of questions on StackOverflow is clearly a very indirect measurement of how many people actually use it, and StackOverflow itself is a distorted sample of the overall population. Would be interesting to take a similar look at GitHub, perhaps…

by Mathias 1. May 2015 16:14

About a month ago, I vaguely recall a discussion on Twitter – if memory serves me, @rickasaurus was involved – around sharing articles. This inspired me to try something. Every morning, I start my day with an espresso first, followed by reading blog posts for half an hour or so. While I get a lot from these quick reading sessions, I rarely go back to the material afterwards, and thought it would be interesting to keep track of a few, and revisit them at the end of the month. I also decided I would primarily focus on slightly out-of-topic areas, that is, pieces with ideas loosely connected to my daily work, but which I found inspiring or stimulating.

Long story short – here is a collection of links I found interesting this month, with minimal commentary on why I found them interesting. I also tried to mention the source when I remembered it; I am always curious to hear how people come across information, I figured others might be interested in my sources.

Lovers and liars: How many sex partners have you really had?

Since my days studying decision analysis, I have been interested by the topic of heuristics and biases, that is, what strategies we use to process information and form decisions – and how far we are from “rational agents”. There is a lot of food for thought in this experiment; one bit I found intriguing was the suggestion that gender had an influence on what strategy is used to produce an estimate, I wish there was more about that.

The best and worst times to have your case reviewed by a judge

Decision making again. I love a well-designed experiment – in this case, the whole story is there, in just one simple chart. Also a reminder that taking regular snacks during your workday is important.  

What is the most efficient algorithm to check if a number is a Fibonacci Number? [via @hammett]

Because every functional programmer loves a Fibonacci sequence :)

Captivating Geometric GIFs by Florian de Looij [via @ptrelford]

Beautiful – I need to look into how one creates gifs programmatically!

Data science done well looks easy - and that is a big problem for data scientists [via @tggleeson]

An interesting discussion on a topic that has been in the back of my mind for a bit: the discourse around data science / machine learning tends to emphasize fancy techniques and algorithms, and not the data work, even though it is an essential part of the job.

Donut math: how donut.c works [via @flangy]

No comment – pure awesome.

Are we kidding ourselves on competition?

A provocative and intriguing argument: rational investors should diversify, and as a result, firms that act in the best interest of their shareholders have an incentive to avoid competition and collude.

Hacking an epic NHL goal celebration with a hue light show and real-time machine learning [via @rasbt]

Love it – a gross misuse of brain and computer power, and a very interesting machine learning project.

Parasitic populations solve algorithm problems in half the time

I have a long-standing fascination for optimization techniques that mimic the behavior of populations, mixed together with randomness (ant colonies, bee colonies, swarms…). The idea to introduce a parasite in the system to preserve diversity (and avoid concentrating all the resources on one single search region, I presume) sounds really interesting, I just wish the full article was available – the link merely hints at the idea.

That’s it for April – I’ll keep doing this for myself anyways, if anybody is interested, I’ll be happy to post these once a month.

by Mathias 30. March 2015 16:34

In our previous post, we looked at James McCaffrey’s code, “Gradient Descent Training Using C#” from MSDN magazine, and took a stab at rewriting the first part in F#, to clarify a bit the way the dataset was created. Today, we’ll dive in the second block, which implements the logistic regression using gradient descent. Again, we won’t discuss why the algorithm works – the article does a pretty good job at that – and focus instead purely on the F# / C# conversion part.

Let’s begin by taking a look at the core of the C# code, which lives in the LogisticClassifier class. I took the liberty to do some minor cleanup, and remove some parts which were unnecessary, so as to make it a bit easier to see what is going on:

public class LogisticClassifier
{
    private int numFeatures; // number of x variables aka features
    private double[] weights; // b0 = constant
    private Random rnd;

    public LogisticClassifier(int numFeatures)
    {
        this.numFeatures = numFeatures;
        this.weights = new double[numFeatures + 1]; // [0] = b0 constant
        this.rnd = new Random(0);
    }

    public double[] Train(double[][] trainData, int maxEpochs, double alpha)
    {
        // alpha is the learning rate
        int epoch = 0;
        int[] sequence = new int[trainData.Length]; // random order
        for (int i = 0; i < sequence.Length; ++i)
            sequence[i] = i;

        while (epoch < maxEpochs)
        {
            ++epoch;

            if (epoch % 100 == 0 && epoch != maxEpochs)
            {
                double mse = Error(trainData, weights);
                Console.Write("epoch = " + epoch);
                Console.WriteLine("  error = " + mse.ToString("F4"));
            }

            Shuffle(sequence); // process data in random order

            // stochastic/online/incremental approach
            for (int ti = 0; ti < trainData.Length; ++ti)
            {
                int i = sequence[ti];
                double computed = ComputeOutput(trainData[i], weights);
                int targetIndex = trainData[i].Length - 1;
                double target = trainData[i][targetIndex];

                weights[0] += alpha * (target - computed) * 1; // the b0 weight has a dummy 1 input
                for (int j = 1; j < weights.Length; ++j)
                    weights[j] += alpha * (target - computed) * trainData[i][j - 1];
            }
        } // while
        return this.weights; // by ref is somewhat risky
    } // Train

    private void Shuffle(int[] sequence)
    {
        for (int i = 0; i < sequence.Length; ++i)
        {
            int r = rnd.Next(i, sequence.Length);
            int tmp = sequence[r];
            sequence[r] = sequence[i];
            sequence[i] = tmp;
        }
    }

    private double Error(double[][] trainData, double[] weights)
    {
        // mean squared error using supplied weights
        int yIndex = trainData[0].Length - 1; // y-value (0/1) is last column
        double sumSquaredError = 0.0;
        for (int i = 0; i < trainData.Length; ++i) // each data item
        {
            double computed = ComputeOutput(trainData[i], weights);
            double desired = trainData[i][yIndex]; // ex: 0.0 or 1.0
            sumSquaredError += (computed - desired) * (computed - desired);
        }
        return sumSquaredError / trainData.Length;
    }

    private double ComputeOutput(double[] dataItem, double[] weights)
    {
        double z = 0.0;
        z += weights[0]; // the b0 constant
        for (int i = 0; i < weights.Length - 1; ++i) // data might include Y
            z += (weights[i + 1] * dataItem[i]); // skip first weight
        return 1.0 / (1.0 + Math.Exp(-z));
    }
} // LogisticClassifier

Just from the length of it, you can tell that most of the action is taking place in the Train method, so let’s start there. What we have here is two nested loops. The outer one runs maxEpochs times, a user defined parameter. Inside that loop, we randomly shuffle the input dataset, and then loop over each training example, computing the predicted output of the logistic function for that example, comparing it to a target, the actual label of the example, which can be 0 or 1, and adjusting the weights so as to reduce the error. We also have a bit of logging going on, displaying the prediction error every hundred outer iterations. Once the two loops are over, we return the weights.

Two things strike me here. First, a ton of indexes are involved, and this tends to obfuscate what is going on; as a symptom, a few comments are needed to clarify how the indexes work, and how the data is organized. Then, there is a lot of mutation going on. It’s not necessarily a bad thing, but I tend to avoid it as much as possible, simply because it requires keeping more moving parts in my head when I try to follow the code, and also, as McCaffrey himself points out in a comment, because “by ref is somewhat risky”.

As a warm up, let’s begin with the error computation, which is displayed every 100 iterations. Rather than having to remember in what column the actual expected value is stored, let’s make our life easier, and use a type alias, Example, so that the features are neatly tucked in an array, and the value is clearly separated. We need to compute the average square difference between the expected value, and the output of the logistic function for each example. As it turns out, we already implemented the logistic function in the first part, so re-implementing it, as ComputeOutput does, seems like unnecessary work – we can get rid of that part entirely, and simply map every example to its square error, and compute the average, using pattern matching on the examples to cleanly separate the features from the expected value:
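As a point of reference, a minimal logistic, consistent with how it is used below (and assuming the features include a leading 1.0 column for the constant term), would look along these lines:

// sigmoid of the linear combination of weights and features
let logistic (weights:float[]) (features:float[]) =
    Array.map2 (fun w f -> w * f) weights features
    |> Array.sum
    |> fun z -> 1.0 / (1.0 + exp (-z))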

type Example = float [] * float

let Error (trainData:Example[], weights:float[]) =
    // mean squared error using supplied weights
    trainData
    |> Array.map (fun (features,value) -> 
        let computed = logistic weights features
        let desired = value
        (computed - desired) * (computed - desired))
    |> Array.average

Some of you might argue that this could be made tighter – I can think of at least two possibilities. First, using a Tuple might not be the most expressive approach; replacing it with a Record instead could improve readability. Then, we could also skip the map + average part, and directly ask F# to compute the average on the fly:

type Example = { Features:float[]; Label:float }

let Error (trainData:Example[], weights:float[]) =
    trainData
    |> Array.averageBy (fun example -> 
        let computed = logistic weights example.Features
        let desired = example.Label
        (computed - desired) * (computed - desired))

I will keep my original version the way it is, mostly because we created a dataset based on tuples last time.

We are now ready to hit the center piece of the algorithm. Just like we would probably try to extract a method in C#, we will start extracting some of the gnarly code that lies in the middle:

for (int ti = 0; ti < trainData.Length; ++ti)
{
    int i = sequence[ti];
    double computed = ComputeOutput(trainData[i], weights);
    int targetIndex = trainData[i].Length - 1;
    double target = trainData[i][targetIndex];

    weights[0] += alpha * (target - computed) * 1; // the b0 weight has a dummy 1 input
    for (int j = 1; j < weights.Length; ++j)
        weights[j] += alpha * (target - computed) * trainData[i][j - 1];
}

Rather than modify the weights values, it seems safer to compute new weights. And because we opted last week to insert a column with ones for the constant feature, we won’t have to deal with the index misalignment, which requires separate handling for b0 and the rest. Instead, we can write an update operation that takes in an example and weights, and returns new weights:

let update (example:Example) (weights:float[]) =
    let features,target = example
    let computed = logistic weights features
    weights
    |> Array.mapi (fun i w -> 
        w + alpha * (target - computed) * features.[i])

Array.mapi allows us to iterate over the weights, while maintaining the index we are currently at, which we use to grab the feature value at the corresponding index. Alternatively, you could go all verbose and zip the arrays together – or all fancy with a double-pipe and map2 to map the two arrays in one go. Your pick:

Array.zip weights features
|> Array.map (fun (weight,feat) -> 
    weight + alpha * (target - computed) * feat)

(weights, features)
||> Array.map2 (fun weight feat -> 
    weight + alpha * (target - computed) * feat)

We are now in a very good place; the only thing left to do is to plug that into the two loops. The inner loop is a perfect case for a fold (the Aggregate method in LINQ): given a starting value for the weights, we want to go over every example in our training set, and, for each of them, run the update function to compute new weights. For the while loop, we’ll take a different approach, and use recursion: when the epoch reaches maxEpochs, you are done, return the weights; otherwise, keep shuffling the data and updating the weights. Let’s put that all together:

let Train (trainData:Example[], numFeatures, maxEpochs, alpha, seed) =
    let rng = Random(seed)

    let update (example:Example) (weights:float[]) =
        let features,target = example
        let computed = logistic weights features
        weights
        |> Array.mapi (fun i w -> 
            w + alpha * (target - computed) * features.[i])

    let rec updateWeights (data:Example[]) epoch weights =
        if epoch % 100 = 0 
        then printfn "Epoch: %i, Error: %.2f" epoch (Error (data,weights))
        if epoch = maxEpochs then weights
        else
            let data = shuffle rng data
            let weights = 
                data
                |> Array.fold (fun w example -> update example w) weights
            updateWeights data (epoch + 1) weights
    // initialize the weights and start the recursive update
    let initialWeights = [| for _ in 1 .. numFeatures + 1 -> 0. |]
    updateWeights trainData 0 initialWeights

And that’s pretty much it. We replaced the whole class by a couple of functions, and all the indexes are gone. This is probably a matter of taste and comfort with functional concepts, but in my opinion, this is much easier to follow.

Before trying it out, to make sure it works, I’ll take a small liberty, and modify the Train function. As it stands right now, it returns the final weights, but really, we don’t care about the weights, what we want is a classifier, which is a function that, given an array, will predict a one or a zero. That’s easy enough, let’s return a function at the end instead of weights:

// initialize the weights and start the recursive update
let initialWeights = [| for _ in 1 .. numFeatures + 1 -> 0. |]
let finalWeights = updateWeights trainData 0 initialWeights
let classifier (features:float[]) = 
    if logistic finalWeights features > 0.5 then 1. else 0.

We can now wrap it up, and see our code in action:

printfn "Begin Logistic Regression (binary) Classification demo"
printfn "Goal is to demonstrate training using gradient descent"

let numFeatures = 8 // synthetic data
let numRows = 10000
let seed = 1

printfn "Generating %i artificial data items with %i features" numRows numFeatures    
let trueWeights, allData = makeAllData(numFeatures, numRows, seed)

printfn "Data generation weights:"
trueWeights |> Array.iter (printf "%.2f ")
printfn ""

printfn "Creating train (80%%) and test (20%%) matrices"
let trainData, testData = makeTrainTest(allData, 0)
printfn "Done"

let maxEpochs = 1000
let alpha = 0.01

let classifier = Train (trainData,numFeatures,maxEpochs,alpha,0)

let accuracy (examples:Example[]) = 
    examples
    |> Array.averageBy (fun (feat,value) -> 
        if classifier feat = value then 1. else 0.)

accuracy trainData |> printfn "Prediction accuracy on train data: %.4f"
accuracy testData |> printfn "Prediction accuracy on test data: %.4f"

We used a small trick to compute the accuracy – we mark every correct call as a one, every incorrect one as a zero, which, when we compute the average, gives us directly the proportion of cases that were called correctly. On my machine, I get the following output:

Prediction accuracy on train data: 0.9988
Prediction accuracy on test data: 0.9980

Looks good enough to me, the implementation seems to be working. The whole code presented here is available as a gist here. I’ll leave it at that for now (I might revisit it later, and try to make this work with DiffSharp at some point, if anyone is interested) – feel free to ask questions, or drop me a comment on Twitter!

