by Mathias 13. July 2013 10:43

It looks like this summer will be my strangest vacation in a while – I’ll be taking an F# road trip of sorts in August, talking about F# at user groups all over the United States. How exactly this crazy plan took shape, I am not quite sure in retrospect, but I am really looking forward to meeting all the local communities – this will be fun!

As of July 28th, here is the plan:

July 31, Sacramento: “Coding Dojo: a gentle introduction to Machine Learning with F#”

August 8, Houston: “An Introduction to F# for the C# Developer”

August 9, Houston: “Coding Dojo: a gentle introduction to Machine Learning with F#”

August 12, Nashville: “Coding Dojo: a gentle introduction to Machine Learning with F#”

August 13, Charlotte: “Coding Dojo: a gentle introduction to Machine Learning with F#”

August 14, Raleigh: “An Introduction to F# for the C# Developer”

August 15, Raleigh: “Coding Dojo: a gentle introduction to Machine Learning with F#”

August 16, Washington DC: “Coding Dojo: a gentle introduction to Machine Learning with F#”

August 17, Philadelphia: “Coding Dojo: a gentle introduction to Machine Learning with F#”

August 19, New York City: “Coding Dojo: a gentle introduction to Machine Learning with F#”

August 20, Boston: “An Introduction to F# for the C# Developer”

August 21, Boston: “Coding Dojo: a gentle introduction to Machine Learning with F#”

August 22, Detroit: TBA

… and a few more should be added to the list soon! I’ll let you extrapolate which cities might follow, given the map below. Stay tuned for updates.


[Embedded map of the road trip itinerary]

Huge thanks to the people who helped make this happen – I am sure I forgot some of you, sorry about that, and I’ll owe you a beer when I visit your city!

@zychr and the Sacramento .NET group

@panesofglass and the Houston F# group

@bryan_hunter and @NashFP in Nashville

@mattduffield and the Charlotte Alt.NET group

@Jamie_Dixon and the Triangle .NET group 

@devshorts and the newly founded Washington DC F# meetup group!

@briandonahue and the Philly Alt.NET group

@rickasaurus and the NYC F# group

@plepilov, @talbott, @jonnyboats and @hackreduce + New England F# group in Boston

… and of course @INETA!

by Mathias 5. July 2013 15:51

Besides having one of the coolest names around, Random Forest is an interesting machine learning algorithm, for a few reasons. It is applicable to a large range of classification problems, isn’t prone to over-fitting, can produce good quality metrics as a side-effect of the training process itself, and is very suitable for parallelization. For all these reasons, I thought it would be interesting to try it out in F#.

The current implementation I will be discussing below works, but isn’t production ready (yet) – it is work in progress. The API and implementation are very likely to change over the next few weeks. Still, I thought I would share what I did so far, and maybe get some feedback!

The idea behind the algorithm

As the name suggests, Random Forest (introduced in the early 2000s by Leo Breiman) can be viewed as an extension of Decision Trees, which I discussed before. A decision tree grows a single classifier, in a top-down manner: the algorithm recursively selects the feature which is the most informative, partitions the data according to the outcomes of that feature, and repeats the process until no information can be gained by partitioning further. On a non-technical level, the algorithm is playing a smart “game of 20 questions”: given what has been deduced so far, it picks from the available features the one that is most likely to lead to a more certain answer.
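
To make the structure concrete, here is a minimal, self-contained F# sketch of what such a classifier looks like once grown, and how it answers its “20 questions” – the types and the toy tree are purely illustrative, not the implementation discussed below:

// Illustrative only: a grown tree is either a final answer, or a question
// (a feature) with a sub-tree for each possible outcome.
type Tree =
    | Leaf of string                      // a final decision
    | Branch of string * (string -> Tree) // a feature, and where each answer leads

// Classifying walks the tree top-down, answering one question per node.
let rec classify tree (observation: string -> string) =
    match tree with
    | Leaf decision -> decision
    | Branch (feature, next) -> classify (next (observation feature)) observation

// A hand-built toy tree: “females survive; otherwise, only first class does”.
let toyTree =
    Branch ("Sex",
        function
        | "female" -> Leaf "survived"
        | _ ->
            Branch ("Pclass",
                function
                | "1" -> Leaf "survived"
                | _ -> Leaf "died"))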

How is a Random Forest different from a Decision Tree? The first difference is that instead of growing a single decision tree, the algorithm creates a “forest” – a collection of Decision Trees; the final decision of the classifier is the majority decision of all trees in the forest. However, having the same tree multiple times wouldn’t be of much help, because we would get the same classifier repeated over and over again. This is where the algorithm gets interesting: instead of growing each Tree using the entire training set and all the features, it introduces two sources of randomness:

  • each tree is grown on a new sample, created by randomly sampling the original dataset with replacement (“bagging”),
  • at each node of the tree, only a random subset of the remaining features is used (both steps are sketched in the code right after this list).
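
To make these two steps concrete, here is a minimal, self-contained F# sketch of what bagging and feature sampling could look like – the names (bag, randomSubset) and the shuffle-by-random-key trick are mine, not the actual implementation:

let rng = System.Random()

// “Bagging”: build a new sample of the same size as the original dataset,
// by drawing examples at random, with replacement.
let bag (dataset: 'a []) =
    Array.init dataset.Length (fun _ -> dataset.[rng.Next(dataset.Length)])

// At each node, consider only (at most) k features, drawn at random from
// those remaining. Sorting by a random key is a crude shuffle, good enough
// for a sketch.
let randomSubset k (features: 'a []) =
    features
    |> Array.sortBy (fun _ -> rng.Next())
    |> Seq.truncate k
    |> Seq.toArray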

Why would introducing randomness be a good idea? It has a few interesting benefits:

  • by selecting different samples, it mitigates the risk of over-fitting. A single tree will produce an excellent fit on the particular dataset that was used to train it, but this doesn’t guarantee that the result will generalize to other sets. Training multiple trees on random samples creates a more robust overall classifier, which will by construction handle a “wider” range of situations than a single dataset,
  • by selecting a random subset of features, it mitigates the risks of greedily picking locally optimal features that could be overall sub-optimal. As a bonus, it also allows a computation speed-up for each tree, because fewer features need to be considered at each step,
  • the bagging process, by construction, creates for each tree a Training Set (the selected examples) and a Cross-Validation Set (what’s “out-of-the-bag”), which can be used directly to produce quality metrics on how the classifier may perform in general (see the sketch below).
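
The out-of-bag set falls out of the sampling step almost for free: if we bag by index, every example whose index was never drawn is “out-of-the-bag”. A sketch, reusing the rng defined above:

// Sample indexes with replacement; examples that were never drawn form the
// out-of-bag set, a built-in validation set for that particular tree.
let bagWithOutOfBag (dataset: 'a []) =
    let drawn = Array.init dataset.Length (fun _ -> rng.Next(dataset.Length))
    let drawnSet = Set.ofArray drawn
    let inBag = drawn |> Array.map (fun i -> dataset.[i])
    let outOfBag =
        [| for i in 0 .. dataset.Length - 1 do
               if not (drawnSet.Contains i) then yield dataset.[i] |]
    inBag, outOfBag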

Usage

Before delving into the current implementation, I thought it would be interesting to illustrate the intended usage with an example. I will be using the Titanic dataset, from the Kaggle Titanic contest. The goal of the exercise is simple: given the passenger list of the Titanic, and what happened to them, can you build a model to predict who sinks or swims?

I didn’t think the state of affairs warranted a NuGet package just yet, so this example is implemented as a script, in the Titanic branch of the project itself on GitHub.

First, let’s create a Record type to represent passengers:

type Passenger = {
    Id: string; 
    Class: string;
    Name: string;
    Sex: string;
    Age: string;
    SiblingsOrSpouse: string;
    ParentsOrChildren: string;
    Ticket: string;
    Fare: string;
    Cabin: string;
    Embarked: string }

Note that all the properties are represented as strings; it might be better to represent them for what they are (Age is a float, SiblingsOrSpouse an integer…) – but given that the dataset contains missing data, this would require dealing with that issue, perhaps using an Option type. We’ll dodge the problem for now, and opt for a stringly-typed representation.
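
For illustration, here is what dealing with one of these fields might look like – a hypothetical alternative, not what the rest of this example uses:

// Hypothetical: parse Age into a float option, so that a missing or
// malformed value becomes None instead of an empty string.
let tryParseFloat (s: string) =
    match System.Double.TryParse s with
    | true, value -> Some value
    | _ -> None

type AgedPassenger = { Id: string; Age: float option } // None = missing Age

// tryParseFloat "22" = Some 22.0; tryParseFloat "" = None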

Next, we need to construct a training set from the Kaggle data file. We’ll use the CSV parser that comes with FSharp.Data to extract the passengers from that list, as well as their known fate (the file is assumed to have been downloaded on your local machine first):

let path = @"C:\Users\Mathias\Documents\GitHub\Charon\Charon\Charon\train.csv"
let data = CsvFile.Load(path).Cache()

let trainingSet =
    [| for line in data.Data -> 
        line.GetColumn "Survived" |> Some, // the label
        {   Id = line.GetColumn "PassengerId"; 
            Class = line.GetColumn "Pclass";
            Name = line.GetColumn "Name";
            Sex = line.GetColumn "Sex";
            Age = line.GetColumn "Age";
            SiblingsOrSpouse = line.GetColumn "SibSp";
            ParentsOrChildren = line.GetColumn "Parch";
            Ticket = line.GetColumn "Ticket";
            Fare = line.GetColumn "Fare";
            Cabin = line.GetColumn "Cabin";
            Embarked = line.GetColumn "Embarked" } |]

Now that we have data, we can get to work and define a model. We’ll start with a regular Decision Tree, and extract only one feature, Sex:

let features = 
    [| (fun x -> x.Sex |> StringCategory); |]

This defines an array of features, a feature being a function which takes in a Passenger and returns a string option, via the utility StringCategory. StringCategory simply expects a string, transforms the null or empty case into the “missing data” case, and otherwise treats the string as a category. So in this case, x is a passenger: if no Sex information is found, it is transformed into None, and otherwise into Some(“male”) or Some(“female”), the two cases that exist in the dataset.
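
Based on that description, StringCategory behaves roughly like the following sketch (see the project source on GitHub for the actual definition):

// Null or empty marks missing data (None); any other string is
// treated as a category (Some value).
let StringCategory (value: string) =
    if System.String.IsNullOrEmpty value
    then None
    else Some value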

We are now ready to go – we can run the algorithm and get a Decision Tree classifier, with a minimum leaf of 5 elements (i.e. we stop partitioning if fewer than 5 elements are left):

let minLeaf = 5
let classifier = createID3Classifier trainingSet features minLeaf

… and we are done. How good is our classifier? Let’s check:

let correct = 
    trainingSet
    |> Array.averageBy (fun (label, obs) -> 
        if label = Some(classifier obs) then 1. else 0.)
printfn "Correct: %.4f" correct

