Mathias Brandewinder on .NET, F#, VSTO and Excel development, and quantitative analysis / machine learning.
by Mathias 22. March 2015 17:27

I will admit it, I got a bit upset by James McCaffrey’s column in MSDN magazine this month, “Gradient Descent Training Using C#”. While the algorithm explanations are quite good, I was disappointed by the C# sample code, and kept thinking to myself “why oh why isn’t this written in F#”. This is by no means intended as a criticism of C#; it’s a great language, but some problems are just better suited for different languages, and in this case, I couldn’t fathom why F# wasn’t used.

Long story short, I just couldn’t let it go, and thought it would be interesting to take that C# code, and do a commented rewrite in F#. I won’t even go into why the code does what it does – the article explains it quite well – but will instead purely focus on the implementation, and will try to keep it reasonably close to the original, at the expense of some additional nifty things that could be done.

The general outline of the code follows two parts:

  • Create a synthetic dataset, creating random input examples, and computing the expected result using a known function,
  • Use gradient descent to learn the model parameters, and compare them to the true value to check whether the method is working.

You can download the original C# code here. Today we’ll focus only on the first part, which is mainly contained in two methods, MakeAllData and MakeTrainTest:

static double[][] MakeAllData(int numFeatures, int numRows, int seed)
{
  Random rnd = new Random(seed);
  double[] weights = new double[numFeatures + 1]; // inc. b0
  for (int i = 0; i < weights.Length; ++i)
    weights[i] = 20.0 * rnd.NextDouble() - 10.0; // [-10.0 to +10.0]

  double[][] result = new double[numRows][]; // allocate matrix
  for (int i = 0; i < numRows; ++i)
    result[i] = new double[numFeatures + 1]; // Y in last column

  for (int i = 0; i < numRows; ++i) // for each row
  {
    double z = weights[0]; // the b0 
    for (int j = 0; j < numFeatures; ++j) // each feature / column except last
    {
      double x = 20.0 * rnd.NextDouble() - 10.0; // random X in [-10.0, +10.0]
      result[i][j] = x; // store x
      double wx = x * weights[j + 1]; // weight * x 
      z += wx; // accumulate to get Y
    }
    double y = 1.0 / (1.0 + Math.Exp(-z));
    if (y > 0.55)  // slight bias towards 0
      result[i][numFeatures] = 1.0; // store y in last column
    else
      result[i][numFeatures] = 0.0;
  }
  Console.WriteLine("Data generation weights:");
  ShowVector(weights, 4, true);

  return result;
}

MakeAllData takes a number of features and rows, and a seed for the random number generator so that we can replicate the same dataset repeatedly. The dataset is represented as an array of array of doubles. The first columns, from 0 to numFeatures – 1, contain random numbers between –10 and 10. The last column contains a 0 or a 1. What we are after here is a classification model: each row can take two states (1 or 0), and we are trying to predict them from observing the features. In our case, that value is computed using a logistic model: we have a set of weights (which we also generate randomly), corresponding to each feature, and the output is

logistic [ x1; x2; … xn ] = 1.0 / (1.0 + exp ( - (w0 * 1.0 + w1 * x1 + w2 * x2 + … + wn * xn)))

Note that w0 plays the role of a constant term in the equation, and is multiplied by 1.0 all the time. This adds some complications to code where indices are already flying left and right, because now elements in the weights array are misaligned by one position with the elements in the features array. Personally, I also don’t like adding another column to contain the predicted value, because that’s another implicit piece of information we have to remember.

In that frame, I will make two minor changes here, just to keep my sanity. First, as is often done, we will insert a column containing just 1.0 in each observation, so that the weights and features are now aligned. Then, we will move the 0s and 1s outside of the features array, to avoid any ambiguity.
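
To make this concrete, here is what a single observation might look like under that representation – the numbers are made up, assuming two features; the leading 1.0 is the constant term, and the label lives outside of the features array:

let observation = [| 1.0; -3.2; 7.5 |], 1.0 // (features, label)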

Good. Instead of creating a Console application, I’ll simply go for a script. That way, I can just edit my code and check live whether it does what I want, rather than recompile and run every time.

Let’s start with the weights. What we are doing here is simply creating an array of numFeatures + 1 elements, populated by random values between –10.0 and 10.0. We’ll go a bit fancy here: given that we are also generating random numbers the same way a bit further down, let’s extract a function that generates numbers uniformly between a low and high value:

let rnd = Random(seed)
let generate (low,high) = low + (high-low) * rnd.NextDouble()
let weights = Array.init (numFeatures + 1) (fun _ -> generate(-10.0,10.0))

The next section is where things get a bit thornier. The C# code creates an array, then populates it row by row, first filling in the columns with random numbers, and then applying the logistic function to compute the value that goes in the last column. We can make that much clearer, by extracting that function out. The logistic function is really doing 2 things:

  • first, the sumproduct of 2 arrays,
  • and then, 1.0 / (1.0 + exp ( - z )).

That is easy enough to implement:

let sumprod (v1:float[]) (v2:float[]) =
    Seq.zip v1 v2 |> Seq.sumBy (fun (x,y) -> x * y)

let sigmoid z = 1.0 / (1.0 + exp (- z))

let logistic (weights:float[]) (features:float[]) =
    sumprod weights features |> sigmoid
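
A quick sanity check, with hand-picked numbers: for weights [| 1.0; 2.0 |] and features [| 1.0; 3.0 |], the sumproduct is 1.0 + 6.0 = 7.0, and sigmoid 7.0 should be roughly 0.999:

logistic [| 1.0; 2.0 |] [| 1.0; 3.0 |] // ≈ 0.999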

We can now use all this, and generate a dataset by simply first creating rows of random values (with a 1.0 in the first column for the constant term), applying the logistic function to compute the value for that row, and returning them as a tuple:

open System

let sumprod (v1:float[]) (v2:float[]) =
    Seq.zip v1 v2 |> Seq.sumBy (fun (x,y) -> x * y)

let sigmoid z = 1.0 / (1.0 + exp (- z))

let logistic (weights:float[]) (features:float[]) =
    sumprod weights features |> sigmoid

let makeAllData (numFeatures, numRows, seed) =

    let rnd = Random(seed)
    let generate (low,high) = low + (high-low) * rnd.NextDouble()
    let weights = Array.init (numFeatures + 1) (fun _ -> generate(-10.0,10.0))
    
    let dataset = 
        [| for row in 1 .. numRows ->
            let features = 
                [|  
                    yield 1.0 
                    for feat in 1 .. numFeatures -> generate(-10.0,10.0) 
                |]
            let value = 
                if logistic weights features > 0.55 
                then 1.0 
                else 0.0
            (features, value)
        |]

    weights, dataset
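
As a quick smoke test in the script, we can generate a tiny dataset and check its shape (the values themselves depend on the seed):

let testWeights, testData = makeAllData (3, 5, 42)
// testWeights has 4 elements (3 features + the constant term);
// testData has 5 rows of (features, label), each features array starting with 1.0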

Done. Let’s move to the second part of the data generation, with the MakeTrainTest method. Basically, what this does is take a dataset, shuffle it, and split it in two parts, 80% which we will use for training, and 20% we leave out for validation.

static void MakeTrainTest(double[][] allData, int seed,
  out double[][] trainData, out double[][] testData)
{
  Random rnd = new Random(seed);
  int totRows = allData.Length;
  int numTrainRows = (int)(totRows * 0.80); // 80% hard-coded
  int numTestRows = totRows - numTrainRows;
  trainData = new double[numTrainRows][];
  testData = new double[numTestRows][];

  double[][] copy = new double[allData.Length][]; // ref copy of all data
  for (int i = 0; i < copy.Length; ++i)
    copy[i] = allData[i];

  for (int i = 0; i < copy.Length; ++i) // scramble order
  {
    int r = rnd.Next(i, copy.Length); // use Fisher-Yates
    double[] tmp = copy[r];
    copy[r] = copy[i];
    copy[i] = tmp;
  }
  for (int i = 0; i < numTrainRows; ++i)
    trainData[i] = copy[i];

  for (int i = 0; i < numTestRows; ++i)
    testData[i] = copy[i + numTrainRows];
}

Again, there is a ton of indexing going on, which in my old age I find very hard to follow. Upon closer inspection, really, the only thing complicated here is the Fisher-Yates shuffle, which takes an array and randomly shuffles the order. The rest is pretty simple – we just want to shuffle, and then split into two arrays. Let’s extract the shuffle code (which happens to also be used and re-implemented later on):

let shuffle (rng:Random) (data:_[]) =
    let copy = Array.copy data
    for i in 0 .. (copy.Length - 1) do
        let r = rng.Next(i, copy.Length)
        let tmp = copy.[r]
        copy.[r] <- copy.[i]
        copy.[i] <- tmp
    copy

We went a tiny bit fancy again here, and made the shuffle work on generic arrays; we also pass in the Random instance we want to use, so that we can control / repeat shuffles if we want, by passing a seeded Random. Does this work? Let’s check in FSI:

> [| 1 .. 10 |] |> shuffle (Random ());;
val it : int [] = [|6; 7; 2; 10; 8; 5; 4; 9; 3; 1|]

Looks reasonable. Let’s move on – we can now implement the makeTrainTest function.

let makeTrainTest (allData:_[], seed) =

      let rnd = Random(seed)
      let totRows = allData.Length
      let numTrainRows = int (float totRows * 0.80) // 80% hard-coded

      let copy = shuffle rnd allData
      copy.[.. numTrainRows-1], copy.[numTrainRows ..]

Done. A couple of remarks here. First, F# is a bit less lenient than C# around types, so we have to be explicit when computing 80% of the number of rows, converting first to float, then back to int. As an aside, this used to annoy me a bit in the beginning, but I have come to really like having F# as this slightly psycho-rigid friend who nags me when I am taking a dangerous path (for instance, dividing two integers and hoping for a percentage).

Besides that, I think the code is markedly clearer. The complexity of the shuffle has been nicely contained, and we just have to slice the array to get training and test sets. As an added bonus, we got rid of the out parameters, and that always feels nice and fuzzy.
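
Putting the two pieces together, a quick check of the split sizes might look like this (the seeds are arbitrary):

let _, dataset = makeAllData (2, 1000, 42)
let train, test = makeTrainTest (dataset, 123)
// train.Length is 800, test.Length is 200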

I’ll leave it at that for today; next time we’ll look at the second part, the learning algorithm itself. Before closing shop, let me make a couple of comments. First, the code is a tad shorter, but not by much. I haven’t really tried to shrink it, and deliberately made only the changes I thought were needed. What I like about it, though, is that all the indexes are gone, except for the shuffle. In my opinion, this is a good thing. I find it difficult to keep it all in my head when more than one index is involved; when I need to also remember which columns contain special values, I get worried – and just find it hard to figure out what is going on. By contrast, I think makeTrainTest, for instance, conveys pretty directly what it does. makeAllData, in spite of some complexity, also maps closely to the way I think about my goal: “I want to generate rows of inputs” – this is precisely what the code does. There is probably an element of culture to it, though; looping over arrays has a long history, and is familiar to every developer, and what looks readable to me might look entirely weird to some.

Easier, or more complicated than before? Anything you like or don’t like – or find unclear? Always interested to hear your opinion! Ping me on Twitter if you have comments.

by Mathias 21. February 2015 12:23

A few weeks ago, I came across DiffSharp, an automatic differentiation library in F#. As someone whose calculus skills have always been rather mediocre (thanks Wolfram Alpha!), but who needs to deal with gradients and the like on a regular basis because they are quite useful in machine learning and numerical methods, the project looked pretty interesting: who wouldn’t want exact and efficient calculations of derivatives? So I figured I would take a couple of hours to experiment with the library. This post is by no means an in-depth evaluation, but rather intended as “notes from the road” from someone entirely new to DiffSharp.

Basics

Suppose I want to compute the derivative of f(x) = √x at, say, 42.0. Double-checking with Wolfram Alpha confirms that f has derivative f’(x) = 1 / (2 √x).

Once DiffSharp is installed via Nuget, we can automatically evaluate f’(x) :

#r @"..\packages\DiffSharp.0.5.7\lib\DiffSharp.dll"
open DiffSharp.AD.Forward

let f x = sqrt x
diff f 42. |> printfn "Evaluated: %f"
1. / (2. * sqrt 42.)  |> printfn "Actual: %f"

Evaluated: 0.077152
Actual: 0.077152

val f : x:Dual -> Dual
val it : unit = ()

First off, obviously, it worked. Without any need for us to perform anything, DiffSharp took in our implementation of f, and computed the correct value. This is really nifty.

The piece which is interesting here is the inferred signature of f. If I were to remove the line that immediately follows the function declaration, f would have the following signature:

val f : x:float -> float

The moment you include the line diff f 42., the inferred type changes drastically, and becomes

val f : x:Dual -> Dual

This is pretty interesting. Because we call diff on f, which expects Duals (a type that is defined in DiffSharp), our function isn’t what we originally defined it to be – and calling f 42.0 at that point (for instance) will fail, because 42.0 is a float, and not a Dual. In other words, DiffSharp leverages type inference pretty aggressively, to convert functions into the form it needs to perform its magic.
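
For instance, evaluating f against a plain float after the call to diff produces a type error along these lines (paraphrasing the compiler message):

f 42.0
// error: this expression was expected to have type Dual, but here has type float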

Edit: Atilim Gunes Baydin suggested another way around that issue, which is inlining f. The following works perfectly well, and allows us to both differentiate f, and use it against floats:

let inline f x = sqrt x
let f' = diff f
f 42.

Thanks for the input!

This has a couple of implications. First, if you work in a script, you need to be careful about how you send your code to the F# interactive for execution. If you process the sample code above line by line in FSI, the evaluation will fail, because f will be inferred to be float -> float. Then, you will potentially need to annotate your functions with type hints, to help inference. As an example, the following doesn’t work:

let g x = 3. * x
diff g 42.

As is, g is still inferred to be of type float -> float, because of the presence of the constant term, which is by default inferred as a float. That issue can be addressed in at least two ways – by explicitly marking 3. as a dual in g, or by annotating x as a Dual, like this:

let g x = (dual 3.) * x
let h (x:Dual) = 3. * x
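
Either version can now be differentiated; since the function is just 3 * x, the derivative should come back as 3.0 everywhere:

diff g 42. // 3.0
diff h 42. // 3.0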

That’s how far we will go on this – if you want to dive deeper, the Type Inference page discusses the topic in much greater detail.

A tiny example

So why is this interesting? As I mentioned earlier, differentiation is used heavily in numeric algorithms to identify values that minimize a function, a prime example being the gradient descent algorithm. The simplest example possible would be finding a (local) minimum of a single-argument function: starting from an arbitrary value x, we can iteratively follow the direction of steepest descent, until no significant change is observed.

Here is a quick-and-dirty implementation, using DiffSharp:

let minimize f x0 alpha epsilon =
    let rec search x =
        let fx' = diff f x
        if abs fx' < epsilon
        then x
        else
            let x = x - alpha * fx'
            search x
    search x0

Edit, 3/4/2015: fixed issue in code, using abs fx’ instead of fx’

Because DiffSharp handles the differentiation part automatically for us, with only 10 lines of code we can now pass in arbitrary functions we want to minimize and (with a few caveats…) get a local minimum, no calculus needed:

let epsilon = 0.000001

let g (x:Dual) = 3. * pown x 2 + 2. * x + 1.
minimize g 0. 0.1 epsilon |> printfn "Min of g at x = %f"

let h (x:Dual) = x + x * sin(x) + cos(x) * (3. * x  - 7.)
minimize h 0. 0.1 epsilon |> printfn "Min of h at x = %f"

> 
Min of g at x = -0.333333
Min of h at x = -0.383727

Let’s make sure this is reasonable. g is a quadratic function, which has a minimum or maximum at -b / (2a), that is, -2 / (2 × 3) ≈ -0.333 - this checks out. As for h, inspecting the function plot confirms that it has a minimum around the identified value:

[plot of the wavy function h]

Edit, 3/4/2015: changed function h to a simpler shape.

Conclusion

I have barely started to scratch the surface of DiffSharp in this post, but so far, I really, really like its promise. While I limited my examples to single-variable functions, DiffSharp supports multivariate functions, and vector operations as well. The way it uses type inference is a bit challenging at first, but seems a reasonable price to pay for the resulting magic. My next step will probably be a less tiny example, perhaps a logistic regression against realistic data. I am very curious to try out the algebra bits – and also wondering in the back of my head how to best use the library in general. For instance, how easy is it to construct a function from external data, and turn it into the appropriate types for DiffSharp to work its magic? How well does this integrate with other libraries, say, Math.NET? We’ll see!

In the meanwhile, I’d recommend checking out the project page, which happens to also be beautifully documented! And, as always, you can ping me on twitter for comments or questions.

by Mathias 11. January 2015 18:37

I had the great pleasure to speak at CodeMash this week, and, on my way back, ended up spending a couple of hours at the Atlanta airport waiting for my connecting flight back to the warmer climate of San Francisco – a perfect opportunity for some light-hearted coding fun. A couple of days earlier, I came across this really nice tweet, rendering the results of an L-system:

I ended up looking up L-systems on Wikipedia, and thought this would make for a fun coding exercise. In a nutshell, an L-system is a grammar. It starts with an alphabet of symbols, and a set of rules which govern how each symbol can be transformed into another chain of symbols. By applying these rules to a starting state (the initial axiom), one can evolve it into a succession of states, which can be seen as the growth of an organism. And by mapping each symbol to operations in a Logo/turtle-like language, each generation can then be rendered as a graphic.

So how could we go about coding this in F#? If you are impatient, you can find the final result as a gist here.

First, I started with representing the core elements of an L-System with a couple of types:

type Symbol = | Sym of char

type State = Symbol list

type Rules = Map<Symbol,State>

type LSystem = 
    { Axiom:State
      Rules:Rules }

A symbol is a char, wrapped in a single-case discriminated union, and a State is simply a list of Symbols. We define the Rules that govern the transformation of Symbols by a Map, which associates a particular Symbol with a State, and an L-System is then an Axiom (the initial State), with a collection of Rules.

Let’s illustrate this on the second example from the Wikipedia page, the Pythagoras tree. Our grammar contains 4 symbols (0, 1, [ and ]), we start with a 0, and we have 2 rules, (1 → 11) and (0 → 1[0]0). This can be encoded in a straightforward manner in our domain, like this:

let lSystem =
    { Axiom = [ Sym('0') ]
      Rules = [ Sym('1'), [ Sym('1'); Sym('1') ]
                Sym('0'), [ Sym('1'); Sym('['); Sym('0'); Sym(']'); Sym('0') ]]
              |> Map.ofList }

Growing the organism by applying the rules is fairly straightforward: given a State, we traverse the list of Symbols, look up for each of them if there is a matching rule, and perform a substitution if it is found, leaving it unchanged otherwise:

(*
Growing from the original axiom
by applying the rules
*)

let applyRules (rs:Rules) (s:Symbol) =
    match (rs.TryFind s) with
    | None -> [s]
    | Some(x) -> x

let evolve (rs:Rules) (s:State) =
    [ for sym in s do yield! (applyRules rs sym) ]

let forward (g:LSystem) =
    let init = g.Axiom
    let gen = evolve g.Rules
    init |> Seq.unfold (fun state -> Some(state, gen state))

// compute nth generation of lSystem
let generation gen lSystem =
    lSystem
    |> forward 
    |> Seq.nth gen
    |> Seq.toList

What does this give us on the Pythagoras Tree?

> lSystem |> generation 1;;
val it : Symbol list = [Sym '1'; Sym '['; Sym '0'; Sym ']'; Sym '0']
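
Applying the rules once more by hand should give 11[1[0]0]1[0]0, and the code agrees (output re-wrapped slightly for readability):

> lSystem |> generation 2;;
val it : Symbol list =
  [Sym '1'; Sym '1'; Sym '['; Sym '1'; Sym '['; Sym '0'; Sym ']'; Sym '0';
   Sym ']'; Sym '1'; Sym '['; Sym '0'; Sym ']'; Sym '0']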

Nice and crisp – that part is done. Next up, rendering. The idea here is that for each Symbol in a State, we will perform a substitution with a sequence of instructions, either a Move, drawing a line of a certain length, or a Turn of a certain Angle. We will also have a Stack, where we can Push or Pop the current position of the Turtle, so that we can, for instance, store the current position and direction on the stack with a Push, perform a couple of moves, and then reset the turtle to the stored position with a Pop. Again, that lends itself to a very natural model:

(*
Modelling the Turtle/Logo instructions
*)

type Length = | Len of float
type Angle = | Deg of float

// override operator later
let add (a1:Angle) (a2:Angle) =
    let d1 = match a1 with Deg(x) -> x
    let d2 = match a2 with Deg(x) -> x
    Deg(d1+d2)

type Inst =
    | Move of Length
    | Turn of Angle
    | Push
    | Pop

let Fwd x = Move(Len(x))
let Lft x = Turn(Deg(x))
let Rgt x = Turn(Deg(-x))
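
A quick one-liner to check the add helper in FSI:

> add (Deg 30.) (Deg 15.);;
val it : Angle = Deg 45.0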

We can now transform our L-system state into a list of instructions, and convert them into a sequence of Operations, in that case Drawing lines between 2 points:

type Pos = { X:float; Y:float; }
type Dir = { L:Length; A:Angle }

type Turtle = { Pos:Pos; Dir:Dir }
type ProgState = { Curr:Turtle; Stack:Turtle list }

let turn angle turtle = 
    let a = turtle.Dir.A |> add angle
    { turtle with Dir = { turtle.Dir with A = a } }

type Translation = Map<Symbol,Inst list>

type Ops = | Draw of Pos * Pos

let pi = System.Math.PI

let line (pos:Pos) (len:Length) (ang:Angle) =
    let l = match len with | Len(l) -> l
    let a = match ang with | Deg(a) -> (a * pi / 180.)
    { X = pos.X + l * cos a ; Y = pos.Y + l * sin a }

let execute (inst:Inst) (state:ProgState) =
    match inst with
    | Push -> None, { state with Stack = state.Curr :: state.Stack }
    | Pop -> 
        let head::tail = state.Stack // assumes more Push than Pop
        None, { state with Curr = head; Stack = tail }
    | Turn(angle) -> 
        None, { state with Curr =  state.Curr |> turn angle }
    | Move(len) -> 
        let startPoint = state.Curr.Pos
        let endPoint = line startPoint len state.Curr.Dir.A
        Some(Draw(startPoint,endPoint)), { state with Curr = { state.Curr with Pos = endPoint } }

let toTurtle (T:Translation) (xs:Symbol list) =

    let startPos = { X = 400.; Y = 400. }
    let startDir = { L = Len(0.); A = Deg(0.) }
    let init = 
        { Curr = { Pos = startPos; Dir = startDir }
          Stack = [] }
    xs 
    |> List.map (fun sym -> T.[sym]) 
    |> List.concat
    |> Seq.scan (fun (op,state) inst -> execute inst state) (None,init)
    |> Seq.map fst
    |> Seq.choose id

We simply map each Symbol to a List of instructions, transform the list of symbols into a list of instructions, and maintain at each step the current position and direction, as well as a Stack (represented as a list) of positions and directions. How does it play out on our Pythagoras Tree? First, we define the mapping from Symbols to Instructions:

let l = 1.
let T = 
    [ Sym('0'), [ Fwd l; ]
      Sym('1'), [ Fwd l; ]
      Sym('['), [ Push; Lft 45.; ]
      Sym(']'), [ Pop; Rgt 45.; ] ]
    |> Map.ofList

… and we simply send that toTurtle, which produces a list of Draw instructions:

> lSystem |> generation 1 |> toTurtle T;;
val it : seq<Ops> =
  seq
    [Draw ({X = 400.0;
            Y = 400.0;},{X = 401.0;
                         Y = 400.0;}); Draw ({X = 401.0;
                                              Y = 400.0;},{X = 401.7071068;
                                                           Y = 400.7071068;});
     Draw ({X = 401.0;
            Y = 400.0;},{X = 401.7071068;
                         Y = 399.2928932;})]

Last step – some pretty pictures. We’ll simply generate an HTML document, rendering the image using SVG, by creating one SVG line per Draw instruction:

let header = """
<!DOCTYPE html>
<html>
<body>
<svg height="800" width="800">"""

let footer = """
</svg>
</body>
</html>
"""

let toSvg (ops:Ops seq) =
    let asString (op:Ops) = 
        match op with
        | Draw(p1,p2) -> sprintf """<line x1="%f" y1="%f" x2="%f" y2="%f" style="stroke:rgb(0,0,0);stroke-width:1" />""" p1.X p1.Y p2.X p2.Y 

    [ yield header
      for op in ops -> asString op
      yield footer ]
    |> String.concat "\n"

open System.IO

let path = "C:/users/mathias/desktop/lsystem.html"
let save template = File.WriteAllText(path,template)

And we are pretty much done:

> lSystem |> generation 8 |> toTurtle T |> toSvg |> save;;
val it : unit = ()

… which produces the following graphic:

[rendered Pythagoras tree]

Pretty neat! Just for fun, I replicated the Sierpinski Triangle example as well:

let sierpinski () =

    let lSystem =
        { Axiom = [ Sym('A') ]
          Rules = [ Sym('A'), [ Sym('B'); Sym('>'); Sym('A'); Sym('>'); Sym('B') ]
                    Sym('B'), [ Sym('A'); Sym('<'); Sym('B'); Sym('<'); Sym('A') ]]
                  |> Map.ofList }

    let l = 1.
    let T = 
        [ Sym('A'), [ Fwd l; ]
          Sym('B'), [ Fwd l; ]
          Sym('>'), [ Lft 60.; ]
          Sym('<'), [ Rgt 60.; ] ]
        |> Map.ofList

    lSystem 
    |> generation 9
    |> toTurtle T
    |> toSvg 
    |> save

… which results in the following picture:

[rendered Sierpinski triangle]

That’s it for tonight! I had a lot of fun coding this (it certainly made the flight less boring), and found the idea of converting code to turtle instructions, with a stack, pretty interesting. Hope you enjoyed it, and if you end up playing with this, share your creations on Twitter and ping me at @brandewinder!

Gist for the whole code here

by Mathias 31. December 2014 10:59

Well, we are in the last hours of 2014, and I am nearly recovered from the craziness that was the F# Europa Tour 2014, so here we go – the Tour, in cold, hard facts (after all, I am a numbers guy):

  • 40 days of travelling across Europe.
  • 16 talks.
  • 5 workshops (about 50 hours total).
  • 9 countries.
  • 6991 miles (11,250 kilometers) travelled, roughly (this is straight-line city to city, so the actual number is probably a good deal larger).
  • 14 hours of bus.
  • roughly 50 hours of train.
  • roughly 28 hours of plane.
  • 12 cities visited (and spoken at!).
  • I lost track of how many gallons of beer were ingested. This is big data.
  • 500 attendees? Maybe more? See previous data point.
  • Delivered hundreds of shiny fsharp.org stickers to F# Communities across Europe. [btw, in case you didn't hear - the F# Software Foundation is now a full-fledged, legally established entity, and YOU can be a member. Check it out!]

Now for the important qualitative questions:

  • Where did I eat the best bacon? This came as a surprise to me, but I have to say, the bacon I ate in Dublin, Ireland was amazing. Twice.
  • Where does one find the best beer in Europe? This is a hard one – I had a chance to sample great beers from all over the place. I would say, Munich and its Biergarten rules, but the live beers at BuildStuff in Vilnius, Lithuania, were a very nice surprise.
  • What’s the weirdest thing I ate? This one goes to Norway and its Lutefisk, a traditional Christmas fish dish. It’s definitely a regional specialty, as in, a specialty which didn’t expand beyond a limited regional area, for good reasons. For the record, I actually enjoyed it!
  • What was the worst travelling mistake? Booking a last-minute train ticket from Paris to Aarhus, Denmark, only to realize on the train that, instead of a nice sleeping car, I would be spending 22 hours sitting in a train with no food on board.
  • Biggest scare: every person who has given a talk will tell you, relying on the internet and anything live in a presentation is a rookie mistake. This is great advice, which is why I completely ignored it. It all worked just fine, but learning that Azure had been down for a couple of hours, right before a talk at BuildStuff which 100% required a live deployment to Azure to work, did give me some cold sweat.

Would I do it again? In a heartbeat! It was a bit crazy, and definitely exhausting, but a ton of fun. All of you who helped out making this happen, from the bottom of my heart, thank you! The F# Community is absolutely fantastic, packed with energy and a good, friendly vibe, and everywhere I went felt like family. You all kept me going, so again, thank you (you know who you are)! In the meanwhile, I wish you all a happy year 2015 ahead, let’s make that one even better than 2014, and I hope to see many of you again this year! And, as always, feel free to ping me on Twitter as @brandewinder.

by Mathias Brandewinder 20. December 2014 01:55

This post is the December 20th part of the English F# Advent #fsAdvent series; make sure to also check out the Japanese series, which also packs the awesome!

As I was going around Paris the other day, I ended up in the Concorde metro station. Instead of the standard issue white tiles, this station is decorated with the French constitution, rendered as a mosaic of letters.

[Source: Wikipedia]

My mind started wandering, and by some weird association, it reminded me of Calligrammes, a collection of poems by Guillaume Apollinaire, where words are arranged on the page to depict images that complement the text itself.

[Guillaume Apollinaire, Calligramme: La Mandoline, l’œillet et le bambou]

[Source: Wikipedia]

After some more drifting, I started wondering if I could use this as an inspiration for some playful F# fun. How about taking a piece of text, an image, and fusing them into one?

There are many ways one could approach this; being rather lazy, I thought a reasonably simple direction would be to decompose the original image into dark and light blocks, and fill them with the desired text. As simple as it may sound, the task is not entirely trivial. First, we need to decide on an appropriate threshold to separate "dark" and "light" areas on the image, to get a contrast good enough to recognize the image rendered in black & white. Then, we also have to resize the image appropriately into a new grid, where the characters from the original text fit the mapped dark area as closely as possible.

Note: I don't think the warning "don't put this in production, kids" is useful, unless someone thinks there is a market for Calligramme as a Service. However, I'll say this: this is me on "vacation hacking fun" mode, so yes, there are quite probably flaws in that code. I put it up as a Gist here - flame away, or tell me how to make it better on Twitter ;)

Separating dark and light

So how could we go about splitting an image into dark and light pixels? First, we can use the color brightness from the System.Drawing namespace to determine how light the color of an individual pixel is:

open System.Drawing

let brightness (c:Color) = c.GetBrightness ()

let pixels (bmp:Bitmap) =
    seq { for x in 0 .. bmp.Width - 1 do
            for y in 0 .. bmp.Height - 1 ->
                (x,y) |> bmp.GetPixel }

We still need to decide what boundary to use to separate the image between dark and light. What we want in the end is an image which is reasonably balanced, that is, neither overwhelmingly dark nor overwhelmingly light. A simple way to enforce that is to arbitrarily constrain at least one third of the pixels to be dark, and at least one third to be light. Then, we want a boundary value that is as clear-cut as possible, for instance by finding a value with a large brightness change margin. Let's do that:

let breakpoint (bmp:Bitmap) =
    let pixelsCount = bmp.Width * bmp.Height
    let oneThird = pixelsCount / 3
    let pixs = pixels bmp
    let threshold = 
        pixs
        |> Seq.map brightness
        |> Seq.sort
        |> Seq.pairwise
        |> Seq.skip oneThird
        |> Seq.take oneThird
        |> Seq.maxBy (fun (b0,b1) -> b1 - b0)
        |> snd
    let darkPixels = 
        pixs 
        |> Seq.map brightness
        |> Seq.filter ((>) threshold)
        |> Seq.length
    (threshold,darkPixels)

We iterate over every pixel, sort them by brightness, retain only the middle third, and look for the largest brightness increase. Done - breakpoint returns both the threshold value (the lightness level which decides whether a pixel will be classified as dark or light), as well as how many pixels will be marked as dark.

Resizing

Now that we have a boundary value, and know how many pixels will be marked as dark, we need to determine the size of the grid where our text will be mapped. Ignoring for a moment rounding issues, let's figure out a reasonable size for our grid.

First, how many characters do we need? We know the number of dark pixels in the original image - and in our target image, we want the same ratio of text to white space. The text has to fill the dark portion of the grid, so the total number of cells we'll want in our final image will be roughly total chars ~ text length * (width * height) / dark pixels.

Then, what should the width of the target image be, in characters? First, we want [1] target width * target height ~ total chars. Then, ideally, the proportions of the target image should be similar to the original image, so target width / target height ~ width / height, which gives us target height ~ target width * height / width. Substituting in [1] gives us target width * target width * height / width ~ total chars, which simplifies to target width ~ sqrt (total chars * width / height).

Translating this to code, conveniently ignoring all the rounding issues, we get:

let sizeFor (bmp:Bitmap) (text:string) darkPixels =
    let width,height = bmp.Width, bmp.Height
    let pixels = width * height
    let textLength = text.Length
    let chars = textLength * pixels / darkPixels 
    let w = (chars * width / height) |> float |> sqrt |> int
    let h = (w * height) / width
    (w,h)
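
To get a feel for the formula with made-up numbers: a 100 x 100 image with 4,000 dark pixels and a 200 character text gives chars = 200 * 10,000 / 4,000 = 500, so w = sqrt (500 * 100 / 100) ≈ 22 and h = 22 * 100 / 100 = 22 – that is, 484 cells, reasonably close to the 500 we were aiming for.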

Rendering

Good, now we are about ready to get down to business. We have an original image, a threshold to determine which pixels to consider dark or light, and a "target grid" of known width and height. What we need now is to map every cell of our final grid to the original image, decide whether it should be dark or light, and if dark, write a character from our text.

Ugh. More approximation ahead. At that point, there is no chance that the cells from our target grid map the pixels from the original image one to one. What should we do? This is my Christmas vacation time, a time of rest and peace, so what we will do is be lazy again. For each cell in the target grid, we will retrieve the pixels that it overlaps on the original image, and simply average out their brightness, not even bothering with a weighted average based on their overlap surface. As other lazy people before me nicely put it, "we'll leave that as an exercise to the reader".

Anyways, here is the result: a mapping function that returns the coordinates of the pixels intersected by a cell, a reducer that averages the aforementioned pixels by brightness, and a somewhat unnecessary function that transforms the original image into a 2D array of booleans, marking where a letter should go (I mainly created it because I had a hard time keeping track of rows and columns):

let mappedPixels (bmp:Bitmap) (width,height) (x,y) =

    let wScale = float bmp.Width / float width
    let hScale = float bmp.Height / float height

    let loCol = int (wScale * float x)
    let hiCol = 
        int (wScale * float (x + 1)) - 1
        |> min (bmp.Width - 1)
    let loRow = int (hScale * float y)
    let hiRow = 
        int (hScale * float (y + 1)) - 1
        |> min (bmp.Height - 1)

    seq { for col in loCol .. hiCol do
            for row in loRow .. hiRow -> (col,row) }

let reducer (img:Bitmap) pixs =
    pixs 
    |> Seq.map img.GetPixel
    |> Seq.averageBy brightness

let simplified (bmp:Bitmap) (width,height) threshold =

    let map = mappedPixels bmp (width,height)
    let reduce = reducer bmp
    let isDark value = value < threshold

    let hasLetter = map >> reduce >> isDark

    Array2D.init width height (fun col row ->
        (col,row) |> hasLetter)

Almost there - wrap this with 2 functions: applyTo, to transform the text into a sequence of characters laid out over the grid, and rebuild, to recreate the final string:

let applyTo (bmp:Bitmap) (width,height) threshold (text:string) =
    
    let chars = text |> Seq.toList
    let image = simplified bmp (width,height) threshold
    
    let nextPosition (col,row) =
        match (col < width - 1) with 
        | true -> (col+1,row) 
        | false -> (0,row+1)
    
    (chars,(0,0)) 
    |> Seq.unfold (fun (cs,(col,row)) ->
        let next = nextPosition (col,row)
        match cs with
        | [] -> Some(' ',(cs,next))
        | c::tail ->
            if image.[col,row]
            then 
                Some(c,(tail,next))
            else Some(' ',(cs,next)))

let rebuild (width,height) (data:char seq) =
    seq { for row in 0 .. height - 1 -> 
            data 
            |> Seq.map string
            |> Seq.skip (row * width)
            |> Seq.take width
            |> Seq.toArray  
            |> (String.concat "") }
    |> (String.concat "\n")

Trying it out

Let's test this out, using the F# Software Foundation logo as an image, and the following text, from fsharp.org, as a filler:

F# is a mature, open source, cross-platform, functional-first programming language. It empowers users and organizations to tackle complex computing problems with simple, maintainable and robust code.

Run this through the grinder…

let path = @"c:/users/mathias/pictures/fsharp-logo.jpg"
let bmp = new Bitmap(path)

let text = """F# is // snipped // """

let threshold,darkPixels = breakpoint bmp
let width,height = sizeFor bmp text darkPixels

text
|> applyTo bmp (width,height) threshold    
|> rebuild (width,height)

… and we get the following:

                        
           F#           
           is           
         a matu         
        re, open        
        source, c       
      ross-platfor      
     m, fun  ctiona     
    l-firs t   progr    
   amming  l   anguag   
  e. It  emp    owers   
 users  and      organi 
 zation s to     tackle 
   compl ex    computi  
   ng pro bl  ems wit   
    h simp l e, main    
     tainab le and      
      robust code.      

Not too bad! The general shape of the logo is fairly recognizable, with some happy accidents, like for instance isolating "fun" in "functional". However, quite a bit of space has been left empty, most likely because of the multiple approximations we did along the way.

Improved resizing

Let's face it, I do have some obsessive-compulsive behaviors. As lazy as I feel during this holiday break, I can't let go of this sloppy sizing issue. We can't guarantee a perfect fit (there might simply not be one), but maybe we can do a bit better than our initial sizing guess. Let's write a mini solver, a recursive function that will iteratively attempt to improve the fit.

Given a current size and count of dark cells, if the text is too long to fit, the solver will simply expand the target grid size, adding one row or one column, picking the one that keeps the grid horizontal/vertical proportions closest to the image. If the text fits better in the new solution, keep searching, otherwise, done (similarly, reduce the size if the text is too short to fit).

For the sake of brevity, I won't include the solver code here in the post. If you are interested, you can find it in the gist here.
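
That said, to give a flavor of what such a search looks like without spoiling the gist, here is a very rough sketch – a simplified, greedy variation that just tweaks the grid one row or column at a time and keeps whichever change improves the fit. It leans on the bmp, threshold and simplified definitions from earlier, and is an illustration rather than the actual solver:

let countDark (bmp:Bitmap) threshold (width,height) =
    simplified bmp (width,height) threshold
    |> Seq.cast<bool>
    |> Seq.filter id
    |> Seq.length

let rec improve (bmp:Bitmap) threshold (text:string) (width,height) =
    // how far the current grid is from holding exactly the text
    let misfit dims = abs (countDark bmp threshold dims - text.Length)
    let current = misfit (width,height)
    // neighboring grids: one more / one less row or column
    let best =
        [ (width + 1, height); (width, height + 1);
          (width - 1, height); (width, height - 1) ]
        |> List.filter (fun (w,h) -> w > 0 && h > 0)
        |> List.minBy misfit
    if misfit best < current
    then improve bmp threshold text best
    else (width,height)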

Below are the results of the original and shiny new code, which I ran on a slightly longer bit of text.

Before:

                                  
               F#is               
              amatur              
             eopenso              
            urcecross             
           -platformfun           
          ctional-firstp          
         rogramminglangua         
        geItempowersusersa        
       ndorganiz  ationstot       
      acklecomp    lexcomput      
     ingproble ms   withsimpl     
    emaintain abl    eandrobus    
   tcodeF#ru nson     LinuxMacO   
  SXAndroid iOSWi      ndowsGPUs  
 andbrowse rsItis       freetouse 
  andisope nsourc       eunderanO 
  SI-approv edlic      enseF#isu  
   sedinawid eran     geofappli   
     cationar eas   andissuppo    
      rtedbybo th   anactive      
      opencommu    nityandin      
       dustry-le  adingcomp       
         aniesprovidingpr         
          ofessionaltool          
          s                       

… and after:

               F #               
              is am              
             atu reo             
            pens ourc            
           ecros s-pla           
          tformf unctio          
         nal-fir stprogr         
        ammingla nguageIt        
       empowers   usersand       
      organiza t   ionstota      
     cklecomp le    xcomputi     
    ngproble msw     ithsimpl    
   emaintai nabl      eandrobu   
  stcodeF# runso       nLinuxMac 
 OSXAndro  idiOS       WindowsGP 
  Usandbro wsers      Itisfreet  
   ouseandi sope     nsourceun   
    deranOSI -ap    provedlic    
     enseF#is us   edinawide     
      rangeofa p  plication      
       areasand  issupport       
        edbyboth anactive        
         opencom munitya         
          ndindu stry-l          
           eadin gcomp           
            anie spro            
             vid ing             
              pr of              
               e s               

We still have a small mismatch, but the fit is much better.

And this concludes our F# Advent post! This was a rather useless exercise, but then, the holidays are about fun rather than productivity. I had fun doing this, and hope you had some fun reading it. In the meanwhile, I wish you all a holiday period full of fun and happiness, and… see you in 2015! And, as always, you can ping me on twitter if you have comments or questions. Cheers!
