Antoine Kalmbach

The expression problem is a famous problem in programming languages.

“The Expression Problem is a new name for an old problem. The goal is to define a datatype by cases, where one can add new cases to the datatype and new functions over the datatype, without recompiling existing code, and while retaining static type safety (e.g., no casts).”

Using interfaces (as in Java) as the datatype example, the problem asks whether it is possible both to derive the interface and to add new methods to it, without having to recompile existing code and without resorting to casts.

Obviously, in an OOP language it’s easy to derive interfaces, but the problem uncovers the rigidity of the type system: you can’t modify (i.e. extend) the interface without also modifying all the classes that implement it.

Conversely, in functional programming languages, adding new methods operating on the interface is easy. Consider the canonical OCaml example:

type shape = Circle of float | Rectangle of float * float

let area shp = match shp with
    Circle radius -> 3.14159 *. radius *. radius
  | Rectangle (width, height) -> width *. height
  
let vertices shp = match shp with
    Circle _ -> infinity
  | Rectangle (_, _) -> 4.

So in FP, you could create a function called volume that computes the volume of the existing shapes, and you needn’t touch the above code. However, as soon as you do that, you realize you’ve made a silly mistake: the shapes are flat, so their volume is zero. Quickly, you realize you need a three-dimensional Cube shape.

let volume shp = match shp with
    Circle _ -> 0.
  | Rectangle _ -> 0.
  | Cube a -> a *. a *. a          (* error: Cube isn't defined *)

Oops!

Here’s the rub: to get Cube working, you’ll have to modify the existing code: the definition of shape, plus both functions area and vertices.

In OOP, this isn’t the case: you can just derive the IShape interface in a new class and be done with it. The problem arises when you add the volume function, because you need to modify IShape, and thus every class that derives it.
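To make the OOP side concrete while sticking to OCaml, here is a sketch using its object system as a stand-in for Java-style interfaces; the names (ishape, circle, square) are illustrative only. Adding a new shape class is trivial, but adding a volume method to the class type would force every existing class to change.

(* Sketch: a Java-style interface rendered with OCaml's object system. *)
class type ishape = object
  method area : float
  method vertices : float
end

class circle (r : float) : ishape = object
  method area = 3.14159 *. r *. r
  method vertices = infinity
end

class rectangle (w : float) (h : float) : ishape = object
  method area = w *. h
  method vertices = 4.
end

(* A new case: no existing code needs to be touched. *)
class square (a : float) : ishape = object
  method area = a *. a
  method vertices = 4.
end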

In FP, adding new functions over the datatype is easy, but adding new cases to the datatype is tricky, because you have to modify existing functions. In OOP, adding new functions over the datatype is hard, because you need to modify each implementation, while adding new cases is easy, since all you need to do is derive the interface. The FP world has invented a multitude of ways to deal with this problem, ranging from type classes and traits to protocols; the OOP world usually solves it with design patterns or open classes. Ruby’s refinements can be used for this purpose as well.

Polymorphic variants

That’s a quick introduction to the problem. I think the expression problem is a litmus test of sorts for programming languages: the expressive power of a language can be measured by the quality of the solutions it offers to the expression problem.

The expression problem is theoretically solvable in any language, but to varying degrees of elegance. In Java one must resort to the visitor pattern, which to my mind is the most inelegant way of going about it. I would rate the solutions on a spectrum: at one end sits the visitor pattern as the most basic solution, and at the other end something like polymorphic variants and type classes. Multimethods and protocols fall somewhere in between.

When you compare OCaml’s polymorphic variants with Haskell’s type classes, there’s a marked difference in brevity: polymorphic variants are more succinct than type classes, but they cannot provide the same level of type safety.

type shape = [ `Circle of float | `Rectangle of float * float ]

let area shp = match shp with
    `Circle radius -> radius *. radius *. 3.14159
  | `Rectangle (w, h) -> w *. h

let vertices shp = match shp with
    `Circle _ -> infinity
  | `Rectangle _ -> 4.

Not too different from the earlier declaration: the type is surrounded with brackets and the constructors are preceded with backticks. Recreating the volume function is easy.

let volume shp = match shp with
    `Circle _ -> 0.
  | `Rectangle _ -> 0.
  | `Cube a -> a *. a *. a

So now I’ve extended the shape type with another case, Cube, and I haven’t touched the vertices and area functions. The volume function can be written even more succinctly:

let short_volume shp = match shp with
    (* no volume in two dimensions! *)
    #shape -> 0.
  | `Cube a -> a *. a *. a

It is also possible to constrain the polymorphic variants:

let flatten shp = match shp with
    #shape as x -> x
  | `Cube a -> `Rectangle (a, a)

The type of this function is [< `Circle of float | `Cube of float | `Rectangle of float * float ] -> [> shape ]. The [< A | B] means a closed type: it can only be A or B, but nothing else, and [> Foo] means “Foo or something else”. So the flatten function accepts Circle, Rectangle or Cube and returns a shape (or possibly something else). Trying to run flatten (`Sphere 3) produces a type error:

# flatten (`Sphere 3);;
Characters 8-19:
  flatten (`Sphere 3);;
          ^^^^^^^^^^^
Error: This expression has type [> `Sphere of int ]
       but an expression was expected of type
         [< `Circle of float
          | `Cube of float
          | `Rectangle of float * float ]
       The second variant type does not allow tag(s) `Sphere

However, the following code compiles:

type polytope = [ shape | `Cube of float | `Octahedron ]

let frobnicate pt =
  let flattened = flatten pt in
  match flattened with
    #shape -> "Already flaaat!"
  | `Octagon -> "Eight coorneeeeerss"

This compiles, although we never told the compiler that flatten does not return Octagon. There are two ways to fix this. The first is to explicitly annotate pt to be of type polytope, which produces this error:

Error: This expression has type polytope
       but an expression was expected of type
         [< `Circle of float | `Cube of float | `Rectangle of float * float ]
       The second variant type does not allow tag(s) `Octahedron

The second is to further constrain the type with an annotation. We can make sure that the flatten function returns only flat shapes:

let safe_flatten shp : [< shape] = match shp with
    #shape as x -> x
  | `Cube a -> `Rectangle (a, a)
  | `Sphere r -> `Circle r

If frobnicate uses the constrained function instead, its stray `Octagon branch produces the error:

Error: This pattern matches values of type [? `Octagon ]
       but a pattern was expected which matches values of type shape
       The second variant type does not allow tag(s) `Octagon

Not a silver bullet

Unfortunately, polymorphic variants are problematic: you quickly reach an absurd level of complexity and are forced to use annotations or subtyping to ensure maximal type safety. So although polymorphic variants are nice, and they do let us solve the expression problem, they’re an unsteady compromise between type safety and brevity. You can certainly build elegant abstractions with them, but they get unwieldy quickly. They aren’t as efficient as regular variants, either.

So what are the options? In OCaml 4.02, you can use extensible variant types:

type boring_shape = ..
type boring_shape += Circle of float | Square of float

let boring_area shp = match shp with
    Circle r -> r *. r *. 3.14159
  | Square a -> a *. a
  | _ -> 0.

type boring_shape += Rectangle of float * float
let radical_area shp = match shp with
    Circle _ as c -> boring_area c
  | Square _ as s -> boring_area s
  | Rectangle (w, h) -> w *. h
  | _ -> 0.

An extensible variant is defined using .., and extension is done with the += operator. The caveat is that you must always handle a default _ case in pattern matching, because the type stays open. Extensible variants are another neat trick for solving the expression problem.
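As a quick sketch of how this solves the problem, a hypothetical later module could add a Cube case and a volume function on top of the definitions above without touching them (boring_volume is my name, chosen to match the naming above):

(* Extending boring_shape with a new case and a new function,
   leaving the earlier definitions untouched. *)
type boring_shape += Cube of float

let boring_volume shp = match shp with
    Cube a -> a *. a *. a
  | _ -> 0.      (* flat (or unknown) shapes have no volume *)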

A measure of expressive power

The expression problem is a great litmus test for the expressive power of a programming language. The actual measurement can be either the brevity of the code or its type safety. The solutions range from the clumsy visitor pattern in Java to polymorphic and extensible variants in OCaml and type classes in Haskell. Clojure and Elixir have protocols, which are quite nice but not as type-safe, since both are dynamically typed languages. What is more, since the expression problem is also about static type safety, strictly speaking the problem doesn’t even apply in a dynamic language. Any Lisper knows that Lisps are super expressive anyway.


Now that the design of the site is finished, I can finally focus on the essentials.

I’ve decided that this year I will be writing a bit more, here, and elsewhere. To that end, when it comes to this site, I’ve had to perform a simple but challenging task: lowering my standards.

Last year, I did not publish anything because I had absurd standards for content. In my mind, every blog post had to be a thoroughly researched and carefully argued piece, capable of standing the test of time.

This was a monumental mistake.

Researching something thoroughly requires an extraordinary amount of time, and writing opinionated articles that can stand the test of time requires an extraordinary amount of foresight — neither of which I yet have. I have already managed to delete one post which contained opinions I no longer agreed with. I thought its content was rubbish and I was a moron for publishing it, so my only recourse was to delete it, instead of learning from it.

My desire for more content stems from the process of gradual improvement: the more you write the better you get. I cannot do this unless I start from the very basics. I could, of course, simply write guides on how to do XYZ with $THING, but I want to tell stories, not write recipes. This doesn’t mean there won’t be any guides, however!

Another reason for wanting to write more stems from the simple funny fact that somebody reads this. Besides Google Analytics telling me so, a few weeks ago I even received a pull request for typo fixes.

So I know there’s at least one guy who actually reads every word. Disclosure: according to GA, most of the visits are “accidental redirects” (huh?), and actual, longer visits aren’t common, but there were enough for me to extrapolate that there were, at the very least, two readers.

So, that was the background. And now comes the disclaimer, of sorts.

This site is a blog. As such, the writings here are, first and foremost, opinion pieces, not research articles, and opinions change. You may find me advocating for stricter type systems one day and for looser ones the next. I will only guarantee that, at the time of writing, I will argue my points to the best of my abilities.

Longer, more in-depth stories, if ever completed, will be available under the articles category. This is to mark a distinction: to qualify as an article, the writing will a) contain properly researched content and references, and b) be reviewed by somebody else. I’m currently working on something that may one day qualify.

You’re welcome to snoop into the drafts folder of the GitHub repository of the site, but be warned, that stuff is obviously incomplete.

To be continued.


Software development tools are in a state of flux. There are two competing directions towards which static analysis tools—like linters and type checkers—are heading.

The traditional direction is to operate in a batch model. Fire up, perform analysis, report results, and die. This is a proven method. Batch-oriented software has been around for ages, and it works really well if the data you’re working with isn’t large.

The new direction is an online model: an analysis tool starts, calculates its data, reports results, and then stays on, monitoring for changes. When changes occur, the program analyzes them and recomputes the effects. This execution model is efficient: incremental updates are easier to calculate than starting from scratch. This approach is arguably more modern [1], since we’re leveraging the full potential of a client-server model.

The obvious caveat is that such a process will eat up memory. These days this is becoming less and less of a problem, since a gigabyte of desktop-quality DDR3 memory costs about US$5.

However, if the workload is large, say 100k-1M lines, it actually makes more sense to compute an initial “model” of the system once, and then incrementally update that model as the system changes.

My line of reasoning was inspired by this comment on Hacker News:

Over and over I see reports of iteration speed being critical to real-world projects, and that’s certainly my experience.

But it’s something I rarely see featured in tool design. So many of our tools are still batch oriented: they start from scratch, read everything, process everything, and then exit. That made sense in an era where RAM and CPU were expensive. But now that they’re cheap, I want tools that I start when I arrive in the morning and kill when I go home. In between, they should work hard to make my life easy.

If I’m going to use a type checker, I want it to load the AST once, continuously monitor for changes, and be optimized for incremental changes. Ditto compilers, syntax checkers, linters, test runners, debugging tools. Everything that matters to me in my core feedback loop of writing a bit of code and seeing that it works.

For 4% of my annual salary, I can get a machine with 128GB of RAM and 8 3.1 GHz cores. Please god somebody build tools that take advantage of that.

Arguably, the batch mode made sense in the past, when resources weren’t as plentiful as they are now. Today, we can afford a type checker that sits in the background eating up half a gigabyte of memory, and most of us won’t even blink at that, as long as it helps us write better code and runs quickly.

To this end, one could call such software start-once software [2], which

  1. Is launched once, via some mechanism, and given a target (e.g. a codebase) to monitor for changes
  2. Computes an initial model, the analysis, of the target
  3. Whenever changes occur, instantly recomputes a new model and informs the user of the effects of the changes.

In this case, the target could be a collection of source files (a “project”), the model the compilation errors present in those files, and the reporting a way of communicating those errors to the user.
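To make this concrete, here is a minimal sketch in OCaml of such a loop. It polls file modification times instead of using a real notification API, and analyse, update and report are hypothetical placeholders for the actual analysis; it needs the unix library.

(* A sketch of the start-once loop. [analyse] builds the initial model,
   [update] folds a list of changed files into an existing model, and
   [report] communicates the results to the user. *)
let watch ~files ~analyse ~update ~report =
  let mtime f = (Unix.stat f).Unix.st_mtime in
  let stamps = Hashtbl.create 16 in
  List.iter (fun f -> Hashtbl.replace stamps f (mtime f)) files;
  (* compute the initial model once, up front *)
  let model = ref (analyse files) in
  report !model;
  (* stay resident, recomputing incrementally whenever files change *)
  while true do
    Unix.sleepf 1.0;
    let changed =
      List.filter
        (fun f ->
           let t = mtime f in
           let stale = t > Hashtbl.find stamps f in
           if stale then Hashtbl.replace stamps f t;
           stale)
        files
    in
    if changed <> [] then begin
      model := update !model changed;
      report !model
    end
  done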

The basic idea can be taken from Facebook’s Flow, which does exactly this: when you launch it, it boots up a server that keeps the state of the AST in memory. The server runs the analysis and the client reports the results. There’s an initial up-front cost to all of this from starting the server, but subsequent calls are fast.

There are some pseudo-implementations of this paradigm: you can have a batch process that is run every time the target changes, and then do effectively what is done in step three above. This is close to what I described, but the difference is that the model is lost in between batch runs.

In fact, file-system-notification-based batch processes remind [3] me of the Chinese room experiment: such tools don’t really have a persistent understanding of the model, but thanks to a simultaneously crude and brilliant approach, we get the subtle impression that such a persistent, evolving understanding actually exists.

In start-once software, the goal is to always keep the model in working memory and update it incrementally. Such a program can react quickly to massive codebase changes. Naturally, speed comes at the cost of memory, but as mentioned in the quote, if it makes development faster, I think this is a perfectly justifiable cost.

This line of thinking doesn’t apply only to static analysis tools; it can work with any streaming data. Memory is cheap, but speed isn’t always. A good example is a log processor that computes the state of some system based on the content of some logs. A start-once processor would continuously monitor the log inputs and update its model of the system. If it has to churn through a lot of logs at the start, it may have an initial delay, but because the model is persistent, subsequent changes can be computed quickly.
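A rough sketch of that idea in OCaml, again using polling rather than real file notifications; step is a hypothetical function that folds one log line into the model:

(* Sketch of a start-once log processor: only the bytes appended since
   the last pass are read and folded into the in-memory model, so the
   model is never rebuilt from scratch. Requires the unix library; a
   real implementation would also buffer partial trailing lines. *)
let follow_log ~path ~init ~step =
  let model = ref init in
  let offset = ref 0 in
  let rec loop () =
    let len = (Unix.stat path).Unix.st_size in
    if len > !offset then begin
      let ic = open_in_bin path in
      seek_in ic !offset;
      let chunk = really_input_string ic (len - !offset) in
      close_in ic;
      offset := len;
      String.split_on_char '\n' chunk
      |> List.iter (fun line -> if line <> "" then model := step !model line)
    end;
    Unix.sleepf 0.5;
    loop ()
  in
  loop ()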

Storing the model can be done in two ways: in memory, or in a fast database if RAM becomes a limitation (will it?). After all, databases exist largely because of the prohibitively high cost of ephemeral memory compared to the cost of non-volatile storage. Traditionally, that is!

Previously, because of the cost of ephemeral memory, we had to invent a cheaper form of storage. Now that memory is cheap, this isn’t so much of a constraint anymore.

When it comes to fault tolerance, precautions can be taken. The model could be stored as a persistent image (raw memory, or an accurate representation thereof) or in a database. Once the program recovers, it restores the model immediately, reducing the boot time.
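In OCaml, for instance, a snapshot of the in-memory model could be taken with the standard library’s Marshal module; a minimal sketch, assuming the model contains no closures or custom blocks:

(* Write the current model to disk so a restarted process can restore
   it instead of recomputing it from scratch. *)
let save_model path model =
  let oc = open_out_bin path in
  Marshal.to_channel oc model [];
  close_out oc

let load_model path =
  if Sys.file_exists path then begin
    let ic = open_in_bin path in
    let model = Marshal.from_channel ic in
    close_in ic;
    Some model
  end else
    None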

This model can also be extended to web servers: instead of recompiling everything at every request (hello PHP), one could compile once and then compile changes incrementally. This idea isn’t new: hot swapping has been around for decades, and its advantages are obvious. Heavy-duty services that take a while to reload and restore benefit massively from hot swapping. This is routine in the JVM, Erlang and Lisp worlds.

Instead of shutting down your engine to upgrade it, you simply replace the parts to be upgraded with new ones.

Extending this to static analysis tools isn’t a massive step. If the philosophy works with server programs, I see no reason why it couldn’t work with analysis tools. At the cost of ephemeral resources like memory, I want static analysis tools that can handle large codebases and compute any change, big or small, in a fraction of a second.

I hope more tools will adopt this model. We certainly do have the resources now.

Addendum: examples of start-once static analysis tools

Flow

A static analysis tool for JavaScript by Facebook, written in OCaml. Upon launch, it starts a server that runs initial analysis and monitors for changes.

Hack

A PHP-inspired gradually typed programming language for the HHVM virtual machine. It runs a type checker in the background in the same way as Flow does. Also in OCaml!

gocode

Auto-completion service for Go. It uses a client-server model where completion requests are sent to the completion server via a JSON-based RPC. By contrast, every other Go-related static analysis tool (there are lots) uses the batch approach. That’s fine, since Go seems to be vehemently opposed to anything modern and advanced, and sometimes justifiably so. In this regard gocode is a rare gem in the Go tooling ecosystem.

  1. Yes, I know REPLs have been around since the eighties, but this isn’t exactly the same thing. 

  2. Facebook uses the term “online” when describing Flow.

  3. They aren’t exactly the same thing. Since I’m talking about computer programs performing some kind of data analysis, and mentioning the Chinese room experiment, I realize that I’m opening a massive can of worms; the comparison is a mere simile here. The users of continuous (i.e. inotify-based) batch analysis tools are on the observing end of the Chinese room experiment.