# Half stack web frameworks

In my previous post, I discussed how web development has become weird. In this post, I will discuss what exactly makes it so weird. I will also present an alternative to JavaScript-based SPAs: applications that look and behave like SPAs, yet underneath are built with standard full-stack frameworks. They can leverage modern JavaScript libraries like React and compilers like Babel while avoiding the confusing tooling ecosystem, providing a rich and responsive user experience, all the while retaining a pleasant developer experience.

## What exactly is wrong with the tooling ecosystem?

I think, largely, the reason web development has become weird is that front-end development cannot figure itself out. We waste effort and time building ever more elaborate abstractions that fundamentally exist only because of an unhappy accident: the web is the only cross-platform application container. It is also a very accessible medium: to create a web application, fundamentally, one only needs to present the right kind of mark-up to a browser that renders it.

Let’s stop here. Just because the web became what it is by accident, doesn’t make it a bad thing in itself. Everybody loves platform independence. Everybody loves accessibility. The web is easy to develop for and it can reach almost everybody. This is a reality we have to deal with, a reality in which web development is (a) popular, (b) ubiquitous and (c) easy.

The combination of those properties creates an interesting melting pot of rapidly evolving technologies. Rapid progress is a nice thing in itself, but a bad thing for an ecosystem when it evolves blindly. Web development doesn't evolve blindly; rather, it is myopic.

## Progress, progress, progress!

To put this into context, we must understand that currently, most software is disposable. Because software is disposable, we eagerly toss a half-functioning solution into the bin and rewrite it, rather than taking it apart and rebuilding a better version. This leads to programs being rewritten over and over, sometimes doing things differently, but most of the time it's just the same thing under a different coat of paint.

But I digress; that is more a problem with software development in general. Let's review a more concrete example: the JavaScript tooling ecosystem. To develop a front-end in JS, you need three different kinds of tools:

• A package manager - npm, bower or yarn
• A module bundler - webpack, rollup or browserify
• A task runner - gulp, grunt or brunch

Each of these segments works completely differently. So to get started, that's three different tooling systems you have to learn. Better yet, each individual tool within each segment is unique in its configuration syntax: if you learn how to configure Grunt, you will have to learn Gulp and Brunch from scratch. Joy.

Yeah, yeah, I get it. Bower was cool because it built flattened trees when npm didn’t. Gulp had a nicer configuration syntax, and Brunch was easy to get started with. Yarn is more secure and more reliable than npm. Webpack can inline your CSS and images and is more configurable than Browserify.

At this point, you may suspect that I desire a world in which there is but one alternative for every task. This is not the case. I only ask for restraint: if there are no fundamental ideological differences, and no personal incompatibilities between the developing organizations, is there any valid argument for building your own version of a tool instead of contributing to an existing one?

I don't think there is, at this scale — obviously, a world with just one kind of tool or library for one thing is stupid, but the sweet spot definitely does not lie at seven.

So if the answer is no, does that mean JavaScript developers are so strange that they cannot get their heads together and agree on something? Do they really think people enjoy keeping up with the Joneses all the time and learning a new tool every year?

When it comes to the first question, remember Joyent and the io.js schism. Oops. As to the second, I doubt it. Still, this is what we have to live with. A guide for building a modern JS front-end app consists of twelve distinct steps, all of them quite elaborate. I applaud the author for the gargantuan effort in that tutorial: it’s the best I’ve seen so far. But seriously, take a look at it! What the fuck?! I could have just rewritten my previous post with a link to that guide and rested my case!

I remember a book about Windows programming in C, and parts of this guide are arcane enough to evoke memories of it. I think one could enumerate the type system of Scala in fewer steps. Or explain how to write a Scheme interpreter.

The usability of the tooling ecosystem is absolutely disgraceful. No other developer segment has this many hoops to jump through and nobody else has to learn so many different tools just to get a simple web application running.

Why do we put up with this? Why isn’t any effort being put into simplifying the tooling stack, instead of making it more elaborate, powerful, and verbose? Consider webpack. It is a powerful utility that is supposed to combine all your assets — that is, code, CSS, images — into a single module that is used in your application. This is a powerful thing. The only problem is that its configuration is hell. I work with SBT every day, and my goodness, even SBT is easier to configure than Webpack. Ask any Scala developer what it means to say that. You will get funny looks. Even Java folks will consider this crazy, although, in fairness, they’ve moved into the post-framework age, and consider us mortals rather quaint.

## SPA development is more than just tools

The problems don't stop here. A SPA must effectively handle client state entirely in the browser, though in isomorphic ("universal") SPAs part of the rendering and client state is processed on the server. This requires the use of architectural patterns and libraries like Redux and React Router.

These libraries are nice and intelligent, but I feel they are a wasted abstraction. Using the trick below, I can create React apps that approximate the performance of a real SPA without having to rely on these complicated architectural patterns.

Caveat lector: this is largely a matter of taste. If you really like Redux and React Router, by all means use them, but I find their usability sub-par compared to the MVC architecture of any full-stack framework. The architectural pattern — Flux — is a message-based event loop. The views generate user actions (button clicks) that are dispatched to stores (state containers), which update themselves (increment a number) and then deliver state changes (an incremented number) to the views, which re-render themselves. If a request is sent to the server, the interaction must be split into two parts: first, a button click is registered and its effect is rendered; second, a request is sent to the back-end, and when it completes, an action describing a completed request is sent to the message dispatcher. So any interaction with the back-end requires two actions. Sounds complicated? Yeah, this is why I prefer a dumb MVC architecture (or Relay).
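To make that two-action round trip concrete, here is a minimal sketch of a Flux-style dispatcher and store in Ruby. The Dispatcher and CounterStore names are illustrative and not any real library's API; this only demonstrates why one server interaction costs two dispatched actions.

```ruby
# Minimal Flux-style event loop: views dispatch actions,
# a store updates itself in response and exposes the new state.
class Dispatcher
  def initialize
    @handlers = []
  end

  def register(&handler)
    @handlers << handler
  end

  def dispatch(action)
    @handlers.each { |h| h.call(action) }
  end
end

class CounterStore
  attr_reader :count, :pending

  def initialize(dispatcher)
    @count = 0
    @pending = false
    dispatcher.register do |action|
      case action[:type]
      when :increment_requested
        @pending = true            # action 1: the click happened
      when :increment_completed    # action 2: the server replied
        @pending = false
        @count = action[:count]
      end
    end
  end
end

dispatcher = Dispatcher.new
store = CounterStore.new(dispatcher)

# First action: register the button click, render a "pending" state.
dispatcher.dispatch(type: :increment_requested)
# ...the XHR to the back-end would happen here...
# Second action: the back-end responds with the new state.
dispatcher.dispatch(type: :increment_completed, count: 1)

store.count    # => 1
store.pending  # => false
```

Two dispatches, two renders, for one button press: this is the bookkeeping the trick below avoids.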

## In summary

So, to put these arguments into a more cogent form, I'll summarize them below.

### 1. Lack of emphasis on usability, a myopic focus on adding features.

Why doesn't anyone integrate dependency management, module bundling and task running into the same program? Why do we have to use three different programs that get replaced every year? Tool "monoliths" like SBT may be ugly in parts, but they can do package management, compilation, debugging and testing — even if its DSL is garish and confusing, once you're familiar with it, you don't have to master six other horrifying DSLs. Just one.

### 2. Chasing novelty with little care about its impact on maintainability.

Babel lets us write JS in eleventy different dialects. While that is a cool thing in itself, it is a horror show for developers. You ask, who wouldn't want to use await, or ES6 classes? Well, how about the person who doesn't want to learn how to use Babel?

With Babel, you can write in any version of JavaScript you want, since it all gets compiled down to ES5 anyway. This is great for building your flavor-of-the-month hack, but it's also a terrific way of building unmaintainable software. For this zany hack to work, you need transpilers that translate your modern code into old code. Requiring that extra tool is too high a price to pay for some fancy language features.

### 3. Snubbing full-stack frameworks for their want of novelty, although they generally feature exemplary usability.

Clojure developers have found a way of eschewing frameworks in favor of composable libraries. For some reason, everybody else is really bad at this, so we build frameworks, i.e., sets of libraries that govern the design of your program in a certain way. Monolithic frameworks like Rails or Django are fundamentally dated in places — though this is easily fixed — but they are usable. Setting up a functional application with these takes a few minutes, and it just works.

## A new direction: renovate, not rewrite

In my opinion, front-end development can be done in an alternate, saner way. It doesn’t mean going back to the stone age of Apache or Rails with ActiveRecord. Rather, it means refurbishing these old, battle-tested technologies with modern components without tossing the whole chassis into the bin.

In other words, there is an alternative to the current JavaScript SPA horror show. As an example, consider the following technologies:

1. A REST API built in a scalable and performant language

Examples: Scala, Haskell, Go, Clojure, Java, Rust, OCaml, Elixir

This gives us a clear advantage when scaling and deploying our application. Data access is made opaque and is in no way tied to the front-end - which is ultimately just presentation and some client state. The language needs the following:

• A stable library ecosystem, especially for data access, e.g., database drivers
• A functioning web server and associated libraries

With these properties, you should be quite comfortable in your back-end development.

2. Client state, presentation and back-end communication handled using a monolithic framework

Examples: Ruby on Rails, Django, Pyramid, MeteorJS, Udash, Play

Rails may be dated in some parts — coupling your front-end with data access is one thing — but as an infrastructure it is functional, mature, easy to understand and stable. The Ruby ecosystem is large and well documented; even the secondary documentation (StackOverflow etc.) is abundant.

3. A wrapper that turns ordinary HTTP page requests into XHRs

Examples: Turbolinks (for Ruby on Rails and Django)

Turbolinks is perhaps a hack, but a clever one: any HTTP request that would normally cause a page reload, like a link click or a form submission, is converted into an XHR. Then, the page redraws itself by swapping in the <body> element from the returned response.

This makes a Turbolinks app a "pseudo-SPA": it reroutes ordinary page requests (links, form submissions) as XHRs, then merges the <head> element from the new page and swaps the <body> element. By using a gem like react-rails you can combine this with React; however, Turbolinks does not use React's virtual DOM when redrawing the body content. It only mounts and unmounts the components when the page swaps, retaining the actual DOM bindings.
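To illustrate the mechanism — and only the mechanism, since real Turbolinks is JavaScript operating on the live DOM — here is a toy Ruby model of a visit, with pages represented as plain hashes:

```ruby
# Toy model of a Turbolinks visit: fetch the next page, merge its
# <head> into the current one, and replace the <body> wholesale.
# Real Turbolinks does this on the DOM; hashes stand in for documents.
def turbolinks_visit(current, response)
  {
    head: current[:head].merge(response[:head]),  # merge new <head> entries
    body: response[:body]                         # swap <body> entirely
  }
end

page     = { head: { title: "Foos", "app.js" => :loaded }, body: "<ul>...</ul>" }
response = { head: { title: "New Foo" }, body: "<ul><li>new</li></ul>" }

page = turbolinks_visit(page, response)
page[:body]            # => "<ul><li>new</li></ul>"
page[:head]["app.js"]  # => :loaded
```

The point of the merge is that already-loaded assets in the head survive the visit, which is what makes the swap feel like an in-page update rather than a reload.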

Just because these frameworks aren't making headlines doesn't mean they are stuck in the stone age. After years of maturation, these monolithic frameworks still stand unparalleled in one aspect: usability. They may not lend themselves to universal applications, but they're still capable of absorbing new technologies like websockets and GraphQL.

Some parts of them are stuck in the past, the most striking being the combination of data access, data control and presentation in the same program. This is easily fixed: make your Rails controllers call an external, opaque service for their data. The job of the full-stack framework is then reduced to managing client state and data presentation, which naturally go together.
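As a sketch of that split, here is a hypothetical service client (the FooService name and its endpoint are made up for illustration; the HTTP call is injected so the controller never learns where the data actually lives, and so the example stays self-contained):

```ruby
require "json"

# Hypothetical client for an external REST service. The transport is
# injected, e.g. ->(path) { Net::HTTP.get(URI("http://api.internal#{path}")) },
# so the controller only ever sees plain Ruby data.
class FooService
  def initialize(fetcher)
    @fetcher = fetcher
  end

  def all_foos
    JSON.parse(@fetcher.call("/foos"))
  end
end

# In a Rails controller this would read: @foos = FooService.new(http).all_foos
fake_http = ->(_path) { '[{"id":1,"bar":"baz"}]' }  # stands in for the real API
service = FooService.new(fake_http)
service.all_foos  # => [{"id"=>1, "bar"=>"baz"}]
```

The controller stays a thin pass-through: it asks the service for data and hands it to the view, never touching a database.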

So, what can be done? Here’s an example.

## A REST-backed Rails app with React as the templating engine

react-rails is a Rails gem that gives us React components in the asset pipeline, supporting server-side rendering and Turbolinks (caveat: see above).

Under the hood, when rendering on the server, react-rails uses Babel and ExecJS to prerender the content. Better yet, your content is still rendered by a simple Rails controller like the following.

The controller lives in app/controllers/foos_controller.rb:

```ruby
class FoosController < ApplicationController
  # maps to GET /foos (on the front-end)
  def index
    # incurs a GET /foos on the back-end; react_component
    # serializes the collection to JSON for the component
    @foos = Foo.all
  end

  # maps to POST /foos (on the front-end)
  def create
    # this is a POST /foos on the back-end
    Foo.create(:bar => params['bar'])

    # Turbolinks turns this into an XHR
    redirect_to '/foos'
  end
end
```
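For those URL mappings to exist, the Rails router has to declare them. A minimal sketch of the assumed config/routes.rb:

```ruby
Rails.application.routes.draw do
  # GET /foos  -> FoosController#index
  # POST /foos -> FoosController#create
  resources :foos, only: [:index, :create]
end
```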


The model is just a Her model; Her is an ORM that uses a REST API, which you can customize. In app/models/foo.rb:

```ruby
class Foo < Her::Model
  attributes :bar, :id
end
```


Now Foo.find(1) maps to GET /foos/1 on the back-end, and so forth.
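For this to work, Her needs to be pointed at the back-end once, typically in an initializer. A minimal sketch following Her's documented setup — the localhost URL is an assumption matching the Sinatra example described later:

```ruby
# config/initializers/her.rb — point Her at the REST back-end.
Her::API.setup url: "http://localhost:9292" do |c|
  c.use Faraday::Request::UrlEncoded       # encode request bodies
  c.use Her::Middleware::DefaultParseJSON  # parse JSON responses
  c.use Faraday::Adapter::NetHttp          # plain Net::HTTP transport
end
```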

The view is generated by app/views/foos/index.html.erb:

```erb
<%=
  react_component(
    'Foos',
    { foos: @foos, token: form_authenticity_token, action: url_for(action: 'create') },
    { prerender: true }
  )
%>
```


This maps to a React component app/assets/javascripts/components/foos.es6.jsx:

```jsx
class Foos extends React.Component {
  render() {
    return (
      <div>
        <ul>
          {this.props.foos.map((foo) => {
            return <li key={foo.id}>{foo.bar}</li>
          })}
        </ul>
        {/* data-remote is a Rails trick that makes the form submit an XHR */}
        <form action={this.props.action} method="POST" data-remote="true">
          <input type="hidden" name="authenticity_token" value={this.props.token} />
          <input type="text" name="bar" defaultValue="Blah blah" />
        </form>
      </div>
    )
  }
}
```


Try doing that with less code in any JS app! The controller looks like any standard Rails controller. In fact, it is exactly like one, yet the magic of React & Turbolinks lets us wrap this into a SPA-like experience.

Combining these elements, we get an application that can reach nine-tenths of the performance and responsiveness of a 100% JavaScript SPA, while simultaneously avoiding the messy tooling ecosystem.

• A total absence of extraneous tooling: the framework has these built in. No need to install Webpack or Babel separately; they come in as just more gems in your dependency list.

• A boring, but familiar, framework that handles routing, message dispatch and API integration for us. Routing and state management are the worst parts of SPA development. Now our state is just another Rails controller concern.

• Responsiveness close enough to that of a real SPA. It will never match a real SPA in speed, since the requests map to Rails controllers, but it will be extremely pleasant to develop in.

• A scalable back-end without any data access logic in the front-end (the usual front-end back-end split), the framework handles only UI state and presentation logic.

There are some obvious compromises in such a solution, which are both good and bad.

The biggest compromise is in performance, which is due to the following:

• It does not use React's virtual DOM to its full power: Turbolinks just swaps the body element. This could be improved by making Turbolinks redraw through the React virtual DOM. That is the bad part. The good part is that we don't have to create XHRs ourselves in React components.

• It forces the user to use JSX and throws ERB/HAML in the bin. The example application could indeed be built without JSX — just don't use react-rails — but I find JSX a nicer templating syntax than ERB.

But it would be naïve to assume this brings us the whole of React. It brings us the templating syntax and the binding mechanisms, but since Turbolinks effectively re-renders the whole HTML page, it doesn't fully leverage React's server-side rendering.

So, overall, the good part of this compromise is that we get to use JSX, which has a nicer, functional approach compared to ERB, but the bad part is that we don’t harness the full power of React.

• Turbolinks effectively reverses React server-side rendering. Whereas in a normal SPA the server-side render is the "base" template, here a new server-side render is produced on every interaction. In a normal SPA, one just updates the DOM with new state — i.e., props — not with a new DOM.

There is a solution: skip Turbolinks and make XHRs from React components. A simple version in a controller:

```ruby
def create
  @f = Foo.create(:bar => params[:bar])
  if request.xhr?
    # send a JSON of all the Foos
    render :json => Foo.all.to_json
  else
    # send HTML with a React component
    redirect_to action: 'index'
  end
end
```


If the request is made from a component, the component can now use setState (or a store) to update its state. In this paradigm, Rails acts as the state store.

A better approach would be to make the Rails app support GraphQL and use Relay to communicate with it; see below.

Given the simplicity of the above application, I think it's fair to say that these compromises are warranted. If the actual set-up were any more complicated, I wouldn't be so certain. But for this simplicity, we must trade some performance.

### A functioning example

I’ve created a functioning example and put it into two repositories:

• Front-end – Rails 5 & react-rails & Her – https://github.com/ane/rails-react-frontend

A Rails 5 app combining react-rails and Her to talk to the back-end.

To install, clone the repo, run bundle install, run foreman start. This will start the Rails server and the live re-loader.

• Back-end – a dead simple Sinatra REST API that uses SQLite3. This is obviously not suitable for production.

To install, clone the repo, run bundle install, run rackup.

This application will never fully match a real SPA. Part of the front-end is not in the browser, so we rely on a second web server to run it. So it is an illusion, but a close enough one, and it is easy to work with.

## Conclusion

JavaScript front-end development, as it currently stands, is painful. One has to master many command-line tools which, instead of being unified into a single tool, each continue to diverge and grow larger and more powerful. The result is a confusing developer experience.

In this post, I showed that we can take the good parts of modern JS development and use them to modernize an older application stack so that it mimics the user experience of a SPA without being one. The application uses a clever library — Turbolinks — to convert page requests into XHRs, creating the illusion of a single-page application.

The end result is a half stack web framework: we yank data access out of a monolithic full-stack framework (Rails) and replace it with a REST API, and we replace its presentation logic (ERB) with React. The framework is left to handle client state, routing and asset pipelining, the painful parts of SPA development, and the UI is rendered using React. So the Model–View–Controller triad is distributed across three places: Rails holds UI state, React renders the UI, and the REST API holds the actual business logic. Effectively, this reduces Rails to a thin SPA-like front-end over a REST API!

Where to go from here? Here are some interesting things that could be explored:

• Turbolinks with React. Use React to parse the HTML returned by Turbolinks (if rendered on the server) and use the React virtual DOM to update the DOM, instead of blindly swapping the body element.
• GraphQL. Although Her is nice, we could use GraphQL when communicating with the backend and also use it as a communication method between Rails and React.
• TypeScript. I like static typing, but currently react-rails doesn’t really work that well with TypeScript.
• React On Rails. A different kind of React & Rails integration, which lets you use Webpack. React On Rails is more flexible than react-rails: you get the full power of Webpack and NPM here, so this is both good and bad.

All in all, this solution is a compromise.

Compared to a full-stack Rails app, we have to do extra work in creating a REST API back-end, but the result is an app that's easier to manage due to the separation of concerns. With a separate data access layer — the REST API — complex business logic is contained in a single place. It is easy to attach several clients to such a back-end, and our Rails app is just one of them.

But, compared to a full-fledged SPA, this app will never be as quick, it will never be as fluid, and it may not be what cutting-edge front-end development represents these days. But it is simple, there is one build tool (bundler), and it is fun to develop in.

I might miss fancy things like state hydration and Redux, but the insanity of Webpack, Gulp, Babel and NPM, I will not miss.

# Web development has become weird

Call me old-fashioned, call me a curmudgeon, but I think web development has become stupid and superficial. The unending quest towards single-page apps (SPAs) has made web development extremely painful, and the current trend is diverging in seven different directions at once. On one end, we have rich SPAs that can be built as native applications; on the other, something completely orthogonal, and a schism is beginning to form.

The underlying problem, unfortunately, is that the web is being misused as an application container instead of the hypertext transport medium it was made to be. It's no use crying over spilled milk; the web has been subverted, transformed and improved upon, so much so that we don't know what the original even looked like.

## How it was

In 2006, the hot new thing was Ruby on Rails or Django. If you weren’t using them, odds were you were using PHP or ASP.NET. Most intranet software ran on SharePoint or, I kid you not, WordPress. Users didn’t really care either way.

People liked Rails and Django because they made web development stupidly simple. No more SQL, just create your models and migrations. An architecture that made sense, MVC, was applied, and web apps became a little bit better. Meanwhile, the overall web development experience got a lot better.

Of course, the web was slower back then. Chrome wasn't around, so JavaScript usage was very limited. Google began prototyping under-the-hood requests in Gmail around 2004, and before that virtually nobody had heard of AJAX. The concept of doing more than one request per page load was practically unheard of. Users liked faster page loads, so when Chrome came around with V8, customers suddenly started giving a shit about what browser they used.

## Where it all began

On the surface, the appeal of SPAs was obvious. It started with Gmail and AJAX. No more slow page loads; the applications behaved like native applications, and soon they even looked like them! Innovative as that was, we now use so many web-only applications that we're slowly starting to forget what the native app experience even was.

The problem was that it wasn't enough: you needed a back-end. Where before there was one application, now there were two, and they were usually completely different from each other. The back-end–front-end split was fuzzy to begin with, and it introduced uncertainty and a possibly pointless abstraction. Put the "slow" and "heavy" things in the back-end, let the front-end handle rendering and the user interface; all the back-end had to do was supply serialized data. Even back then, people started asking questions about the SEO effects of rendering a page entirely in JavaScript. No solution was given, although one existed, and it was weird.

So while the backend folks built eleventy versions of Sinatra, the front-end folks got busy. In a short time we had Backbone, Angular, and Knockout, then we got frameworks like Durandal and Meteor.js. Finally, Facebook looked at the performance of desktop applications, then looked at the performance of web applications, thought, “holy shit”, and did something about it.

People got scared. It was mixing business and presentation logic, they said. It was mixing JavaScript with something eerily like XML, and everyone said XML sucked. Then people got over their usual trepidation towards $newTechnologyOfTheYear and got on with their lives. Now React is being used left and right.

The only problem was, React was a templating engine at heart. Facebook did not build a bridge for existing front-end frameworks, so that people could have just dropped in React instead of, say, Handlebars or even ERB. Facebook did not do this because they already had their own way of rendering content. They didn't need one. Build your own, they said. Faced with just a templating engine, developers got confused. "How do I do routes with this?" they asked. So we built routing engines and state containers, and got on with our lives.

Soon after that, someone realized React ran quite fine on a Node.js server, and people started rendering pages in two places: the back-end and the front-end. Now, people are using React – a JavaScript library meant to be run inside a browser – to create native mobile applications. Meanwhile, other folks think all of this, this excession, is simply too much, and want pages to load quickly.

Couple this with the at least bizarre experience of JavaScript development in 2016, and things are looking weird. The tooling iterates at an impossible speed, a new build system emerges every year, and developers must stay on top of things. Having to stay on top of things is, generally, a good thing. Software progresses, and it progresses so fast that we must constantly learn for us to stay employable and for the profession to stay enjoyable. But at this speed, when it seems we're not really learning from the past, it's not doing anyone any good. React took a good idea from desktop applications, event-driven user interface rendering, and executed it brilliantly as they ported it to the web. The thing is, it's still nothing new.
Ten years ago we were building crappy and weird-looking software in C#; now we're building crappy and broken software in a mix of JavaScript and other languages, and it runs in the browser or on smartphones, and it's responsive, so that when you tilt your tablet sideways, that big fat menu disappears. Huh.

That's what they call the churn. The churn. New technologies come and they kill the old technologies, but in the midst of it all stand you and I, wondering what the hell to do with this mess. From the other side of it all, from the ivory tower of the real world, the business analysts cast their shadow and remind us that these technologies are tools; they're meant to be replaced; they're disposable. So are we, if we can't learn new ones, they keep reminding us.

## So?

I make it sound as if web development is impossible, but that couldn't be further from the truth. Browsers are getting better and faster. Our applications are prettier, faster, more accessible, more usable. The web is replacing desktop applications and this trend is accelerating – whether this is a good or a bad thing, I don't know.

The only problem is that the development experience keeps reinventing itself at such a pace that you may as well put yourself into stasis and wait for things to settle. Wait for front-end development to become boring. Odds are you can sleep for quite a bit until that happens. The second option is just to pick whatever works right now and use it.

The optimistic part is that we, as web developers, are learning: we're doing some cool things and unifying two halves of the same thing. The back-end guys are innovating, and tooling progress is insane and exciting. So I cannot state that we haven't gotten anywhere; we have innovated, learned, and improved the Web. But by how much? Are our end users happier?

# A concrete solution

Given the task of implementing a web application, what would I do, given the state of the art in 2016?
I spent about four years developing SPAs with many frameworks. I hate them all. Given that sentiment, this is what I would do:

1. Using a language of your choice, build a business logic API that can be used via REST or some other RPC protocol. The language and its associated tooling should be performant and support rapid iteration.
2. Use a batteries-included web framework, spiced with a rendering framework of your choice, to create the front-end.
3. Build many front-ends, not just for the web, but for mobile and perhaps even desktop, and keep them thin.
4. The web front-end can be spiced up (but not replaced) using JavaScript.

Come to think of it, I would have done the same thing in 2006.

Point 4 originates from my experiences of creating and maintaining SPA applications. I think SPAs are, by and large, a bogus concept. A web application loading another page isn't intrinsically a bad idea, if your application is fast enough. Conversely, if your SPA is slow, you're doing it wrong. SPAs were invented for speed, because conventional web frameworks were slow. This is not the case anymore. Sure, you won't see Rails, Django or Play beat the TechEmpower benchmarks, but we've come a long way from five years ago, which is when people started to play around with SPAs.

Given the speed improvements, why not go full-stack? Why a front-end and a back-end? The answer is not simple. It is because we're dealing with two incompatible abstractions:

1. Building your application as an API means you need a client application to provide the user interface.
2. To build such an interface, your application has to deal with the fact that HTTP, and thus REST, is stateless.
3. Web applications are usually stateful.
4. This leads inevitably to the requirement of building an abstraction in the middle that handles client state, which your API does not support.
5. Building such an abstraction – the front-end – requires a lot of work, e.g. by using an MVC (or MVVM, whatever) model.
Double the work, half the fun. So, the back-end abstraction is incompatible with client state, but the front-end application requires client state. Conversely, a full-stack application is often a heavy monolith: it needs to handle data access, its modification and its presentation in the same package. Here, as they say, be dragons. We want to keep business logic and presentation logic separate, hence, a full-stack framework does not work on its own. As a solution, I offer a synthesis. It’s mixing a REST back-end with a full-stack frontend. The back-end can be built using whatever language is performant and maintainable. Build your front-end with a boring framework like Rails, Django or Pyramid; let it fetch its data from the REST API, i.e., treat the API as the data source. Let the front-end handle client state on its own. What you get in return: 1. The ease of use of said framework. These frameworks were invented for a reason. You get routing, templating, asset pipelines etc. out-of-the-box. 2. You can still do AJAX requests easily to build rich user interfaces. 3. A reusable API in the backend you can use in other applications, keep your web front-end an equal citizen. If you don’t want to deal with framework bloat, or if you’re scared of non-JavaScript applications, be my guest, build your own front-end using the essentials. Splurge in Gulp, ES6, React, and Redux. Or use TypeScript. But I dare say, after having worked with both full-stack frameworks (e.g. Rails) and SPA+REST frameworks, the compromise above is much more pleasant. In the end though, it doesn’t really matter: with the exception of a few, our end users couldn’t care less. They really don’t give a shit. So, pick whatever technology works for you and your users. The above is just one option. # Communicators: Actors with purely functional state In Scala, Akka actors, as in the traditional Actor model, may modify private state. The accepted convention is to have a mutable object (e.g. 
a Map), stored in a var, and mutate it like so:

```scala
class Library extends Actor {
  var books = scala.collection.mutable.Map.empty[String, String]

  def receive: Receive = {
    case AddBook(isbn, title) => books += (isbn -> title)
  }
}

object Library {
  case class AddBook(isbn: String, title: String)
}
```

This is a bad idea, for several reasons. First, Scala eschews vars; they should only be used when absolutely necessary (read: never). Second, the collection needs thread-safety of its own, not because of the receive method itself – the receive method is guaranteed to run inside a single thread – but because an unsuspecting user might still launch a Future and modify the collection, leading to unpredictable behaviour. Such concurrent mutations on a var put strain on the garbage collector; in fact, they often necessitate the existence of a garbage collector.1 Lastly, as with any mutable state and the accompanying loss of referential transparency, the code can become hard to reason about.

Thankfully, Akka actors offer a way to do this completely functionally. The function context.become allows an Actor to change its receive method on-the-fly. In other words, it lets the Actor change its state and communication model. Here’s the above implemented using this paradigm:

```scala
class Library extends Actor {
  def receive = active(Map.empty)

  def active(books: Map[String, String]): Receive = {
    case AddBook(isbn, title) =>
      // for immutable maps, + returns a new collection
      context.become(active(books + (isbn -> title)))
  }
}
```

The active function returns a new Receive, receiving the current actor state as its parameter. Adding logic to it is now easy:

```scala
class Library extends Actor {
  def receive = active(Map.empty)

  def active(books: Map[String, String]): Receive = {
    case AddBook(isbn, title) =>
      if (books.size < 10) {
        context.become(active(books + (isbn -> title)))
      } else {
        sender() ! "Too many books"
      }
  }
}
```

The above code is now thread-safe and doesn’t use mutable collections, but what if our logic gets more complicated? What if we need to talk to another Actor, or to the sender of the message? This is where we stumble upon a design feature of Akka: all of its Actors are actually compiled down to a callback-based implementation. There is no guarantee that a Future launched in a receive case will run on the same thread as the next one! One could argue that this is not a feature but a flaw, but I won’t go that far. Hence, code dealing with Futures in Akka actors needs to face the unforgiving reality that there is no guarantee of thread safety. Case in point:

```scala
class Library(popReservation: String => Future[String]) extends Actor {
  def receive = active(Map.empty)

  def active(books: Map[String, String]): Receive = {
    case AddBook(isbn, title) => { ... } // as before
    case AskForBook(isbn) =>
      popReservation(isbn) foreach { i => // AAH!!!
        context.become(active(books - i))
        sender() ! s"Here you go: $i"
      }
  }
}
```
Why am I screaming in the comments? First, the Future’s callback runs on an arbitrary thread of the execution context, so we have no idea whether sender() still returns the same value inside it; second, we may be modifying the books collection concurrently with other threads, leaving the garbage collector to collect our mess. So we strain the GC and risk giving the book to the wrong caller!

Since the actual execution of a Future is left to the execution context, which in the case of Actors is the ActorSystem’s dispatcher, we may or may not be invoking sender() in the right thread – there is simply no guarantee. We can’t reason about it; it has been hidden from us.

To deal with this, Akka provides the pipe pattern, an implicit enrichment of Futures that solves this:

```scala
class Library(popReservation: String => Future[String]) extends Actor {
  def receive = active(Map.empty)

  def active(books: Map[String, String]): Receive = {
    case AddBook(isbn, title) => { ... } // as before
    case AskForBook(isbn) =>
      val reservation: Future[String] = popReservation(isbn) map { i =>
        context.become(active(books - isbn)) // AAH!
        s"Here you go: $i"
      }
      // but sender() is still the same
      reservation pipeTo sender()
  }
}
```

Another option is to fix the reference of sender:

```scala
case AskForBook(isbn) =>
  val s = sender()
  val reservation: Future[String] = popReservation(isbn) map { i =>
    s ! s"Here you go: $i"
    context.become(active(books - isbn)) // AAH!
  }
```


Ok, now we’ve fixed sender(), but what about the books collection? Let’s add a PopBook(isbn: String) case class, and handle that for removals:

```scala
class Library(popReservation: String => Future[String]) extends Actor {
  def receive = active(Map.empty)

  def active(books: Map[String, String]): Receive = {
    case AddBook(isbn, title) => { ... } // as before
    case PopBook(isbn) => context.become(active(books - isbn))
    case AskForBook(isbn) =>
      val reservation: Future[String] = popReservation(isbn) map { i =>
        self ! PopBook(i)
        s"Here you go: $i"
      }
      // but sender() is still the same
      reservation pipeTo sender()
  }
}
```


Sending messages to self is always thread-safe - the reference does not change over time. So, at this point, it seems clear that making actor code thread-sane involves the use of:

• immutable state - call context.become with a closure over the new actor state,
• turning asynchronous state modifications into messages to be handled later, and
• making sure the sender() reference stays consistent.

What about complicated states? What if we need to react differently to these messages, e.g., when the library is closed? I sense that you’re about to mention Akka’s FSM construct, which builds a state machine, encapsulating state and transitions in what is essentially syntactic sugar; on the surface, it seems like a good idea.

## Enter Akka FSMs

At a closer look, it essentially leads us to repeat the same mistakes as above, and the arguments against it have been made elsewhere. In summary, it boils down to:

1. Akka FSM is too restrictive. You cannot handle multi-step or complicated state transitions, and modelling nondeterministic behaviour is impossible.
2. You are tied to Akka completely; you must use Akka TestKit for your tests. Anyone who has worked with TestKit knows this to be a burden.
3. State transitions have identity instead of being truly functional, that is, FSMs alter the current state instead of producing a new one.

Moreover, and I think this is the biggest shortcoming, Akka FSMs are finite-state automata – they are characterised by the state transition function (Input, State) => State. Since we know actors are more about communication than anything else, this model is insufficient; what we need is a state machine that can produce output: a finite state transducer. Its state transition function has the signature (Input, State) => (Output, State) – every transition can produce an output, and Scala can model this succinctly:

```scala
trait FSA[State, Input, Output] {
  // Option, because not every transition needs to emit output
  def transition(s: State, i: Input): (Option[Output], State)
}
```
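To make the transducer idea concrete, here is a small, hypothetical instance of the trait – a coin-operated turnstile. The trait is repeated so the sketch is self-contained, and the Turnstile names are my own illustration, not anything from Akka:

```scala
trait FSA[State, Input, Output] {
  def transition(s: State, i: Input): (Option[Output], State)
}

object Turnstile {
  sealed trait State
  case object Locked   extends State
  case object Unlocked extends State

  sealed trait Input
  case object Coin extends Input
  case object Push extends Input

  sealed trait Output
  case object ThankYou extends Output
  case object Beep     extends Output

  // (Input, State) => (Option[Output], State): a transition may emit output
  val machine = new FSA[State, Input, Output] {
    def transition(s: State, i: Input): (Option[Output], State) = (s, i) match {
      case (Locked, Coin)   => (Some(ThankYou), Unlocked) // paying unlocks
      case (Locked, Push)   => (Some(Beep), Locked)       // pushing while locked just beeps
      case (Unlocked, Push) => (None, Locked)             // walking through re-locks, silently
      case (Unlocked, Coin) => (Some(ThankYou), Unlocked) // an extra coin changes nothing
    }
  }
}
```

Note how the Unlocked/Push transition produces no output at all – exactly the case the Option in the signature exists for.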


With all these flaws, despite being a nice idea at a glance, it’s obvious that Akka FSMs aren’t sufficient for any complicated logic.

Let’s envision a radical version of actors, accounting for all the flaws described above:

• State transitions should be about producing a new state, i.e. (Input, State) => (Output, State)
• Actor computations will deal with asynchronous code, we must deal with this intelligently
• Keep I/O logic out of actors - the actor only communicates with the external world
• Actors should only mutate their state with context.become

The last bullet point is especially important, as it constrains state changes to be entirely functional: you can simply make a function def foo(state: State): Receive and keep calling it recursively, transitioning states like so:

```scala
def active(state: State): Receive = {
  case someInput: Input => context become active(state)
}
```


This idea is not new. Erlang actors have worked like this for actual decades, and arguments for using this method in Scala can be found left and right, summarized particularly well in Alexandru Nedelcu’s Scala best practices.

```erlang
active(Sum) ->
  receive
    {From, get_value} -> From ! Sum, active(Sum);
    {add, N} -> active(Sum + N)
  end.
```
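The same loop can be sketched in plain Scala without Akka, if we model a behaviour as a function from a message to the next behaviour. The Msg hierarchy below is purely illustrative:

```scala
object SumLoop {
  sealed trait Msg
  case class Add(n: Int)                  extends Msg
  case class GetValue(reply: Int => Unit) extends Msg

  // A behaviour consumes one message and returns the behaviour for the
  // next one, mirroring Erlang's tail-recursive active(Sum) loop.
  trait Behavior extends (Msg => Behavior)

  def active(sum: Int): Behavior = new Behavior {
    def apply(msg: Msg): Behavior = msg match {
      case Add(n)          => active(sum + n)             // new state, new behaviour
      case GetValue(reply) => { reply(sum); active(sum) } // report, keep state
    }
  }
}
```

Each message yields a fresh behaviour closed over the new state, which is exactly what context.become does inside an actor.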


Putting emphasis on the last point, I’ve coined a moniker for them: communicators.

## Actor, meet communicator

Let’s define the Communicator trait first independently:

```scala
trait Communicator[State, Input, Output] extends Actor {
  /** This is the initial actor state */
  def initial: State

  /** The state transition function */
  def process(state: State, input: Input): Future[(Option[Output], State)]

  /** The output processing function */
  def handle(state: State, output: Output, origin: ActorRef): Future[Unit]
}
```


initial is simply the initial state machine state, process is the state transition function, and handle is the function that dispatches the result of process. Because we’re producing content in another thread, we want to make sure the reference of sender is fixed; by combining this with the pipeTo pattern, we get thread safety. Let’s extend the Actor trait to get receive:

```scala
trait Communicator[State, Input, Output] extends Actor {
  /** This is the initial actor state */
  def initial: State

  /** The state transition function */
  def process(state: State, input: Input): Future[(Option[Output], State)]

  /** The output processing function */
  def handle(state: State, output: Output, origin: ActorRef): Future[Unit]

  /** I/O handling which the deriving class must implement */
  def active(state: State): Receive

  def receive: Receive = active(initial)
}
```


The active function is the actual output-producing function. The user is left to define four things:

• the initial actor state in initial
• the output dispatch function handle
• the state transition function process
• the active function which handles input and output

To see this in action, first, let’s define the application states.

```scala
object Library {
  // Library state
  case class LibraryState(open: Boolean, books: Map[String, String])

  // Input alphabet
  sealed trait LibraryInput
  case class SetOpen(o: Boolean)                  extends LibraryInput
  case class AddBook(isbn: String, title: String) extends LibraryInput
  case class GetBook(isbn: String)                extends LibraryInput

  // Output alphabet
  sealed trait LibraryOutput
  case object SorryWeAreClosed                        extends LibraryOutput
  case object DoNotHaveIt                             extends LibraryOutput
  case object SorryReserved                           extends LibraryOutput
  case class Book(isbn: String, title: String)        extends LibraryOutput
  case class Reservation(isbn: String, title: String) extends LibraryOutput
}
```


The actual state is just a case class: this gives us the nice copy function for easy updates. We use subtyping to implement the input and output alphabets. Now, let’s implement the actor itself:

```scala
class Library(getReservation: String => Future[Boolean])
    extends Communicator[LibraryState, LibraryInput, LibraryOutput] {

  import Library._
  import context.dispatcher

  def initial = LibraryState(false, scala.collection.immutable.Map.empty)

  override def active(newState: LibraryState): Receive = {
    case (output: LibraryOutput, origin: ActorRef) => handle(newState, output, origin)

    case state: LibraryState => context become active(state)

    case input: LibraryInput =>
      val origin = sender()
      process(newState, input) map {
        case (output, state) =>
          output foreach { o =>
            self ! (o, origin)
          }
          self ! state
      }
  }

  override def process(state: LibraryState, input: LibraryInput): Future[(Option[LibraryOutput], LibraryState)] =
    input match {
      case SetOpen(o) => Future.successful((None, state.copy(open = o)))

      case (GetBook(_) | AddBook(_, _)) if !state.open =>
        Future.successful((Some(SorryWeAreClosed), state))

      case GetBook(isbn) =>
        val book =
          for {
            title <- state.books.get(isbn)
          } yield {
            getReservation(isbn) map { reserved =>
              if (!reserved) {
                (Some(Book(isbn, title)), state.copy(books = state.books - isbn))
              } else {
                (Some(SorryReserved), state)
              }
            }
          }

        book getOrElse Future.successful((Some(DoNotHaveIt), state))

      case AddBook(isbn, title) =>
        Future.successful((None, state.copy(books = state.books + (isbn -> title))))
    }

  override def handle(state: LibraryState, output: LibraryOutput, origin: ActorRef): Future[Unit] =
    Future {
      origin ! output
    }
}
```


## Decoupling Akka

So, now we’ve made a very thin actor, with little I/O logic inside it, but it’s still an actor. Let’s decouple it entirely from actor semantics. First, we define a StateMachine[I, O] trait:

```scala
trait StateMachine[I, O] {
  def process(input: I): Future[(Option[O], StateMachine[I, O])]
}
```
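As a quick illustration of why this shape is pleasant, here is a tiny, hypothetical StateMachine – a bounded counter – that can be exercised with plain Futures, no ActorSystem required. The trait is repeated so the sketch is self-contained:

```scala
import scala.concurrent.Future

trait StateMachine[I, O] {
  def process(input: I): Future[(Option[O], StateMachine[I, O])]
}

sealed trait CounterInput
case object Inc extends CounterInput
case object Get extends CounterInput

// A counter that refuses to grow past max, reporting its value on Get.
case class Counter(n: Int, max: Int) extends StateMachine[CounterInput, Int] {
  def process(input: CounterInput): Future[(Option[Int], Counter)] =
    input match {
      case Inc if n < max => Future.successful((None, copy(n = n + 1))) // silent transition
      case Inc            => Future.successful((Some(n), this))        // saturated: report
      case Get            => Future.successful((Some(n), this))
    }
}
```

Each call returns the output (if any) together with the next machine, so a test can simply await the Future and assert on both – no TestKit, no dispatcher.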


And excise the state logic from the Communicator, moving it to the State case class:

```scala
case class LibraryState(open: Boolean, books: Map[String, String], getReservation: String => Future[Boolean])(
    implicit ec: ExecutionContext)
    extends StateMachine[LibraryInput, LibraryOutput] {

  def process(input: LibraryInput): Future[(Option[LibraryOutput], LibraryState)] =
    input match {
      case SetOpen(o) => Future.successful((None, copy(open = o)))

      case (GetBook(_) | AddBook(_, _)) if !open =>
        Future.successful((Some(SorryWeAreClosed), copy()))

      case GetBook(isbn) =>
        val book =
          for {
            title <- books.get(isbn)
          } yield {
            getReservation(isbn) map { reserved =>
              if (!reserved) {
                (Some(Book(isbn, title)), copy(books = books - isbn))
              } else {
                (Some(SorryReserved), copy())
              }
            }
          }

        book getOrElse Future.successful((Some(DoNotHaveIt), copy()))

      case AddBook(isbn, title) =>
        Future.successful((None, copy(books = books + (isbn -> title))))
    }
}
```


You may be wondering: wait, where’s the handle implementation? We kept that out of the state machine class since it’s not its responsibility – so it stays in the Communicator:

```scala
class Library(getReservation: String => Future[Boolean])
    extends Communicator[LibraryInput, LibraryOutput, LibraryState] {
  import context.dispatcher

  def initial = LibraryState(false, scala.collection.immutable.Map.empty, getReservation)

  override def handle(output: LibraryOutput, origin: ActorRef): Unit = origin ! output

  override def active(newState: LibraryState): Receive = {
    case (output: LibraryOutput, origin: ActorRef) => handle(output, origin)

    case state: LibraryState => context become active(state)

    case input: LibraryInput =>
      val origin = sender()
      newState.process(input) map {
        case (output, state) =>
          output foreach { o =>
            self ! (o, origin)
          }
          self ! state
      }
  }
}
```



So, all state is kept neatly in a separate entity that’s entirely unit testable in its own right, without having to rely on Akka TestKit or the like – input and output dispatch and state transitions are done in the active method.

I know the state case class manipulation introduces more boilerplate, but as long as that boilerplate isn’t complicated, I think this is a fair compromise. Plus, one can use lenses to remove some of it, e.g., by defining handy update functions. One could cook up something downright interesting using Cats and StateT – as long as you provide a function of the kind (I, S) => (Option[O], S), the sky is the limit.
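For example, instead of sprinkling raw copy calls around the transition function, the state class can expose small update helpers. This is a simplified, hypothetical LibraryState without the reservation dependency; the helper names are my own, and a lens library would generate similar functions for you:

```scala
case class LibraryState(open: Boolean, books: Map[String, String]) {
  // intention-revealing updates instead of raw copy() calls
  def addBook(isbn: String, title: String): LibraryState = copy(books = books + (isbn -> title))
  def removeBook(isbn: String): LibraryState             = copy(books = books - isbn)
  def toggled: LibraryState                              = copy(open = !open)
}
```

The transition function then reads as state.addBook(isbn, title) rather than a nested copy expression, which keeps the boilerplate out of the interesting logic.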

Thanks to Jaakko Pallari (@jkpl) for previewing this.

1. This is actually false, as Aaron Turon, a core Rust developer, shows in his article on getting lock-free data structures without garbage collection.