Antoine Kalmbach

Apache Camel is a routing and mediation engine. If that doesn’t say anything to you, let’s try this: Camel lets you connect endpoints together. These endpoints can vary: they can be simple local components, like files, or external services like ActiveMQ or web services. It has a common language for the data, so that your data can be protocol agnostic, and an intuitive DSL for specifying the connections and how messages should be processed along the way.

The common language consists of exchanges and messages. These are translated into protocol-specific formats (like an HTTP request) by components, which provide the technical implementation of a given service, i.e., the translation of a plain Message into an actual HTTP request.

The connection method is an intuitive DSL that speaks in terms such as from and to. Informally, you can create a route that, for example, reads messages from ActiveMQ and writes them to a file. The language is much richer than this, offering aggregation, filtering, routing, splitting, load balancing; the list goes on.
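
To make this concrete, here is a minimal sketch of such a route in Java. It assumes an ActiveMQ broker running locally with default settings and the ActiveMQ component on the classpath; the queue and directory names are made up:

import org.apache.camel.CamelContext;
import org.apache.camel.builder.RouteBuilder;
import org.apache.camel.impl.DefaultCamelContext;

public class ActiveMqToFile {
    public static void main(String[] args) throws Exception {
        CamelContext ctx = new DefaultCamelContext();
        ctx.addRoutes(new RouteBuilder() {
            @Override
            public void configure() {
                // consume messages from an ActiveMQ queue and write each one to a file
                from("activemq:queue:incoming")
                    .to("file:/tmp/archive");
            }
        });
        ctx.start();         // routes run on background threads
        Thread.sleep(10000); // let the route do its work for a while
        ctx.stop();
    }
}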

Choosing what component to instantiate is done using a URI. A URI identifies the target component, e.g., rabbitmq://myserver:1234/... instantiates the RabbitMQ component, file:... instantiates the file component, netty4:... instantiates the Netty component (version 4.0). As long as the component is available on the classpath, it will be instantiated in the background by Camel. The total number of available components is huge! You have, e.g.:

  • ActiveMQ, RabbitMQ, Kafka, AVRO connectors
  • Files and directories
  • REST, SOAP, WSDL, etc.
  • More esoteric ones like SMPP – yes, you can send SMSes with Camel!

So what’s the point? Let’s assume we need to integrate an upstream system Xyz into Bar. Xyz provides data to you in a binary JSON format, over some known protocol, like ActiveMQ. You then need to apply some transformations to the data, finally sending it to Bar, which accepts XML and requires the information to be POSTed to someURL.

In a non-Camel setting, using your favorite language, you would:

  1. Build your queue reader and de-serializer using an ActiveMQ connector
  2. Apply your business logic (whatever that is) to the de-serialized data
  3. Transform the result into XML
  4. POST the data to someURL using some HTTP library

Fairly straightforward, right? All you need are an ActiveMQ library, an HTTP library and something that works with JSON and XML.
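
For comparison (jumping ahead a little), the same pipeline can be expressed as a single Camel route. The following is a sketch, not a definitive implementation: it lives inside a RouteBuilder’s configure() method, the XyzEvent class, queue name and target URL are made up, and the Jackson JSON and JacksonXML data formats are assumed to be on the classpath:

from("activemq:queue:xyz.events")
    // 1. de-serialize the binary JSON into a (hypothetical) POJO
    .unmarshal().json(JsonLibrary.Jackson, XyzEvent.class)
    // 2. apply the business logic, whatever that is
    .process(exchange -> {
        XyzEvent event = exchange.getIn().getBody(XyzEvent.class);
        // ... transform or enrich the event here ...
    })
    // 3. serialize the result into XML
    .marshal().jacksonxml()
    // 4. POST it to Bar
    .setHeader(Exchange.HTTP_METHOD, constant("POST"))
    .to("http4://bar.example.com/someURL");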

Here’s where it gets hairy. Three months in, you are informed that the upstream source is converting to RabbitMQ. Oh well, you think, it’s nicer, faster, and implements a saner version of AMQP, why not. So you refactor your ActiveMQ logic into RabbitMQ logic, and there it is.

The point of Camel is this: the previous step required you to manually refactor your ActiveMQ logic to RabbitMQ, but you’re just sending messages to an endpoint. You don’t really care about the protocol; it’s the data you should care about, nothing else.

So here’s where Apache Camel comes in. It lets you specify a URI like

rabbitmq://localhost/blah?routingKey=Events.XMC.*

to use the RabbitMQ component, and to painlessly switch to Kafka, you’d add a dependency on the camel-kafka artifact and specify the URI as

kafka:localhost:9092?topic=test

and the Camel Kafka component handles message delivery for you. Since you’re sending canonical Camel messages, you needn’t trouble yourself with how the message is actually sent. It is likely that you will have to add or remove some message headers, though.
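
In route terms, the switch is a one-line change. A sketch, where businessLogic stands for some hypothetical Processor and the file endpoint for the rest of the route:

// before: consuming from RabbitMQ
// from("rabbitmq://localhost/blah?routingKey=Events.XMC.*")
//     .process(businessLogic)
//     .to("file:/tmp/out");

// after: only the consumer URI changes; the rest of the route is untouched
from("kafka:localhost:9092?topic=test")
    .process(businessLogic)
    .to("file:/tmp/out");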

Now, you may be asking, is that it? Is it really that simple?

The answer is that it depends. Some components are better than others. If you want to be truly protocol and component agnostic, and you want to refactor from protocol Foo to Bar just by switching the URI from foo://... to bar://..., you need to make sure that

  1. You can configure everything for that endpoint using the URI
  2. Message exchanges do not require extra shenanigans to work (no custom headers or a special format required)

Case in point, let’s compare switching from ActiveMQ to RabbitMQ. The first glaring difference is that the ActiveMQ component does not accept the host part in the URI. So we need to do something like

// register an "activemq" component bound to a specific broker
CamelContext ctx = new DefaultCamelContext();
ctx.addComponent("activemq",
    ActiveMQComponent.activeMQComponent("tcp://USER:PASS@HOSTNAME?broker.persistent=false"));

This makes any activemq:... URI in the context ctx connect using the broker URL and credentials configured above.

Conversely, the RabbitMQ component lets you set all of this directly in the URI (multiple addresses can be given with the addresses parameter). So if you’re going from ActiveMQ to RabbitMQ, your code actually becomes simpler, but the complexity merely moves into the URI. The other way around, you have to move your URI configuration into actual code (or XML, but please, don’t).

So where does this lead us? Ideally, given a choice between, say, three components, you could use an external configuration file that configures a simple URI. The right component is identified from the URI and pulled out of the classpath. This assumes that, in order of importance,

  1. the endpoints are volatile and finite and can vary between different implementations,
  2. each implementation has a Component which is in the classpath, and
  3. the endpoints change often enough to warrant dynamic configurability via configuration edits and app restarts.

If all of the above hold true, Camel might be a good fit for you. Otherwise, I’d be careful: the abstraction isn’t free! What this leads to is a kind of complexity shoveling: although the RabbitMQ component needs no code to configure it, the complexity moves into the URI. So it’s still a configuration point, albeit a nicer one. In the example above, the connection contains three configurable variables: USER, PASS, and HOSTNAME. These still have to be configured somewhere, lest we hard-code the values into the application.

The above approach suffers from decentralization: you now have two places where you customize your system. The first is defining the custom component for a system in code. The second is configuring said custom component via other means.
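
Camel’s property placeholders offer one way to pull these customization points into a single file. A sketch, assuming a routes.properties file on the classpath that holds the endpoint URIs:

// routes.properties might contain, e.g.:
//   input.endpoint  = rabbitmq://localhost/blah?routingKey=Events.XMC.*
//   output.endpoint = file:/tmp/out
PropertiesComponent pc = new PropertiesComponent();
pc.setLocation("classpath:routes.properties");
ctx.addComponent("properties", pc);

ctx.addRoutes(new RouteBuilder() {
    @Override
    public void configure() {
        // endpoint URIs are resolved from the properties file at startup
        from("{{input.endpoint}}").to("{{output.endpoint}}");
    }
});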

Our ability to centralize configuration – any configuration, not just Camel’s – depends on the power of the configuration language. Too powerful, and you end up in DSL hell. Not powerful enough, and people write their own horror shows to add power.

Lastly, we run into the problem of universal pluggability, or universal composition. We imagine that systems like Camel let us “run anything” and “connect everything”, but the reality is different. Systems are usually made of a finite set of components. For practical purposes, it makes no sense to depend on every Camel component. Therefore, you need to pick your dependencies from this finite set of known endpoints. This effectively shatters the myth of universal pluggability.

Most importantly, though, nobody really needs this. What really matters is simplicity of extension. A well-designed component is completely configurable through its URI parameters. Such components are easy to add to your Camel-based system: you only need to understand the new configuration, add the dependency, and you’re done.

In summary, if you’re considering Apache Camel, make sure you check both of these, of which the second is the more important.

  1. The components are volatile and you need to change them often, so that you can justify the pluggable hole (the changing URI!)
  2. The components you want exist and are completely configurable via that pluggable hole

If you’re unsure of the first item, you can still treat Camel as a lazy way to future-proof the system, e.g., by using one component now, while knowing that another may be used in the future. To that end, you need to make sure that the components fit the above requirements.

I’m currently working on a Clojure library that provides a routing DSL for Camel. It’s shaping up to be quite nice! Here’s an example of the routing DSL:

(route (from "netty4-http:localhost:80/foo")
       (process 
         (comp println body in))
       (to "rabbitmq://localhost:5672/foo"))

My goal is to make the DSL terse and functional (which the current model really isn’t) and to add Akka Camel Consumers and Producers to it. The nice thing about Clojure is that the macro system lets me define these really easily!

Overall, Camel is a nice abstraction, well worth the effort and years that have been put into it. It’s not a free abstraction, since there’s always a slight compatibility or configuration overhead. When it works, it removes programmers from the protocol level and moves them to the data level. That is the level you should be working at, if your goal is to shuffle data around. For this purpose, when it works, Camel is excellent.

Conversely, when it doesn’t, it puts programmers in an awkward position: you’re still working with both data and protocol, and you have the overhead of the framework to deal with. Worse, your code is now polluted by the requirements of Camel endpoints, when the goal of Camel is to remove the requirements imposed by endpoints altogether.

That said, in integration scenarios, Camel works most of the time, so it is always worth a thought before you start building.


In my previous post, I discussed how web development had become weird. In this post, I will discuss what exactly it is that makes it so weird. I will also present an alternative to JavaScript-based SPAs that looks and behaves like them, yet at its base is built using standard full-stack frameworks. It can leverage modern JavaScript libraries like React and compilers like Babel while avoiding the confusing tooling ecosystem, providing a rich and responsive user experience, and retaining a pleasant developer experience.

What exactly is wrong with the tooling ecosystem?

I think, largely, the reason web development has become weird is that front-end development cannot figure itself out. We are wasting effort and time building more and more elaborate abstractions that fundamentally exist only because of an unhappy accident: the web is the only cross-platform application container. It is also a very accessible medium. To create a web application, fundamentally, one needs only to present the right kind of mark-up to a browser that renders it.

Let’s stop here. Just because the web became what it is by accident doesn’t make it a bad thing in itself. Everybody loves platform independence. Everybody loves accessibility. The web is easy to develop for, and it can reach almost everybody. This is a reality we have to deal with, a reality in which web development is (a) popular, (b) ubiquitous and (c) easy.

The combination of those properties creates an interesting melting pot of rapidly evolving technologies. Rapid progress is a nice thing in itself, but bad for the ecosystem when it evolves blindly. Web development doesn’t evolve blindly; rather, it is myopic.

Progress, progress, progress!

To put this into context, we must understand that currently, most software is disposable. Because software is disposable, we eagerly toss a half-functioning solution into the bin and rewrite it, rather than taking it apart and rebuilding a better version. This leads to programs getting rewritten and rewritten, sometimes doing things differently, but most of the time producing the same thing under a different layer of paint.

But I digress. That is more of a problem with software development in general. Let’s review a more concrete example: the JavaScript tooling ecosystem. To develop a front-end in JS, you need three different kinds of tools:

  • A package manager - npm, bower or yarn
  • A module bundler - webpack, rollup or browserify
  • A task runner - gulp, grunt or brunch

Each of these segments works completely differently. So to get started, that’s three different tooling systems you have to learn. Better yet, each individual tool inside each segment is unique in its configuration syntax. So if you learn how to configure Grunt, you will have to learn Gulp and Brunch from scratch. Joy.

Yeah, yeah, I get it. Bower was cool because it built flattened trees when npm didn’t. Gulp had a nicer configuration syntax, and Brunch was easy to get started with. Yarn is more secure and more reliable than npm. Webpack can inline your CSS and images and is more configurable than Browserify.

Not about killing innovation

At this point, you may be thinking that I desire a world in which there is but one alternative for every task. This is not the case. I only ask for restraint: if there are no fundamental ideological differences, and no personal incompatibilities between the developing organizations, is there any valid argument for building your own version of a tool instead of contributing to an existing one?

I don’t think there is, at this scale — obviously, a world with just one kind of tool or library for each task is stupid, but the sweet spot definitely does not lie at seven.

So if the answer is no, does that mean JavaScript developers are so strange that they cannot get their heads together and agree on something? Do they really think people enjoy keeping up with the Joneses all the time and learning a new tool every year?

When it comes to the first question, remember Joyent and the io.js schism. Oops. As to the second, I doubt it. Still, this is what we have to live with. A guide for building a modern JS front-end app consists of twelve distinct steps, all of them quite elaborate. I applaud the author for the gargantuan effort in that tutorial: it’s the best I’ve seen so far. But seriously, take a look at it! What the fuck?! I could have just rewritten my previous post with a link to that guide and rested my case!

I remember a book about Windows programming in C, and parts of this guide are arcane enough to evoke memories of that. I think one could enumerate the type system of Scala in less space. Or explain how to write a Scheme interpreter.

The usability of the tooling ecosystem is absolutely disgraceful. No other developer segment has this many hoops to jump through and nobody else has to learn so many different tools just to get a simple web application running.

Why do we put up with this? Why isn’t any effort being put into simplifying the tooling stack, instead of making it more elaborate, powerful, and verbose? Consider Webpack. It is a powerful utility that combines all your assets — that is, code, CSS, images — into a single module for use in your application. That is genuinely useful. The only problem is that its configuration is hell. I work with SBT every day, and my goodness, even SBT is easier to configure than Webpack. Ask any Scala developer what it means to say that. You will get funny looks. Even Java folks will consider this crazy, although, in fairness, they’ve moved into the post-framework age, and consider us mortals rather quaint.

SPA development is more than just tools

The problems don’t stop here. A SPA must effectively handle client state entirely in the browser, though in isomorphic (universal) SPA apps part of the rendering and client state is processed on the server. This requires architectural patterns like Redux and libraries like React Router.

These libraries are nice and intelligent, but I feel they are a wasted abstraction. Using the trick below, I can create React apps that approximate the performance of a real SPA, without relying on these complicated architectural patterns.

Caveat lector: this is largely a matter of taste. If you really like Redux and React Router, by all means use them, but I find their usability sub-par compared to the MVC architecture of any full-stack framework. The architectural pattern — Flux — is a message-based event loop. The views generate user actions (button clicks) that are dispatched to stores (state containers), which update themselves (increment a number) and then deliver state changes (an incremented number) to the views, which re-render themselves. If a request is sent to the server, the interaction must be split into two parts: first, a button click is registered and its effect is rendered; second, a request is sent to the back-end, and when it completes, an action describing a completed request is sent to the message dispatcher. So any interaction with the back-end requires two actions. Sounds complicated? Yeah, this is why I prefer a dumb MVC architecture (or Relay).

In summary

So, to put this argument into a more cogent form, I’ll summarize my points below.

1. Lack of emphasis on usability, a myopic focus on adding features.

Why doesn’t anyone integrate dependency management, module bundling and task running into the same program? Why do we have to use three different programs that get replaced every year? Tool “monoliths” like SBT may be ugly in parts, but they can do package management, compilation, debugging and testing. Even if its DSL is garish and confusing, once you’re familiar with it, you don’t have to master six other horrifying DSLs. Just one.

2. Chasing novelty with little care about its impact on maintainability.

Babel lets us write JS in eleventy different dialects. While that is a cool thing in itself, it is a horror show for developers. You ask, who wouldn’t want to use await, or ES6 classes? Well, how about the person who doesn’t want to learn how to use Babel?

With Babel, you can write in any version of JavaScript you want, since it all gets compiled down to ES5 anyway. This is great for building your flavor-of-the-month hack, but it’s also a terrific way of building unmaintainable software. For this zany hack to work, you need transpilers that translate your modern code into old code. Requiring that tool is too high a price to pay for some fancy language features.

3. Snubbing full-stack frameworks for their want of novelty, although they generally feature exemplary usability.

Clojure developers have found a way of eschewing frameworks in favor of composable libraries. For some reason, everybody else is really bad at this, so we build frameworks, i.e., sets of libraries that govern the design of your program in a certain way. Monolithic frameworks like Rails or Django are fundamentally dated in parts — though this is easily fixed — but they are usable. Setting up a functional application with them takes a few minutes, and it just works.

A new direction: renovate, not rewrite

In my opinion, front-end development can be done in an alternate, saner way. It doesn’t mean going back to the stone age of Apache or Rails with ActiveRecord. Rather, it means refurbishing these old, battle-tested technologies with modern components without tossing the whole chassis into the bin.

In other words, there is an alternative to the current JavaScript SPA horror show. Using the following technologies, as an example:

  1. A REST API built in a scalable and performant language

    Examples: Scala, Haskell, Go, Clojure, Java, Rust, OCaml, Elixir

    This gives us a clear advantage when scaling and deploying our application. Data access is made opaque and is in no way tied to the front-end - which is ultimately just presentation and some client state. The language needs the following:

    • A stable library ecosystem, especially for data access, e.g., database drivers
    • A functioning web server and associated libraries
    • Speed, multi-threading, performance

    With these properties, you should be quite comfortable in your back-end development.

  2. Client state, presentation and back-end communication handled using a monolithic framework

    Examples: Ruby on Rails, Django, Pyramid, MeteorJS, Udash, Play

    Rails may be dated in some parts — coupling your front-end with data access is one thing — but as an infrastructure it is functional, mature, easy to understand and stable. The Ruby ecosystem is large and is well documented, even the secondary documentation (StackOverflow etc.) is abundant.

  3. A wrapper that turns ordinary HTTP page requests into XHRs

    Examples: Turbolinks (for Ruby on Rails and Django)

Turbolinks is perhaps a hack, but it is clever: any HTTP request that would normally cause a page reload, like a link or a form submission, is converted into an XHR. Then the page redraws itself by swapping out the <body> element with the one from the returned response.

Turbolinks produces a “pseudo-SPA”: it reroutes ordinary page requests (links, form submissions) as XHRs, and from the returned page it merges the <head> element and swaps the <body> element. Using a gem like react-rails you can combine this with React; however, it does not use React’s virtual DOM when redrawing the body content. It only mounts and unmounts the components when the page swaps, retaining the actual DOM bindings.

What?! Your answer is Rails? In 2016?

Just because these frameworks aren’t making headlines doesn’t mean they are stuck in the stone age. These monolithic frameworks, after years of maturation, still possess novelty value in one unparalleled aspect: usability. They may not lend themselves to universal applications, but they’re still capable of absorbing new technologies like websockets and GraphQL.

Some parts of them are stuck in the past, the most striking being the combining of data access, data control and presentation in the same program. This is easily fixed: make your Rails controllers call an external, opaque service to fetch their data. The job of the full-stack framework is then reduced to managing client state and data presentation, which go together.

So, what can be done? Here’s an example.

A REST-backed Rails app with React as the templating engine

react-rails is a Rails gem that gives us React components in the asset pipeline, supporting server-side rendering and Turbolinks (caveat: see above).

Under the hood, when rendering on the server, react-rails uses Babel and ExecJS to prerender the content. Better yet, your content is still rendered by a simple Rails controller like the following.

The controller lives in app/controllers/foos_controller.rb:

class FoosController < ApplicationController
  # maps to GET /foos (on the front-end)
  def index
    # incurs a GET /foos on the back-end;
    # react_component serializes this to JSON for the component
    @foos = Foo.all
  end

  # maps to POST /foos (on the front-end)
  def create
    # this incurs a POST /foos on the back-end
    Foo.create(:bar => params['bar'])

    # turbolinks turns this into an XHR
    redirect_to '/foos'
  end
end

The model is just a Her model; Her is an ORM that uses a REST API, which you can customize. In app/models/foo.rb:

class Foo < Her::Model
  attributes :bar, :id
end

Now Foo.find(1) maps to GET /foos/1 in the back-end, and so forth.

The view is generated by app/views/foos/index.html.erb:

<%= 
react_component(
  'Foos', 
  { foos: @foos, token: form_authenticity_token, action: url_for(action: 'create') },
  { prerender: true }
) 
%>

This maps to a React component app/assets/javascripts/components/foos.es6.jsx:

class Foos extends React.Component {
  render() {
    return (
      <div>
        <ul>
          {this.props.foos.map((foo) => {
            return <li key={foo.id}>{foo.bar}</li>;
          })}
        </ul>
        {/* data-remote is a Rails trick that makes the form submit as an XHR */}
        <form action={this.props.action} method="POST" data-remote="true">
          <input type="hidden" name="authenticity_token" value={this.props.token} />
          <input type="text" name="bar" defaultValue="Blah blah" />
          <input type="submit" value="Add!" />
        </form>
      </div>
    );
  }
}

Try doing that with less code in any JS app! The controller looks like any standard Rails controller. In fact, it is exactly like one, yet the magic of React & Turbolinks lets us wrap this into a SPA-like experience.

Combining these elements, we get an application that can reach nine-tenths of the performance and responsiveness of a 100% JavaScript SPA, while simultaneously avoiding the messy tooling ecosystem.

  • A total absence of extraneous tooling: the framework has these built in. No need to set up Webpack or Babel yourself; their equivalents are just more gems in your dependency list.

  • A boring, but familiar, framework that handles routing, message dispatch and API integration for us. Routing and state management are the worst parts of SPA development. Now our state is just another part of the Rails app.

  • Responsiveness close enough to that of a real SPA. It will never match a real SPA in speed, since the requests map to Rails controllers, but it will be extremely pleasant to develop in.

  • A scalable back-end without any data access logic in the front-end (the usual front-end/back-end split); the framework handles only UI state and presentation logic.

There are some obvious compromises in such a solution, which are both good and bad.

Compromises made

The biggest compromise is in performance, which is due to the following:

  • Does not use React’s virtual DOM to its full power. Turbolinks just swaps the body element. This could be improved by making it use the React virtual DOM. That is the bad part. The good part is that we don’t have to create XHRs ourselves in React components.

  • Forces the user to use JSX, throws ERB/HAML in the bin. It is true that the example application could indeed be built without JSX — just don’t use react-rails — but I find JSX to be a nicer templating syntax than ERB.

    But it would be naïve to assume this brings us the whole of React. It brings us the templating syntax and binding mechanisms, but since Turbolinks effectively causes a re-rendering of the whole HTML page, this doesn’t fully leverage the server-side rendering aspect of React.

    So, overall, the good part of this compromise is that we get to use JSX, which has a nicer, functional approach compared to ERB, but the bad part is that we don’t harness the full power of React.

  • Turbolinks effectively reverses React server-side rendering. Whereas in a normal SPA the server-side rendering is the “base” template, here a new server-side rendering is produced on every interaction. In a normal SPA, one just updates the DOM with new state — i.e., props — not with a new DOM.

    There is a solution: skip Turbolinks and issue XHRs from React components. A simple approach in a controller:

    def create
      @f = Foo.create(:bar => params[:bar])
      if request.xhr?
        # send a JSON of all the Foos
        render :json => Foo.all.to_json
      else
        # send HTML with a React component
        redirect_to action: 'index'
      end
    end

    If the request is made from a component, the component can now use setState (or a store) to update its state. In this paradigm, Rails is acting as the state store.

    A better example would be to make the Rails app support GraphQL and use Relay to communicate with the Rails part, see below.

Given the simplicity of the above application, I think it’s fair to say that these compromises are warranted. If the actual set-up were any more complicated, I wouldn’t be so certain. But, for the simplicity, we must trade performance.

A functioning example

I’ve created a functioning example and put it into two repositories:

  • Front-end – Rails 5 & react-rails & Her – https://github.com/ane/rails-react-frontend

    A Rails 5 app combining react-rails and Her to talk to the back-end.

    To install, clone the repo, run bundle install, run foreman start. This will start the Rails server and the live re-loader.

  • Back-end – https://github.com/ane/rails-react-backend

    It’s a dead simple Sinatra REST API that uses SQLite3. This is obviously not suitable for production.

    To install, clone the repo, run bundle install, run rackup.

This application will never match a real SPA. Part of the front-end does not live in the browser, so we rely on a second web server to run it. So it is an illusion, but as illusions go it is close enough, and it is easy to use.

Conclusion

JavaScript front-end development, as it currently stands, is painful. One has to master many command-line tools that, instead of being unified into a single tool, each continue to diverge and grow larger and more powerful. The result is a confusing developer experience.

In this post, I showed that we can take the good parts of modern JS development and use them to modernize an older application stack, mimicking the user experience of a SPA without building one. The application uses a clever library — Turbolinks — to convert page requests into XHRs, creating the illusion of a single-page application.

The end result is a half-stack web framework: we yank data access out of a monolithic full-stack framework (Rails) and make it use a REST API, and we replace its presentation logic (ERB) with React. The framework is left to handle client state, routing and asset pipelining, which are the painful parts of SPA development, and the UI is rendered using React. The Model–View–Controller triad is thus distributed across three places: Rails for UI state, React for UI rendering, and the REST API for the actual business logic. Effectively, this reduces Rails to a thin SPA-like front-end over a REST API!

Where to go from here? Here are some interesting things that could be explored:

  • Turbolinks with React. Use React to parse the HTML returned by Turbolinks (if rendered on the server) and use the React virtual DOM to update the DOM, instead of blindly swapping the body element.
  • GraphQL. Although Her is nice, we could use GraphQL when communicating with the backend and also use it as a communication method between Rails and React.
  • TypeScript. I like static typing, but currently react-rails doesn’t really work that well with TypeScript.
  • React On Rails. A different kind of React & Rails integration, which lets you use Webpack. React On Rails is more flexible than react-rails: you get the full power of Webpack and NPM here, so this is both good and bad.

All in all, this solution is a compromise.

Compared to a full-stack Rails app, we have to do extra work in creating a REST API backend, but the result is an app that’s easier to manage due to the separation of concerns. With a separate data access layer — the REST API — complex business logic is contained in a single place. It is easy to couple several clients to such a back-end, and our Rails app is just one of them.

But, compared to a full-fledged SPA, this app will never be as quick, it will never be as fluid, and it may not represent what cutting-edge front-end development looks like today, but it is simple, there is one build tool (bundler), and it is fun to develop in.

I might miss fancy things like state hydration and Redux, but the insanity of Webpack, Gulp, Babel and NPM, I will not miss.


Call me old-fashioned, call me a curmudgeon, but I think web development has become stupid and superficial. The unending quest towards single-page apps (SPAs) has made web development extremely painful, and the current trend is diverging in seven different directions at once. On one end, we have rich SPAs that can be built as native applications; on the other, we have something completely orthogonal, around which a schism is beginning to form.

The underlying problem is, unfortunately, that the web is being misused as an application container instead of the text transport medium it was made to be. It’s no use crying over spilled milk; the web has been subverted, transformed, and improved upon, so much so that we don’t know what the original even looked like.

How it was

In 2006, the hot new thing was Ruby on Rails or Django. If you weren’t using them, odds were you were using PHP or ASP.NET. Most intranet software ran on SharePoint or, I kid you not, WordPress. Users didn’t really care either way.

People liked Rails and Django because they made web development stupidly simple. No more SQL, just create your models and migrations. An architecture that made sense, MVC, was applied, and web apps became a little bit better. Meanwhile, the overall web development experience got a lot better.

Of course, the web was slower back then. Chrome wasn’t around, so JavaScript usage was very limited. Google began shipping under-the-hood requests in Gmail around 2004, but before that hardly anybody had heard of AJAX. The concept of doing more than one request per page load was completely unheard of. Users liked faster page loads, so when Chrome came around with V8, customers suddenly started giving a shit about what browser they used.

Where it all began

On the surface, the appeal of SPAs was obvious. It started with Gmail and AJAX. No more slow page loads; the applications behaved like native applications, and soon they even looked like them! Innovative as that was, we now use so many web-only applications that we’re slowly starting to forget what the native app experience was like.

The problem was that the front-end alone wasn’t enough: you needed a backend. Where before there was one application, now there were two, and they were usually completely different from each other. The backend–front-end split was fuzzy to begin with, and this introduced uncertainty and a possibly pointless abstraction. Put the “slow” and “heavy” things in the backend and let the front-end handle rendering and the user interface; all the backend had to do was supply serialized data. Even back then, people started asking questions about the SEO effects of rendering a page entirely in JavaScript. No solution was given, although one existed, but it was weird.

So while the backend folks built eleventy versions of Sinatra, the front-end folks got busy. In a short time we had Backbone, Angular, and Knockout, then we got frameworks like Durandal and Meteor.js. Finally, Facebook looked at the performance of desktop applications, then looked at the performance of web applications, thought, “holy shit”, and did something about it.

People got scared. It was mixing business and presentation logic, they said. It was mixing JavaScript with something eerily like XML, and everyone said XML sucked. Then people got over their usual trepidation towards $newTechnologyOfTheYear and got on with their lives. Now React is being used left and right.

The only problem was, React was a templating engine at heart. Facebook did not build a bridge for existing front-end frameworks, so that people could have just dropped in React instead of, say, Handlebars or even ERB. Facebook did not do this because they already had their own way of rendering content. They didn’t need a bridge. Build your own, they said.

Faced with just a templating engine, developers got confused. “How do I do routes with this?” they asked. So we built routing engines and state containers, and got on with our lives. Soon after, someone realized React ran quite fine on a Node.js server, and people started rendering pages in two places: the backend and the front-end.

Now, people are using React – a JavaScript library to be run inside a browser – to create native mobile applications. Meanwhile, other folks think, all of this, this excession, is simply too much, and want pages to load quickly.

Couple this with the, at best, bizarre experience of JavaScript development in 2016, and things are looking weird. The tooling iterates at an impossible speed, a new build system emerges every year, and developers must stay on top of things.

Having to stay on top of things is, generally, a good thing. Software progresses, and it progresses so fast that we must constantly learn to stay employable and to keep the profession enjoyable. But at this speed, when it seems we’re not really learning from the past, it’s not doing anyone any good. React took a good idea from desktop applications, event-driven user interface rendering, and executed it brilliantly in porting it to the web.

The thing is, it’s still nothing new. Ten years ago we were building crappy and weird-looking software in C#, now we’re building crappy and broken software in a mix of JavaScript and other languages, and they run in the browser, or on smartphones, and they’re responsive, so that when you tilt your tablet sideways, that big fat menu disappears. Huh.

That’s what they call the churn.

The churn. New technologies come and kill the old technologies, but in the midst of it all stand you and I, wondering what the hell to do with this mess. From the other side of it all, from the ivory tower of the real world, the business analysts cast their shadow and remind us that these technologies are tools; they’re meant to be replaced, they’re disposable. So are we, if we can’t learn new ones, they keep reminding us.

So?

I make it sound as if web development is impossible, but that couldn’t be further from the truth. Browsers are getting better and faster. Our applications are prettier, faster, more accessible, more usable. The web is replacing desktop applications and this trend is accelerating – whether this is a good or bad thing, I don’t know.

The only problem is that the development experience keeps reinventing itself at such a pace you may as well put yourself into stasis and wait for things to settle. Wait for front-end development to become boring. Odds are you can sleep for quite a bit until that happens. The second option is just to pick whatever works right now and use it.

The optimistic part is that we, as web developers, are learning: we’re doing some cool things and unifying two halves of the same thing. The backend guys are innovating, and tooling progress is insane and exciting. So I cannot claim that we haven’t gotten anywhere; we have innovated, learned, and improved the Web. But by how much? Are our end users happier?

A concrete solution

Given the task of implementing a web application, what would I do, given the state of the art in 2016? I spent about four years developing SPAs with many frameworks. I hate them all. Given that sentiment, this is what I would do:

  1. Using a language of your choice, build a business logic API that can be used via REST or some other RPC protocol. The language and its associated tooling should be performant and support rapid iteration.
  2. Use a batteries-included web framework, spiced with a rendering framework of your choice, to create the front-end.
  3. Build many front-ends, not just for the web, but for mobile and perhaps even desktop, and keep them thin.
  4. The web front-end can be spiced up (but not replaced) using JavaScript. Come to think of it, I would have done the same thing in 2006.

Point 4 originates from my experience of creating and maintaining SPA applications. I think SPAs are, by and large, a bogus concept. A web application loading another page isn’t intrinsically a bad idea, if your application is fast enough. Conversely, if your SPA is slow, you’re doing it wrong. SPAs were invented for speed, because conventional web frameworks were slow. This is not the case anymore. Sure, you won’t see Rails, Django or Play beat the TechEmpower benchmarks, but we’ve come a long way from five years ago, which is when people started to play around with SPAs.

Given the speed improvements, why not go full-stack? Why a front-end and a back-end?

The answer for this is not simple. It is because we’re dealing with two incompatible abstractions:

  1. Building your application as an API means you need a client application to provide the user interface.
  2. To build such an interface, your application has to deal with the fact that HTTP, and thus REST, is stateless.
  3. Web applications are usually stateful.
  4. This leads inevitably to the requirement of building an abstraction in the middle that handles client state, which your API does not support.
  5. Building such an abstraction – the front-end – requires a lot of work, e.g. by using an MVC (or MVVM, whatever) model. Double the work, half the fun.

So, the back-end abstraction is incompatible with client state, but the front-end application requires client state. Conversely, a full-stack application is often a heavy monolith: it needs to handle data access, its modification and its presentation in the same package. Here, as they say, be dragons. We want to keep business logic and presentation logic separate, hence, a full-stack framework does not work on its own.

As a solution, I offer a synthesis: mixing a REST back-end with a full-stack front-end. The back-end can be built using whatever language is performant and maintainable. Build your front-end with a boring framework like Rails, Django or Pyramid; let it fetch its data from the REST API, i.e., treat the API as the data source. Let the front-end handle client state on its own. What you get in return:

  1. The ease of use of said framework. These frameworks were invented for a reason. You get routing, templating, asset pipelines etc. out-of-the-box.
  2. You can still do AJAX requests easily to build rich user interfaces.
  3. A reusable API in the backend that you can use in other applications, keeping your web front-end an equal citizen.

If you don’t want to deal with framework bloat, or if you’re scared of non-JavaScript applications, be my guest: build your own front-end using the essentials. Splurge on Gulp, ES6, React, and Redux. Or use TypeScript. But I dare say, having worked with both full-stack frameworks (e.g. Rails) and SPA+REST setups, the compromise above is much more pleasant.

In the end though, it doesn’t really matter: with the exception of a few, our end users couldn’t care less. They really don’t give a shit. So, pick whatever technology works for you and your users. The above is just one option.