Wednesday, December 25, 2013

A Consultant's Wishlist

As a consultant I am called in to solve specific problems, and I would like to start working on the problems at hand as soon as I can. But very often there are other problems I have to overcome before I can spend my time on the problems I am being paid to solve. Here is my wishlist.

A Reliable Internet Connection

Most people in Sweden have a really good Internet connection, but it is surprising how many big companies don't. Even more surprising is that this is not seen as a major problem. A consultant usually costs at least 100 euros an hour; having us sit idle because of a slow network is bad business practice.

Some companies take it even further. If the connection is not flaky enough, they add an even flakier proxy, which turns every task into extra work, since many of the tools we use for daily web development are not easily configured to work with a proxy.

Please give me fast, reliable networking without obstructions.

Personal Computer

I want to work with my personal computer. I have configured it to make me as effective as possible and I don't want to be forced to work on another computer just because that is corporate policy. Having my personal computer also allows me to work from wherever I am, at home, on the train, etc. Sure, I can configure the computer that I am given to work the way I want it to work, but that is time spent on a different problem than I was hired to solve.

Software as a Service

I also find it frustrating to have to use legacy tools. Many of them do not work well and I would like to do without them. We are living in the era of Software as a Service, and a lot of the best tools are provided as services. As a consultant I already have accounts for them, and it is only a click away to make me part of the company team.


I have an email address. A remarkable thing about an email address is that it works no matter where I am. I don't want to use corporate email and be part of various groups that I care nothing about. I want to use my own email that I can access from anywhere through a good interface.


If a company is using Github, that means I can be set up and working on the source code in a matter of minutes.

Apart from being easy to set up and familiar to me, Github has a number of advantages over other solutions.

  • Built-in markdown parsing allowing documentation with simple navigation to other documents, issues, code, pull-requests and commits.
  • Issue tracking seamlessly integrated with version control allowing for easy tracking of issues through the code.
  • Visualization of branches allows developers and other interested parties to see what is being worked on at any time.
  • Pull requests are a simple way for developers to do code reviews and to communicate publicly about code.


Most teams I work with use (or try to use) an agile process. This often means that we use a Kanban board of some kind. I have tried a number of them and Trello is the one I like the most. It is lightweight, provides (near) real-time updates to the board, and it has clients for Web, Android and iOS. This makes it easy to see what is being worked on and easy to update or add new tasks even when I am not at the computer.

Travis CI

Travis is continuous integration as a service. It is ridiculously easy to set up and it integrates really well with Github. I know that it is easy to set up a Jenkins server to do this, but what is the point when Travis is already set up and running?

Campfire or Skype

Campfire and Skype are awesome tools for collaborating teams who want to keep in sync across time and space.

Platform as a Service

I also find it much better if the product being worked on is hosted by a service provider that can be easily accessed from anywhere. Depending on what level of control you need, there are a number of options. I like Heroku and Nodejitsu for hosting Ruby and Node, and that is often all I really need. If I need more control, it is easy to switch to Amazon or another IaaS provider.


I want the best for my clients. To do this I want to work with really good tools, from anywhere. I want to work in an environment that keeps me happy and productive. Merry Christmas!

Wednesday, November 06, 2013

References from Habits of a Responsible Programmer

A list of references from my talk, Habits of a Responsible Programmer, at Øredev. First out is the blog post that inspired the talk.


Some books about habits and how our brain works in good and not so good ways.


Steve Yegge's blog post that made me realize that it is time to learn to touch type, along with some tools that can make it fun to learn.

Clear Code

It is worth learning Smalltalk, just to understand the book, Smalltalk Best Practice Patterns.

Programming Techniques

SICP is a classic and it deserves to be read but Concepts, Techniques, and Models of Computer Programming is just as good if not better.

Read Code

Source code to a number of projects with beautiful code.

  • CloudFoundry - beautiful Ruby code, open-source Platform as a Service.
  • Levelup - Node wrapper for the LevelDB key-value store.


A good short article by Martin Fowler on the tradeoffs involved with writing explicit or implicit code.


The book by Fowler is a timeless reference book and it is required reading for anyone serious about programming. Kerievsky's book gives more in depth examples and finally Reg's blog post discusses reasons for not refactoring just to please your own ego.

Simple vs. Easy

Great talk!


I like Sandi Metz's way of explaining why testing is important.


Documentation matters, but keep it simple.


The original is no longer available, but the videos are uploaded to YouTube.


Generating Code

Tools for generating code. Here documents are the simplest possible way, but if you want to generate multiple files it is better to use Thor or Yo.


The ultimate book on continuous delivery, a must read for anyone interested in automation. And, that should be everyone!


Wonderful book about different stereotypes of people.


Two articles on estimation by Martin Fowler and Dan North.

Life, the Universe, and Everything

Books about happiness, the mind, and other things.

Thursday, October 17, 2013

Tunneling to localhost via SSH

Sometimes when working with a new web site I have customers who want to see the site while it is still in development. One way of doing this is to have alternative demo servers where all we do is serve up our work in progress. This works fine most of the time, but sometimes I just want to serve up my local machine.


One easy way to do this is to use localtunnel. localtunnel is a Ruby gem that is meant for exactly this purpose. Here's how:

$ gem install localtunnel
$ localtunnel --key ~/.ssh/ 80
  This localtunnel service is brought to you by Twilio.
  Port 80 is now publicly accessible from ...

It is now possible to access localhost:80 via the URL. Simple as pie!

But what if you don't like pie? Or Ruby, for that matter? Or what if you don't want to serve your secret data through another company (Twilio in this case, who graciously provide the hosting for free)?

Well, you are in luck: it is easy to set up your own tunnel via SSH, provided you have access to a server that is reachable from the Internet. And everybody has access to such a server via Amazon EC2 or a similar service. Make sure the server is accessible on all high ports. On Amazon this is done by opening all incoming ports above 1024 in a security group.

Setting up a tunnel via SSH

In SSH lingo a tunnel from an external server to my local server is called a reverse proxy. Here is how to set one up. First you need to configure the remote ssh daemon to allow setting up remote interfaces. A remote interface is one that can be accessed from a server other than localhost, which is what we want.

Here is how to do it; the server in question is my AWS server.

    # Login to the remote server
    $ ssh -i ~/.ssh/id_rsa

    # Edit the sshd configuration
    $ sudo vi /etc/ssh/sshd_config
    # Find the line #GatewayPorts no
    # Change it to GatewayPorts yes
    # Save and exit

    # Restart the daemon
    $ sudo /etc/init.d/sshd restart
    Stopping sshd:                                             [  OK  ]
    Starting sshd:                                             [  OK  ]

    # Exit the shell and return to your local machine
    $ exit

Now you are good to go. Let's assume you have a server running on port 3000 that you want to display to the world.

    $ ssh -i ~/.ssh/id_rsa -N -R *:0:localhost:3000
    Allocated port 34070 for remote forward to localhost:3000

Now you can surf to

And it will access your local machine. :)

When you stop the command (Ctrl-C) the tunneling will stop.

Command explanation

-i  identity file (private key)
-N  Do not execute a remote command, just set up the port forwarding
-R  *         All interfaces should be forwarded
    0         Open forwarding on any available port (34070 in the example)
    localhost Forward to localhost
    3000      The local port to forward to.
ec2-user      The user on the remote server

If you want to simplify it for yourself, add the following script to a bin directory.

    #!/bin/sh
    ## Script tunnel

    set -o errexit

    # Default to port 3000
    port=${1:-3000}

    ssh -i ~/.ssh/id_rsa -N -R \*:0:localhost:$port

Now all you have to do to enable remote access is run tunnel 80, or whatever port you want to expose.

Escaping the proxy Jail

The story could have ended here, but some people, trapped behind corporate firewalls, may not be allowed to use ssh. The traffic is blocked by a corporate proxy server. Well, there is a happy ending for you too and it is fittingly called corkscrew. It allows you to screw yourself out of the corporate jail and into the world.

Here is how you do it on Ubuntu; on OS X use brew instead.

    # Install corkscrew
    $ sudo apt-get install corkscrew

    # Edit your ~/.ssh/config, add
    Host *
      ProxyCommand corkscrew 8080 %h %p

    # If you need to authenticate to get through the proxy the line should read
    Host *
      ProxyCommand corkscrew 8080 %h %p ~/.ssh/proxyauth

    # And you need to add username:password to ~/.ssh/proxyauth
    $ echo "proxyusername:proxypassword" > ~/.ssh/proxyauth

%h and %p are filled in by ssh with the host and port of your destination.

Freedom's just another word for nothing left to lose,
Nothin' don't mean nothin', honey, if it ain't free.
Yeah, feeling good was easy, Lord, when he sang the blues,
You know feeling good was good enough for me,
Good enough for me and my Bobby McGee.

Thursday, June 20, 2013

Solving the Expression Problem in Javascript

I just watched a great presentation by Daniel Spiewak called Living in a Post-Functional World. I watched it mainly because I heard it was a great presentation on how to deal with modules, which it was. The concepts are just as important in Javascript as they are in Scala.

But at the end of the presentation Daniel talks about the Expression Problem as defined by Philip Wadler.

Here it is as summarized by Daniel Spiewak:

The Expression Problem

  • Define a datatype by cases
  • Add new cases to the datatype
  • Add new functions over the datatype
  • Don't recompile
  • Good Luck!

Functional Style

If we try to solve the problem in a functional style, we get something like this (also from Daniel's presentation).

sealed trait Expr
case class Add(e1: Expr, e2: Expr) extends Expr
case class Sub(e1: Expr, e2: Expr) extends Expr
case class Num(n: Int) extends Expr

def value(e: Expr): Int = e match {
  case Add(e1, e2) =>
    value(e1) + value(e2)

  case Sub(e1, e2) =>
    value(e1) - value(e2)

  case Num(n) => n
}

The functional style uses pattern matching. We see that it is easy to add new functions, such as a toString that returns a string representation of the expression, without changing any existing code. But if we add a new case, such as Mul, we have to change all the existing functions.

Here are the main points of this solution:

  • Dumb cases
  • Every function enumerates full algebra
  • Very easy to add new functions
  • Very difficult to add new cases

We get an open set of functions and a closed set of cases!
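For comparison, the functional shape can also be sketched in Javascript (my own translation, not from the talk): dumb tagged objects carry the data, and every function switches over the full set of cases.

```javascript
// Dumb cases: plain tagged objects carry data, no behavior
function Add(e1, e2) { return { tag: 'add', e1: e1, e2: e2 }; }
function Sub(e1, e2) { return { tag: 'sub', e1: e1, e2: e2 }; }
function Num(n) { return { tag: 'num', n: n }; }

// Every function enumerates the full algebra; adding a Mul case
// would mean revisiting every switch like this one
function value(e) {
  switch (e.tag) {
    case 'add': return value(e.e1) + value(e.e2);
    case 'sub': return value(e.e1) - value(e.e2);
    case 'num': return e.n;
  }
}
```

Adding a new function is just another switch; adding a new case means touching them all.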

Object-Oriented Style

If we try to solve the problem in an object-oriented style, we get something like this (again from Daniel's presentation).

sealed trait Expr {
  def value: Int
}

case class Add(e1: Expr, e2: Expr) extends Expr {
  def value = e1.value + e2.value
}

case class Sub(e1: Expr, e2: Expr) extends Expr {
  def value = e1.value - e2.value
}

case class Num(n: Int) extends Expr {
  def value = n
}

The object-oriented solution uses subtype polymorphism. We see that it is easy to add new classes, such as a Mul, but if we try to add a new function, we have to change all the existing classes.

Here are the main points:

  • Smart cases, i.e. Objects
  • Every case enumerates all functions
  • Very easy to add new cases
  • Very difficult to add new functions

We get a closed set of functions and an open set of cases!

Dynamic Style

Now let's solve it with Javascript in a dynamic style. The solution looks a lot like the subtype polymorphic solution above.

function Add(e1, e2) {
    this.e1 = e1;
    this.e2 = e2;
}
Add.prototype.value = function() { return this.e1.value() + this.e2.value(); };

function Sub(e1, e2) {
    this.e1 = e1;
    this.e2 = e2;
}
Sub.prototype.value = function() { return this.e1.value() - this.e2.value(); };

function Num(n) {
    this.n = n;
}
Num.prototype.value = function() { return this.n; };

Just as in the polymorphic solution, it is easy to add a new class.

// Adding a new class
function Mul(e1, e2) {
    this.e1 = e1;
    this.e2 = e2;
}
Mul.prototype.value = function() { return this.e1.value() * this.e2.value(); };

But what about adding new functions? It turns out that this is just as easy, because of the dynamic nature of Javascript. We just add them to the prototypes.

// Adding new functions to existing prototypes
Add.prototype.toString = function() {
  return '(' + this.e1.toString() + ' + ' + this.e2.toString() + ')';
};
Sub.prototype.toString = function() {
  return '(' + this.e1.toString() + ' - ' + this.e2.toString() + ')';
};
Num.prototype.toString = function() {
  return String(this.n);
};
Mul.prototype.toString = function() {
  return '(' + this.e1.toString() + ' * ' + this.e2.toString() + ')';
};

Now getting a string representation of an expression is as simple as:

var x = new Num(1);
var y = new Num(2);
var z = new Add(x, y);
var w = new Sub(x, y);
var e = new Mul(z, w);

e.toString(); // returns ((1 + 2) * (1 - 2))

Well, isn't that nice!

Sometimes I feel like I don't have a problem
I don't ever feel like I did before
But take me to a place I love, a dynamic place!
I don't ever feel like I did before
But take me to a place I love, a dynamic place, yeah, yeah, yeah!

Misquoting Red Hot Chili Peppers :)

Friday, May 24, 2013

A Critique of the Thoughtworks Tech Radar on Javascript Testing

The Thoughtworks' Tech Radar has come out again and there is no change in the recommendation on Javascript testing.

The radar recommends to "Adopt Jasmine paired with Node.js". This is very specific advice. It's not "Adopt Javascript testing paired with Node.js" but a specific tool, Jasmine. Compare this with more general advice such as "Adopt CSS Frameworks" or "Promises for asynchronous programming". Nothing specific there and, hence, nothing wrong.

There is no motivation as to why we should adopt Jasmine, but if we look at the Tech Radar from October 2012 (PDF), we can read:

The cream of the crop for out-of-browser testing is currently Jasmine. Jasmine paired with Node.js is the go-to choice for robust testing of both client- and serverside JavaScript.

"The cream of the crop"? This is certainly not the case! Jasmine is an elegant testing framework similar Ruby's RSpec but, it is not "the cream of the crop"! It has one major drawback. It sucks at testing asynchronous code! And, since asynchronous code is a central theme in Javascript programming we should not use Jasmine! There are at least two better alternatives:


In this example I want to test a simple asynchronous sort function called sleepsort; you can read more about it in Writing a Module.

The function signature looks like this:

// Sleepsort takes an array of numbers and calls callback(result)
// with the sorted array as the only argument
function sleepsort(array, callback) {
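For context, here is a minimal sleepsort sketch of my own (not the implementation from Writing a Module): each number schedules itself after a delay proportional to its value, so smaller numbers arrive first.

```javascript
// Sleepsort: each number is pushed to the result after a delay
// proportional to its value, so smaller numbers finish first.
// A toy algorithm; it breaks down for large or negative numbers.
function sleepsort(array, callback) {
  var result = [];
  array.forEach(function (n) {
    setTimeout(function () {
      result.push(n);
      if (result.length === array.length) callback(result);
    }, n * 20);
  });
}
```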

Testing this with Jasmine will look something like this:

it('sorts the array', function () {
  var self = this;

  runs(function() {
    sleepsort([1, 3, 2, 7], function(result) {
      self.result = result;
    });
  });

  waits(1000); // Wait one second, then check the result

  runs(function () {
    expect(self.result).toEqual([1, 2, 3, 7]);
  });
});

This is horrible! Compare this to Mocha, with should.js assertions:

it('Sorts the array', function(done) {
  sleepsort([1, 3, 2, 7], function(result) {
    result.should.eql([1, 2, 3, 7]);
    done();
  });
});
Thank you, Sir, may I have another!

Indeed you may, here it is with Buster.js, with Object style syntax:

"Sorts the array": function (done) {
  sleepsort([1, 3, 2, 7], function(result) {
    assert.equals(result, [1, 2, 3, 7]);

Both Mocha and Buster.js pass in a done function to call when execution has finished. Elegant and, above all, crystal clear.

It is very obvious to me that Jasmine is no longer "the cream of the crop" of Javascript testing. But what do I know; perhaps it was a copy-paste error.

I've done my duty, now I can go to bed!

Duty Calls from XKCD

Tuesday, April 30, 2013

A Responsible Programmer

In the last few years I have been asked to help salvage several web projects gone bad. The quality of the projects, code, environment, documentation, and morale has been low. To know what I think is right in situations like this, I started asking myself the question, "What would a responsible programmer do?".


Above anything else a responsible programmer values clarity. Not only does she value clear code, but also clear documentation, clear communication, and a clear vision of where she and her project are going.


Write Consistent Code

The responsible programmer writes consistent code. Consistency helps other programmers read and understand her code. It lets them know what to expect. If she names constants with SCREAMING_SNAKE_CASE, they know that they won't change. When naming attributes in CSS and HTML, she will make all of them dash-er-ized, or none of them. This is easy stuff, but important. Consistency breeds familiarity. Familiarity is good: it removes worry and increases confidence.

When the responsible programmer contributes code to other projects, she will make sure that she consistently follows the style of the project. Sometimes it is not easy to tell what style a project uses; the responsible way is to ask what style is preferred and then use that style. By just asking the question, she will often trigger a review of the code which will help set a consistent style in the future.

By writing consistent code, a responsible programmer will make the program easier to understand and easier to maintain.

Don't Quick-Fix

A responsible programmer doesn't do quick fixes. When a bug needs to be fixed she fixes the root problem instead of fixing the symptom. If an event-handler suddenly starts receiving unnamed events the proper fix is not to ignore unnamed events but, to figure out why unnamed events come at all when not expected. She knows that fixing a symptom will only make the root cause much harder to find.

Write Short Functions

Short functions are easier to understand, easier to reason about and easier to test. It is the responsible thing to write. Enough said!

Separate Commands From Queries

CQS, or Command Query Separation, has become all the rage in the DDD world, but it was coined by Bertrand Meyer in his book Object-Oriented Software Construction in 1988. A good book, read it!

The responsible programmer separates her commands from her queries because she knows that they are easier to test and that she can call the queries many times without anything bad happening. Separating your commands from your queries may be as easy as:

// Can be called whenever, no side effects
function generateRoute(params) {
  return [params.major, params.minor, params.patch].join('/');
}

// Updates the hash with the new route
function updateRoute(params) {
  location.hash = generateRoute(params);
}

Refactor Mercilessly

Since a responsible programmer values clarity, she refactors mercilessly when her understanding of the system changes. She knows that the time invested in making the code a little more clear will prevent bugs and frustration in the future.

Prefer Explicit

A responsible programmer prefers explicit code over implicit code. Even though her understanding of advanced concepts such as meta-programming, monads and continuations is substantial, she prefers explicit code over beautiful abstractions. New programmers (including her future self) find it a lot easier to understand code that is explicit than code that is not.

Don't Fear Advanced Techniques

Since advanced techniques can make the code a lot simpler in certain situations she never shies away from advanced techniques when she realizes that they are called for. At the end of the day meta-programming and "advanced" functional programming techniques are just tools that should be used when appropriate.

Check Boundaries

A responsible programmer always checks the boundaries of her system to make sure that invalid data doesn't enter into the core of the application. This way she can avoid defensive programming in the core domain where clarity is even more essential than anywhere else.
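A minimal sketch of the idea (the names are mine): validate once at the boundary, so the core can trust its input.

```javascript
// Boundary check: reject invalid input at the edge of the system
function parseAge(input) {
  var age = parseInt(input, 10);
  if (isNaN(age) || age < 0 || age > 150) {
    throw new Error('Invalid age: ' + input);
  }
  return age;
}

// The core can now assume a valid number, no defensive checks needed
function isAdult(age) {
  return age >= 18;
}
```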

Wrap External Services

External services are one of the main reasons that development takes time, so the responsible programmer makes sure to always wrap external services with a local interface. This simplifies both testing and exchanging the services.

External Libraries

A responsible programmer will never use an external library she doesn't trust. She will never add a library to her code base unless there is a significant reason for adding it. When she adds an external library she will learn it. She will learn how to configure it, how it is to be called, what good practices for using it are, what bugs there are, etc.


A code base can be compared to a balanced tree. A balanced tree is a data structure that rebalances itself when new items are added. This makes modifying the tree more expensive, but has the benefit that accessing items can be performed in an optimal way.

A responsible programmer treats her code base as a balanced tree. She will never add code without thinking about balance. She knows that if the project loses its balance it may have to be entirely rewritten to regain balance again.

The balance of the code may shift as the code matures, when a major shift is called for the responsible programmer will refactor mercilessly to obtain a new optimal balance.


A responsible programmer writes and maintains documentation as the needs come up. The needs differ between projects, but most projects benefit from a system overview, a domain overview, a style guide, and code comments.

A responsible programmer makes assumptions all the time when coding, she writes the assumptions as comments in the code when she makes them. She tags them to make it possible to generate a list of assumptions.

The System Overview

The system overview is a drawing and a description of all the servers that are involved in the system. This includes databases, queues, web-servers, external services, etc. The description describes how the pieces fit together.

The Domain Overview

This is a drawing and a high level description of how the core domain of the system works. It includes the major concepts of the domain and what they mean.

The Style Guide

The style guide may be as easy as referring to Github's style guide, or you can write your own that alters someone else's. Any way you do it, it is worth having it written down.

Code Comments

Comments in code should be very sparse and only added to point out idiosyncrasies. When assumptions are made, they can be written as comments with a tag to make it possible to generate a list. Example:

// ASSUMPTION: The list is expected to be small and will
// be entirely loaded from the server
function loadCities() {
  // ...
}


The responsible programmer tests! She doesn't test for the sake of testing or to increase code coverage, she tests to be sure that the code works as she expects it to.

She knows that dynamic environments such as Javascript and web browsers are fragile and that it is easy to break code without meaning to.

How to write good tests has been written about elsewhere and I won't spend any time on it here. I can recommend the last chapter in Sandi Metz's book, Practical Object-Oriented Design in Ruby, if you are interested in good techniques for testing in dynamic programming languages.


The responsible programmer owns her environments. In a project there are at least three environments to care about, Production, Test, and Development. There is also the development machine itself.

The responsible programmer can set up all project environments with a single command that installs everything that is needed, including databases, seed data, libraries, search engines, tools, environment variables, SSH-keys. Everything!

This will allow a new programmer starting on the project to be set up within minutes. It will also allow her to experiment with anything without having to worry about destroying the setup and losing days debugging the environment.

Her personal development machine is also perfectly configured at all times. If she learns a new trick, she will immediately incorporate it into her toolset and into her configuration.

She is automatically prepared for catastrophes. If her hard disk crashes she can just buy a new one at the local supermarket and install her configuration files and be ready to go within hours.

To make this happen she always keeps backups of her configuration files. She keeps the non-secret ones on Github and the secret ones, such as SSH keys and passwords elsewhere.

Continuous Integration/Deployment

Another part of the environment is continuous integration. If a project doesn't use continuous integration it is a clear sign that it is not healthy.

The continuous integration server is just another environment, and setting up a new one is done with a single command, just like the others.

The responsible programmer will set up and maintain continuous integration just like she does every other environment.


In order to achieve the environment goals, a responsible programmer knows how to script. Scripts are not only essential for keeping your environments up to date, they are essential for automating simple tasks: generating code, testing, refactoring, renaming, installation, automating checklists, etc.

A responsible programmer asks herself, "When did I ever do something only once? Never!" Writing a script that does what she wants frees her from having to remember the sequence of instructions required to do a task and lets her focus on more important things. It also serves as runnable documentation.


A responsible programmer knows her tools. She will always try to learn more about them and she will replace them if other tools are invented that work better. But she doesn't change her tools for the latest fashion.

The command line is a very powerful tool, and so are a scripting language and a scriptable editor.

Version Control

The responsible programmer uses version control to communicate with her future self and with other programmers. She knows that a clear commit message will help her and others understand what has happened to the system.

She prunes her commits. When she has made several changes to a code base she will make sure that she commits the different changes separately by using something like git add --patch. She also knows that if she commits something by mistake she can alter the commit message or add forgotten files with git commit --amend, and that she can change the contents of her history with git rebase --interactive or with git reset.


Own It

A responsible programmer owns her project, she will not allow anything bad to happen to her code. This is a difficult goal to achieve when she comes into a project that has already gone bad, but it is a worthy goal and not one that should be taken lightly. All projects must have at least one person who owns the code. When people start talking about the code as if it is someone else's, it is time to shut the project down.

If a responsible programmer decides to take ownership of a project, she makes sure that she has the authority to make the decisions that she deems necessary. No authority, no responsibility, it's as simple as that.


Sometimes projects require estimates; most of the time they don't, but sometimes they actually do. A responsible programmer knows how to estimate. She knows that an estimate is just a guess, and that any task, however trivial, always has a small probability of taking infinitely long to finish (earthquake, meteor strike, blackout). There is also a small probability that the code has already been written at the time of estimation.

Being aware that the code may take infinitely long to finish she is very careful not to make any promises and she is very clear about her estimates being guesses.

Don't Do as They Are Told

Some people may not see this as a sign of a responsible programmer, but I beg to differ. When a programmer is told to do something, she will try to figure out what the real problem is. She may do this in several different ways. She may sit down, think through the "task", and come up with an alternate solution that solves the problem more simply or, even better, makes the problem go away completely. She may ask questions to help clarify the problem. Why is this a problem? Why do you do it like this? Why? Why? Why?

Some people may not like this and tell her to "Just fucking do it!". Her reply to this is something along the lines of "Just fucking do it yourself!" but she is usually a lot more polite so she may very well say "I don't understand what your problem is and, therefore, I am not the best person to solve it, please ask someone else."

She believes it is her job to understand what she is doing and why, and that life is too short to be a drone!

If she feels that she is not able to be a responsible programmer on a project, the responsible thing to do is to leave.


I have found asking myself the question "What would a responsible programmer do?" liberating. It clarifies what I should do in situations of doubt.

At the end of the day, the responsible programmer can look through the commit log and see a beautiful list of tasks that she has completed. She can look through each commit and see that they are cohesive and well described by their commit messages. She can git blame the code and see that every line of code that has her name on it reads well. She can look at her day's work, and she can feel proud!

Sunday, April 07, 2013

Javascript Conditionals

As we all know, Javascript is a very flexible language. In this article I will show different ways to execute conditional code, using some common idioms from Javascript and general object-oriented techniques.

Default values

Javascript does not support default values for arguments and it is common to use an if statement or a conditional expression to set default values.

function swim(direction, speed, technique) {
  // Default value with if statement
  if (!direction) direction = 'downstream';

  // Default value with conditional operator
  var speedInMph = speed ? speed : 2;
}

I usually prefer to use an or-expression instead. The short-circuiting or, ||, avoids the repetition of the conditional operator and is, in my opinion, more readable. Another advantage of avoiding repetition is that a slow function used as the condition, such as fastestSwimmer(), avoids the performance penalty of being called twice.

  // Default value with or.
  var swimTechnique = technique || 'crawl';

  // Function is only invoked once
  var swimmer = fastestSwimmer() || 'Michael Phelps';

Naturally, this technique is not limited to default arguments, it can be used to set default values from object literals too.

var options = { kind: 'Mountain Tapir' };
var kind = options['kind'] || 'Baird Tapir';

A simple, yet useful, technique.

Update 2013-04-13, as Jeffery mentions in a comment, the technique only works when falsy values are not acceptable. If values such as 0 or false are acceptable, you will have to explicitly test for undefined instead.
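To make the update concrete, here is a minimal sketch contrasting the two approaches. The function names are my own, invented for illustration:

```javascript
// With ||, a legitimate speed of 0 is silently replaced by the default.
function swimSpeed(speed) {
  return speed || 2;
}

// Explicitly testing for undefined keeps falsy values such as 0.
function swimSpeedSafe(speed) {
  return speed !== undefined ? speed : 2;
}

swimSpeed(0);     // 2 -- the 0 was lost
swimSpeedSafe(0); // 0 -- the 0 is kept
```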

Call callback if present

Another idiom in Javascript, especially in Node, is passing callbacks to other functions. But sometimes the callbacks passed in are optional. In this case we can use the short-circuiting and, &&, instead.

function updateStatistics(data, callback) {
  var result = doSomethingWithData(data);
  // Call the callback if it is defined
  if (callback) return callback(result);
}

function updateStatistics(data, callback) {
  var result = doSomethingWithData(data);
  // The last evaluated value of the `&&` is returned
  return callback && callback(result);
}

I, personally, prefer the first form with the explicit if because I think it communicates my intent better but, it is good to know about the technique anyway.

Update 2013-04-13, a better use for the technique is testing for the presence of objects before getting their properties.

function callService(url, options) {
  ajaxCall(url, options && options.callback);
}

Lookup Tables

If you have code that behaves differently based on the value of a property, it can often result in conditional statements with multiple else ifs or switch cases.

if (kind === 'baird')
  bairdBehavior();
else if (kind === 'malayan')
  malayanBehavior();
else if (kind === 'mountain')
  mountainBehavior();
else if (kind === 'lowland')
  lowlandBehavior();
else
  throw new Error('Invalid kind ' + kind);

I find this kind of code ugly and I don't think it looks any better with a switch statement. I prefer to use a lookup table if there are more than two options.

var kinds = {
  baird: bairdBehavior,
  malayan: malayanBehavior,
  mountain: mountainBehavior,
  lowland: lowlandBehavior
};

var func = kinds[kind];
if (!func)
  throw new Error('Invalid kind ' + kind);
func();

I find this code a lot clearer since it makes it obvious that the else clause handles an exceptional case and that the normal cases work similarly.

Missing Objects

If similar conditionals appear in multiple places in my code, it is a sign that I am missing an object somewhere. Since Javascript is duck typed I can use the same technique as above to create objects instead of just functions.

var kinds = {
  baird: { act: bairdBehavior, info: bairdInfo },
  malayan: { act: malayanBehavior, info: malayanInfo },
  mountain: { act: mountainBehavior, info: mountainInfo },
  lowland: { act: lowlandBehavior, info: lowlandInfo }
};

var tapir = kinds[kind];
if (!tapir)
  throw new Error('Invalid kind of tapir ' + kind);

I prefer to have this kind of code on the borders of my application. That way the code inside my core domain doesn't have to deal with complicated conditional logic. Polymorphism for the win!

Null Objects

If I notice that in many places I have to check for nulls, it is usually a sign that I haven't handled the special null case properly. In the example above I have handled it properly since I throw an Error if the kind of tapir does not exist. But sometimes it is not an error when the value is missing.

// If a non-existent kind is used, tapir will be undefined
var tapir = kinds[kind];

// In other places of the code
if (tapir)
  tapir.act();

// Somewhere else
if (tapir)
  tapir.info();

This type of code is rather unpleasant and it is time to break out the Null Object.

var tapir = kinds[kind];
// If a non-existent kind is used I use a Null Object
if (!tapir)
  tapir = { act: doNothing, info: unknownTapirInfo };

// In other places of the code the conditionals are gone.
tapir.act();

// Somewhere else, no special case here.
tapir.info();
Null Objects are not appropriate everywhere, but I often find it very enlightening to have them in mind when I write code.


There are a lot of elegant ways to deal with conditional code in Javascript. I didn't even mention inheritance, since it works similarly to the object approach I showed above. But if I needed multiple instances of something I would of course use polymorphism through inheritance instead.
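As a rough illustration of that last point, here is a hypothetical sketch of dispatch through prototypal inheritance. The constructor names are my own and are not part of the lookup-table example above:

```javascript
// A base "class" with a default behavior.
function Tapir() {}
Tapir.prototype.act = function() {
  return 'grazing';
};

// A subtype overrides the behavior; no conditionals needed anywhere.
function BairdTapir() {}
BairdTapir.prototype = Object.create(Tapir.prototype);
BairdTapir.prototype.act = function() {
  return 'swimming';
};

// Each instance dispatches to the right behavior on its own.
var tapirs = [new Tapir(), new BairdTapir()];
tapirs.map(function(t) { return t.act(); }); // ['grazing', 'swimming']
```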

Friday, February 08, 2013

Web Workers

I recently wrote a program, Word Maestro, which requires extensive calculations in Javascript. The calculations, permutations and searching, are very CPU intensive and hang the GUI when performed in the foreground.

Web workers to the rescue! Web workers are supported by most modern browsers, with the exception of IE (surprise!). The IE10 release candidate supports them, but it is not very widespread yet. More info can be found at Can I Use.

How do Web Workers Work?

A web worker is just a plain Javascript file; it can contain anything. If you start a worker from an empty file, empty-worker.js, it will start up just fine and do absolutely nothing.

// empty-worker.js

To start a web worker you create a new Worker and give the constructor a URL as the only parameter. The URL must come from the same domain as the page loading the Worker.

// This code goes inside a script tag or in a file loaded by <script src>
// Start the empty worker, which does nothing.
var worker = new Worker('empty-worker.js');

In order to have any use for our worker we need it to communicate with us. The way a worker communicates is by sending messages. The method that does this is called postMessage(object). It takes any type of argument, primitives as well as arrays and objects.

// eager-worker.js
postMessage('I am eager for work!');
// self.postMessage('I am eager for work!'); // Safest way

It is also possible to prefix the call with this or self; they both refer to the same WorkerGlobalScope. self is the safest, since it will not change with the calling context the way this does.

Our eager-worker.js starts up and posts a message and we need to receive it. We can do that by setting the onmessage property on our worker reference.

// In script tag or file loaded by script tag
var worker = new Worker('eager-worker.js');
worker.onmessage = function(event) {
  console.log(event);
};

Reloading the page will result in the following output in the console. Notice that the data sent by the worker is available in the data property of the MessageEvent

MessageEvent {ports: Array[0], data: "I am eager for work!", source: null, lastEventId: "", origin: ""}

An alternative way of attaching a listener to the worker is to use addEventListener('message', listener). Adding the event listener this way has the advantage of allowing us to attach multiple listeners to the same worker. I have not had the need for this yet.

worker.addEventListener('message', function(event) {
  console.log('One, ' + event.data);
});
worker.addEventListener('message', function(event) {
  console.log('Two, ' + event.data);
});

Reloading the page with the above code, will result in two lines in the console log. Notice that I am only logging the data part of the event.

One, I am eager for work!
Two, I am eager for work!

Our eager-worker.js is really eager to work, so he keeps on telling his boss that he wants to work, every second.

// eager-worker.js
setInterval(function() {
  postMessage('I am eager for work!');
}, 1000);

This of course annoys the boss tremendously so he decides to tell the worker to do something by sending him a message with postMessage.

// main.js
var worker = new Worker("eager-worker.js");
worker.onmessage = function(event) {
  console.log(event.data);
};
worker.postMessage('Stop bugging me and do something!');

Our eager-worker.js is not listening yet, so the boss can scream all he wants without any success. Let's change that by implementing the onmessage method in the worker as well. addEventListener also works.

// eager-worker.js
postMessage('I am eager for work!');

var timer = setInterval(function() {
  postMessage('I am eager for work!');
}, 1000);

onmessage = function(event) {
  clearInterval(timer);
  postMessage('Alright Boss!');
};

Now the output is less annoying.

I am eager for work!
Alright Boss!

Now that we know the basics of web workers, let's look at some other interesting issues that come up.

Debugging Web Workers

If you try to use console.log in your web workers you will get an error message such as this:

Uncaught ReferenceError: console is not defined

So this is an issue with web workers: it is not possible to use console or alert to debug them.

It is not a big problem, because Chrome makes it possible to debug workers. In the lower right corner of the Sources tab of the Chrome Developer Tools, there is a Workers panel.

Checking the checkbox Pause on start will open up a new inspector window that allows us to debug the worker just as if it were a normally loaded script. Nice!


If there are script errors in the web worker, it will send back an error event instead of a message event. The errors can be handled via the onerror property or by subscribing to the error event.

worker.onerror = function(event) {
  console.log(event);
};
// or
worker.addEventListener('error', function(event) {
  console.log(event);
});

The above code will result in an event that looks like this, giving you the filename and line number of the error.

ErrorEvent {lineno: 6, filename: "http://localhost/web-workers/eager-worker.js", message: "Uncaught ReferenceError: missing is not defined", clipboardData: undefined, cancelBubble: false}

Web Worker Script Loading

A web worker can load additional scripts with importScripts(URL, ...). The URLs can be relative and, if so, are relative to the file doing the importing.

importScripts('../data/swedish-word-list.js', 'word-maestro.js', 'messageHandler.js')

A larger example

In this example I will show how easy it is to create a delegating worker that allows me to call normal methods on an object.

The messages are sent using a simple protocol with an object containing two properties.

// The message object
var message = {
  method: 'The method I wish to call',
  args:   ['An array of arguments']
};

main.js starts the delegating-worker.js and sends messages to it.

// main.js
var worker = new Worker("delegating-worker.js");
worker.onmessage = function(event) {
  console.log(event.data);
};

setInterval(function() {
  // Call the method echo with the argument ['Work']
  worker.postMessage({method: 'echo', args: ['Work']});
}, 4200);

setInterval(function() {
  // Call the method ohce with the argument ['Work']
  worker.postMessage({method: 'ohce', args: ['Work']});
}, 1100);

The delegating-worker.js loads the external script echo.js, which declares the variable Echo in the global worker scope. In the onmessage method I unpack the event and delegate the method call to Echo via apply. I use apply since I want to allow a variable number of arguments. The reply is sent back to main.js along with the method that was called.

// delegating-worker.js
importScripts('echo.js'); // Declares Echo

onmessage = function(event) {
  var method = event.data.method;
  var args = event.data.args;

  // I use apply to allow a variable number of arguments
  var reply = Echo[method].apply(Echo, args);
  self.postMessage({method: method, reply: reply});
};

The Echo service is a simple object with two methods.

var Echo = {
  // Return the word received.
  echo: function(word) {
    return word;
  },
  // Reverse the word and return it.
  ohce: function(word) {
    return word.split('').reverse().join('');
  }
};

Structuring the code in this way makes it easy to reuse the functionality in a non-worker context.
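For example, assuming echo.js contains nothing but the Echo object, the very same service can be used synchronously in the main script, without any worker at all:

```javascript
// The contents of echo.js: a plain object with no worker dependencies.
var Echo = {
  // Return the word received.
  echo: function(word) {
    return word;
  },
  // Reverse the word and return it.
  ohce: function(word) {
    return word.split('').reverse().join('');
  }
};

// Direct, synchronous use -- no postMessage involved.
Echo.echo('Work'); // 'Work'
Echo.ohce('Work'); // 'kroW'
```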

Limitations of Web Workers

Since web workers run in the background, they do not have access to the DOM, window, document or even the console. Any communication with these objects has to be done by sending messages back to the main script.

Checking for Web Worker support

Checking for Web Worker support is easy, just check if window.Worker is defined and show an error page or use an alternative solution if it is not.

function workersSupported() {
  return window.Worker;
}

if (!workersSupported()) {
  window.location = './unsupported-browser.html';
}

Wrap up

Using workers is easy, if you want to see a more thorough example, check out the source code (in Coffeescript) for Word-Maestro.