Sunday, May 11, 2014

Ping-Pong Pairing Over Git

When practicing new programming techniques I am a fan of ping-pong pairing. Ping-pong pairing is a TDD style of pairing that evenly distributes both the time each programmer spends at the keyboard and the amount of test code versus production code each programmer writes.

The description from the C2 Wiki reads:

Pair Programming Ping Pong Pattern

* A writes a new test and sees that it fails.
* B implements the code needed to pass the test.
* B writes the next test and sees that it fails.
* A implements the code needed to pass the test.

And so on. Refactoring is done whenever the need arises by whoever is driving.

Two programmers sit in front of one computer, with one keyboard and one screen.

Ping-pong pairing is great in a learning context because it keeps both programmers in front of the keyboard and it encourages conversation.

But, there are a couple of problems. Keyboards and editors! What if one programmer uses Dvorak and the other Qwerty? Or, one programmer cannot even think of writing another line of code without Das Keyboard while the other prefers an ergonomic keyboard? Or, one programmer uses Vim on OSX and the other Notepad on Windows?

Ping-Pong Pairing Over Git

What if we alter the setup to give each programmer their own computer, keyboard, screen, rubber duck, or whatever tickles their fancy? It would seem that this isn't pairing any more! But, if we place the pair side-by-side and let them communicate over Git, we actually get a very nice flow. There is still only one person typing at a time, but they are typing on their own keyboard.

The pattern above is changed to:

Ping-Pong Pairing Over Git Pattern

* A writes a new test and sees that it fails.
* A commits the failing test and pushes the code to Git.
* B pulls the code from Git.
* B implements the code needed to pass the test.
* B writes the next test and sees that it fails.
* B commits the failing test and pushes the code to Git.
* A pulls the code from Git.
* A implements the code needed to pass the test.

And so on.

I've tried this pattern at a couple of code retreats and it is actually pretty smooth. To make it even smoother I implemented a simple command-line utility, tapir, that allows for simple communication between the two computers and automates the pulling of new code. It works like this: each programmer starts a listener on their machine that pulls the code when it receives a message.

# Start a listener on 'mytopic', run git pull when a message arrives
tapir --cmd listen --script 'git pull' mytopic

Write a simple script that combines git push with a call to tapir mytopic:

#!/bin/sh
# ./push pushes the code to Git and pings the tapir server
git push
tapir mytopic

Now, instead of calling git push, you call ./push and the code is automatically pulled on the other machine, eliminating one step from the loop.


Ping-pong pairing over Git is nice! If you are interested in trying it out I have a roman numerals kata with setup code for multiple languages, currently: Clojure, ClojureScript, Javascript, Lua, VimScript, Objective-C, PHP, Ruby and C.

The tapir command-line utility is also pretty interesting as it uses Server-Sent Events to communicate over a standard HTTP server.

Why is the utility called tapir? Because, pingpong and ping-pong were already taken and I like tapirs! :).

Tuesday, March 25, 2014

Running Scripts with npm

Most people are aware that it is possible to define scripts in package.json that can be run with npm start or npm test, but npm scripts can do a lot more than simply start servers and run tests.

Here is a typical package.json configuration.

// package.json
// Define start and test targets
{
  "name": "death-clock",
  "version": "1.0.0",
  "scripts": {
    "start": "node server.js",
    "test": "mocha --reporter spec test"
  },
  "devDependencies": {
    "mocha": "^1.17.1"
  }
}
// I am using comments in JSON files for clarity.
// Comments won't work in real JSON files.

start actually defaults to node server.js, so the above declaration is redundant. In order for the test command to work with mocha, I also need to include mocha in the devDependencies section (it works in the dependencies section too, but since it is not needed in production it is better to declare it here).

The reason the above test command, mocha --reporter spec test, works is that npm looks for binaries inside node_modules/.bin, and when mocha was installed its binary was linked into this directory.

The code that describes what will be installed into the bin directory is defined in mocha's package.json and it looks like this:

// Mocha package.json
{
  "name": "mocha",
  "bin": {
    "mocha": "./bin/mocha",
    "_mocha": "./bin/_mocha"
  }
}

As we can see in the above declaration, mocha has two binaries, mocha and _mocha.

Many packages have a bin section, declaring scripts that can be called from npm, similar to mocha. To find out what binaries we have in our project we can run ls node_modules/.bin.

# Scripts available in one of my projects
$ ls node_modules/.bin
_mocha      browserify  envify      jshint
jsx         lessc       lesswatcher mocha
nodemon     uglifyjs    watchify

Invoking Commands

Both start and test are special values and can be invoked directly.

# Run script declared by "start"
$ npm start
$ npm run start

# Run script declared by "test"
$ npm test
$ npm run test

All other values have to be invoked with npm run. npm run is actually a shortcut for npm run-script.

  "scripts": {
    // watch-test starts a mocha watcher that listens for changes
    "watch-test": "mocha --watch --reporter spec test"
  }

The above script must be invoked with npm run watch-test; npm watch-test will fail.

Running Binaries Directly

All the above examples consist of running scripts that are declared in package.json, but this is not required. Any of the commands in node_modules/.bin can be invoked with npm run. This means that I can invoke mocha by running npm run mocha directly, without declaring a script for it.

Code Completion

With a lot of modules providing commands it can be difficult to remember what all of them are. Wouldn't it be nice if we could have some command completion to help us out? It turns out we can! npm follows the superb practice of providing its own command completion. By running the command npm completion we get a completion script that we can source to get completion for all the normal npm commands including completion for npm run. Awesome!

I usually put each completion script into its own file, which I source from .bashrc.

. <(npm completion)

# Some output from one of my projects
$ npm run <tab>
nodemon                  browserify               build
build-js                 build-less               start
jshint                   test                     deploy
less                     uglify-js                express
mocha                    watch                    watch-js
watch-less               watch-server

Pretty cool!

Combining Commands

The above features get us a long way, but sometimes we want to do more than one thing at a time. It turns out that npm supports this too: npm runs the scripts by passing the line to sh. This allows us to combine commands just as we can on the command line.


Let's say that I want to use browserify to pack my Javascript files into a bundle and then minify the bundle with uglifyjs. I can do this by piping (|) the output from browserify into uglifyjs. Simple as pie!

  // Reactify tells browserify to handle Facebook's extended React syntax
  "scripts": {
    "build-js": "browserify -t reactify app/js/main.js | uglifyjs -mc > static/bundle.js"
  },
  // Add the needed dependencies
  "devDependencies": {
    "browserify": "^3.14.0",
    "reactify": "^0.5.1",
    "uglify-js": "^2.4.8"
  }


Another use case for combining commands is to run a command only if the previous command succeeds. This is easily done with and (&&). Or (||), naturally, also works.

  "scripts": {
    // Run build-less only if build-js succeeds
    "build": "npm run build-js && npm run build-less",
    "build-js": "browserify -t reactify app/js/main.js | uglifyjs -mc > static/bundle.js",
    "build-less": "lessc app/less/main.less static/main.css"
  }

Here I combine two scripts declared in my package.json into the command build. Running scripts from other scripts is different from running binaries; they have to be prefixed with npm run.


Sometimes it is also nice to be able to run multiple commands concurrently. This is easily done by using & to run them as background jobs.

  "scripts": {
    // Run watch-js, watch-less and watch-server concurrently
    "watch": "npm run watch-js & npm run watch-less & npm run watch-server",
    "watch-js": "watchify app/js/main.js -t reactify -o static/bundle.js -dv",
    "watch-less": "nodemon --watch app/less/*.less --ext less --exec 'npm run build-less'",
    "watch-server": "nodemon --ignore app --ignore static server.js"
  },
  // Add required dependencies
  "devDependencies": {
    "watchify": "^0.6.2",
    "nodemon": "^1.0.15"
  }

The above scripts contain a few interesting things. First of all, watch uses & to run three watch jobs concurrently. When the command is killed, by pressing Ctrl-C, all the jobs are killed, since they are all run by the same parent process.

watchify is a way to run browserify in watch mode. watch-server uses nodemon in the standard way and restarts the server whenever a relevant file has changed.

watch-less uses nodemon in a less well-known way. It runs a script whenever any of the Less files changes, compiling them into CSS with npm run build-less. Please note that the option --ext less is required for this to work. --exec is the option that allows nodemon to run external commands.

Complex Scripts

For more complex scripts I prefer to write them in Bash, but I usually include a declaration in package.json to run the command. Here, for example, is a small script that deploys the compiled assets to Heroku by adding them to a deploy branch and pushing that branch to Heroku.


#!/bin/bash
set -o errexit # Exit on error

git stash save -u 'Before deploy' # Stash all changes, including untracked files
git checkout deploy
git merge master --no-edit # Merge in the master branch without prompting
npm run build # Generate the bundled Javascript and CSS
if git commit -am Deploy; then # Commit the changes, if any
  echo 'Changes Committed'
fi
git push heroku deploy:master # Deploy to Heroku
git checkout master # Checkout master again
git stash pop # And restore the changes

Add the script to package.json so that it can be run with npm run deploy.

  "scripts": {
    "deploy": "./bin/"
  }


npm is a lot more than a package manager for Node. By configuring it properly I can handle most of my scripting needs.

Configuring start and test also sets me up for integration with SaaS providers such as Heroku and Travis CI. Another good reason to do it.

Sunday, January 19, 2014

Clean Grunt

Grunt is the tool of choice for many client-side web projects. But, often, gruntfiles look like a mess. I believe the reason is that many people don't care about keeping them clean.

On top of that, the file is often generated by a tool, such as Yeoman, and not cleaned up afterwards. I happen to think that the gruntfile should be clean, and here is how to do it.

Here is how the project structure looks in development mode. I keep all my client-side code in an app directory and I use Bower to install external components into app/components.


I will use less, watch, concat, uglify, filerev and usemin to optimize it and turn it into this.


The above structure is good because it serves one CSS file and one Javascript file, and everything apart from index.html is named with an MD5 checksum, which allows me to cache everything infinitely!
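The caching claim can be illustrated with a tiny rule (my own illustration, not code from this setup): anything whose name embeds a content checksum can be cached forever, because a change in content produces a new name, while index.html must always be revalidated.

```javascript
// Pick a Cache-Control value based on whether the file name embeds a
// filerev-style hex digest, e.g. main.d41d8cd98f00b204e980.js
function cacheControlFor(filename) {
  var revved = /\.[0-9a-f]{8,}\.(js|css|png|jpg|gif)$/.test(filename);
  // Revved assets never change under the same name: cache for a year.
  // Everything else (index.html) must be revalidated on every request.
  return revved ? 'max-age=31536000' : 'no-cache';
}
```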

Loading External Tasks

Loading tasks in Grunt is done with grunt.loadNpmTasks, but since all dependencies are already declared in package.json there is no need to name them again. Instead we use matchdep to load all Grunt dependencies automatically.

// Load all files starting with `grunt-`
require('matchdep').filterDev('grunt-*').forEach(grunt.loadNpmTasks);

The relevant section in package.json contains these dependencies. All Grunt plugins follow the grunt- naming convention.

 "devDependencies": {
    "bower": "~1.2.8",
    "grunt": "~0.4.2",
    "matchdep": "",
    "grunt-contrib-jshint": "",
    "grunt-contrib-less": "",
    "grunt-contrib-copy": "",
    "grunt-contrib-clean": "",
    "grunt-contrib-watch": "",
    "grunt-express-server": "",
    "grunt-contrib-cssmin": "",
    "grunt-usemin": "",
    "grunt-filerev": "",
    "grunt-contrib-concat": "",
    "grunt-contrib-uglify": ""
  }


I think it is a good idea to run JsHint on all my files, including the Gruntfile, and here is how I configure it.

// JsHint configuration is read from package.json
var pkg = grunt.file.readJSON('package.json');

grunt.initConfig({
    pkg: pkg,

    // JsHint
    jshint: {
        options: pkg.jshintConfig,
        // lint the Gruntfile and all application scripts
        all: ['Gruntfile.js', 'app/scripts/**/*.js']
    },

Newer versions of JsHint can pick up their configuration from package.json, and I take advantage of this so that I don't have a duplicate configuration in the .jshintrc file that is normally added to a generated project.

The relevant section in package.json is defined like this:

 "jshintConfig": {
    "maxparams": 4,
    "maxdepth": 2,
    "maxcomplexity": 6,

I truncated the section for brevity but I kept my favorite configuration options, the ones that deal with complexity and force me to keep my code simple.


As I wrote in CSS Good Practices, I think using a CSS preprocessor is a really good idea, and I use Less in this project. Since Less is a superset of CSS, all I have to do to use it is change the extension from .css to .less and configure Grunt to convert the Less files into CSS. In development mode I like to have the CSS files in the same place I would have put them if I wasn't using Less. To avoid accidentally checking the generated files into source control, I add the following line to .gitignore.

# .gitignore
app/styles/main.css

Here is the configuration for generating a CSS file. I add two targets, one for development and one for release which is compressed.

// Less
less: {
    dev: {
        src: 'app/styles/main.less',
        dest: 'app/styles/main.css'
    },
    release: {
        src: 'app/styles/main.less',
        dest: 'dist/app/styles/main.css',
        options: {
            compress: true
        }
    }
}

As you can see I only name one less file. I think it is a good idea to include all less files via import statements.

// Less files are automatically included and don't generate new requests.
@import 'other-less-file.less';


In development mode I also like to have a file watcher that generates the CSS files automatically when I change a less file. Here is the configuration.

// Watch
watch: {
    // watch:less invokes less:dev when less files change
    less: {
        files: ['app/styles/*.less'],
        tasks: ['less:dev']
    }
}


It is also a good idea to be able to remove all generated files with one command; clean will do that for me.

// Clean
clean: {
    // clean:release removes generated files
    release: [
        'dist',
        '.tmp',
        'app/styles/main.css'
    ]
}

Concat, Uglify and Usemin Prepare

To concatenate and minify the Javascript files I use concat and uglify. But I don't want to have to list the files used in index.html again in the Gruntfile, so I use useminPrepare to pick them up automatically. It is one of two tasks included in grunt-usemin; the other is unsurprisingly called usemin and I will describe it later.

useminPrepare parses HTML files looking for tags that follow a distinct pattern, <!-- build:js outputfile.js -->, and extracts the filenames from the script tags. These files are then injected into the concat and uglify tasks, so there is no need to provide a files configuration for those tasks.

<!-- app/index.html -->

<!-- build:js scripts/main.js -->
<script src="components/jquery/jquery.js" defer></script>
<script src="components/momentjs/moment.js" defer></script>
<script src="scripts/model.js" defer></script>
<script src="scripts/view.js" defer></script>
<script src="scripts/main.js" defer></script>
<!-- endbuild -->
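
The kind of extraction useminPrepare performs on a block like the one above can be sketched in a few lines (my own simplified illustration, not grunt-usemin's actual code):

```javascript
// Find a build:js block in the HTML and collect the script srcs inside it.
function extractBuildBlock(html) {
  var block = /<!--\s*build:js\s+(\S+)\s*-->([\s\S]*?)<!--\s*endbuild\s*-->/.exec(html);
  if (!block) return null;
  var srcs = [];
  var re = /<script src="([^"]+)"/g;
  var m;
  while ((m = re.exec(block[2])) !== null) {
    srcs.push(m[1]);
  }
  // dest comes from the comment, src from the script tags between the markers
  return { dest: block[1], src: srcs };
}
```

The resulting dest/src pairs are what get injected into the concat and uglify configurations below.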
// useminPrepare
useminPrepare: {
    html: 'app/index.html',
    options: {
        dest: 'dist/app'
    }
}

// Concat
concat: {
    options: {
        separator: ';'
    },
    // dist configuration is provided by useminPrepare
    dist: {}
}

// Uglify
uglify: {
    // dist configuration is provided by useminPrepare
    dist: {}
}

There are a few noteworthy things above. useminPrepare.options.dest works in conjunction with the value defined in the build:js comment in the HTML file. I always designate the root directory of the generated code in the Gruntfile and I keep the relative path to the file in the HTML file. I do this because this configuration is reused by the usemin task later, and configuring it this way in useminPrepare keeps things simpler.

Also note that concat and uglify need to have an empty dist property. Otherwise, useminPrepare cannot inject configuration into them.

Running grunt useminPrepare shows the generated configuration.

// Generated concat configuration
{ options: { separator: ';' },
  dist:
   { files:
      [ { dest: '.tmp/concat/scripts/main.js',
          src:
           [ 'app/components/jquery/jquery.js',
             'app/components/momentjs/moment.js',
             'app/scripts/model.js',
             'app/scripts/view.js',
             'app/scripts/main.js' ] } ] } }

// Generated uglify configuration
{ dist:
   { files:
      [ { dest: 'dist/app/scripts/main.js',
          src: [ '.tmp/concat/scripts/main.js' ] } ] } }

Alright, now that we have minified both the CSS and the Javascript, it is time to move the files that don't need minification: the images and HTML files.


// Copy HTML and fonts
copy: {
    // copy:release copies all html and image files to dist,
    // preserving the structure
    release: {
        files: [
            {
                expand: true,
                cwd: 'app',
                src: [
                    '*.html',
                    'images/*'
                ],
                dest: 'dist/app'
            }
        ]
    }
}

Here I use a different configuration for the files. The expand option is the important part. It tells Grunt to copy the files while preserving the directory structure.

OK, now all the files have been moved into their proper place and all that is left is to checksum them and rename all the references.

Filerev, checksumming

filerev is my task of choice for adding the checksum of a file to its name. I use MD5 to checksum all assets (Javascript, CSS and images) with this configuration.

// Filerev
filerev: {
    options: {
        encoding: 'utf8',
        algorithm: 'md5',
        length: 20
    },
    release: {
        // filerev:release hashes (md5) all assets (images, js and css)
        // in the dist directory
        files: [{
            src: [
                'dist/app/scripts/*.js',
                'dist/app/styles/*.css',
                'dist/app/images/*'
            ]
        }]
    }
}


The final task is to change all the references in the HTML and CSS files to use the checksummed filenames and to change the script tags to reference the minified file. usemin is the task for this job.

// Usemin
// Replaces all assets with their revved version in html and css files.
// options.assetsDirs contains the directories for finding the assets
// according to their relative paths
usemin: {
    html: ['dist/app/*.html'],
    css: ['dist/app/styles/*.css'],
    options: {
        assetsDirs: ['dist/app', 'dist/app/styles']
    }
}

The only difficult thing about this is that usemin uses the paths from the files it parses when it searches for assets whose references should be replaced. This means that options.assetsDirs must designate the directories where the parsed files are located. In my case the CSS files are in dist/app/styles and the HTML files are in dist/app. Hoohaah! Only one more thing before we're done: calling all the tasks in order.


I register the release task and tell it to invoke all the other tasks in the correct order.

// Invoked with grunt release, creates a release structure
grunt.registerTask('release', 'Creates a release in /dist', [
    // the order follows the sections above
    'clean:release',
    'jshint',
    'less:release',
    'useminPrepare',
    'concat',
    'uglify',
    'copy:release',
    'filerev:release',
    'usemin'
]);

Example Code

This example comes from a workshop I give. If you are interested in one, send me a note. If you would like to give one yourself, you are welcome to use my example code. I also give a Grunt presentation.

That's all folks!

Designing Programs, Caballo Blanco Style

I enjoy picking up new programming ideas from different sources. It is cool how the mind can take an idea from one domain and naturally fit it into a totally different one. In this case I have picked up some programming tips from Caballo Blanco, the lone ultrarunner from the book Born to Run.

It is a great book even if you are not into running at all and it also contains tips about programming. (Of course this depends on having a warped mind, like I do :)

The text below is from the book but I have altered the wording to put it into a programming context.

Don't fight the code, take what it gives you. If you have a choice between one function or two, make three!

Caballo has spent so many years solving problems that he even has nicknames for them. Some are ayudantes, reliable problems which behave as you expect, problems that look and are straightforward to solve. Others are tricksters, which look like ayudantes but become difficult when you start digging into them. Some are chingón sitos, little bastards, just dying to trip you up.

Think, easy, light, smooth, and fast!

You start with easy, because if that is all you get, that's not so bad! Then, work on light. Make it effortless, like you don't give a shit how big the problem is or how long it is going to take. When you have practiced this so long that you forget you are practicing, you work on getting it smooooth! You don't have to worry about the last one. If you get those three, you'll be fast!

I like this text because it makes it clear that what is important is to focus on the doing and not on being done. We don't need to think about being fast because if we fully focus on the problem at hand, we will automatically be as fast as we can!

Wednesday, December 25, 2013

A Consultant's Wishlist

As a consultant I am called in to solve specific problems and I would like to start working on the problems at hand as soon as I can. But, very often, there are other problems that I have to overcome before I am able to spend my time on the problems I am being paid to solve. Here is my wishlist.

A Reliable Internet Connection

Most people in Sweden have a really good Internet connection, but it is surprising how many big companies don't. Even more surprising is that this is not seen as a major problem. A consultant usually costs at least 100 euro an hour; having us idle because of a slow network is bad business practice.

Some companies even take it further. If the connection is not flaky enough, they make sure there is an even flakier proxy, so that every task requires special work, since many of the tools we use for daily web development are not easily configured to work with a proxy.

Please give me fast, reliable networking without obstructions.

Personal Computer

I want to work with my personal computer. I have configured it to make me as effective as possible and I don't want to be forced to work on another computer just because that is corporate policy. Having my personal computer also allows me to work from wherever I am, at home, on the train, etc. Sure, I can configure the computer that I am given to work the way I want it to work, but that is time spent on a different problem than I was hired to solve.

Software as a Service

I also find it frustrating to have to use legacy tools. Many of them do not work well and I would like to do without them. We are living in the era of Software as a Service; a lot of the best tools are provided as services, and as a consultant I already have accounts for them, so it is only a click away to make me part of the company team.


Email

I have an email address. A remarkable thing about an email address is that it works no matter where I am. I don't want to use corporate email and be part of various groups that I care nothing about. I want to use my own email that I can access from anywhere through a good interface.


Github

If a company is using Github, it means that I can be set up and working on the source code in a matter of minutes.

Apart from being easy to setup and familiar to me, Github has a number of advantages over other solutions.

  • Built-in markdown parsing allowing documentation with simple navigation to other documents, issues, code, pull-requests and commits.
  • Issue tracking seamlessly integrated with version control allowing for easy tracking of issues through the code.
  • Visualization of branches allows developers and other interested parties to see what is being worked on at any time.
  • Pull requests are a simple way for developers to do code reviews and to communicate publicly about code.


Trello

Most teams I work with work (or try to work) with an agile process. This often means that we use a Kanban board of some kind. I have tried a number of them and Trello is the one I like the most. It is lightweight, provides (near) real-time updates to the board, and it has clients for the Web, Android and iOS. This makes it easy to see what is being worked on, and easy to update or add new tasks even when I am not at the computer.

Travis CI

Travis is continuous integration as a service. It is ridiculously easy to set up and it integrates really well with Github. I know that it is easy to set up a Jenkins server to do the same thing, but what is the point when Travis is already up and running?

Campfire or Skype

Campfire and Skype are awesome tools for collaborating teams who want to keep in sync across time and space.

Platform as a Service

I also find it much better if the product being worked on is hosted by a service provider that can be easily accessed from anywhere. Depending on what level of control you need, there are a number of options. I like Heroku and Nodejitsu for hosting Ruby and Node, and they are often all I really need. If I need more control it is easy to switch to Amazon or another IaaS provider.


I want the best for my clients. To do this I want to work with really good tools, from anywhere. I want to work in an environment that keeps me happy and productive. Merry Christmas!

Wednesday, November 06, 2013

References from Habits of a Responsible Programmer

A list of references from my talk, Habits of a Responsible Programmer, at Øredev. First out is the blog post that inspired the talk.


Some books about habits and how our brain works in good and not so good ways.


Steve Yegge's blog post that made me realize it was time to learn to touch type, along with some tools that can make it fun to learn.

Clear Code

It is worth learning Smalltalk, just to understand the book, Smalltalk Best Practice Patterns.

Programming Techniques

SICP is a classic and it deserves to be read but Concepts, Techniques, and Models of Computer Programming is just as good if not better.

Read Code

Source code to a number of projects with beautiful code.

  • CloudFoundry - beautiful Ruby code, open source Platform as a Service.
  • Levelup - Node wrapper for the LevelDB key-value store.


A good short article by Martin Fowler on the tradeoffs involved in writing explicit or implicit code.


The book by Fowler is a timeless reference and required reading for anyone serious about programming. Kerievsky's book gives more in-depth examples, and finally Reg's blog post discusses reasons for not refactoring just to please your own ego.

Simple vs. Easy

Great talk!


I like Sandi Metz's way of explaining why testing is important.


Documentation matters, but keep it simple.


The original is no longer available, but the videos are uploaded to YouTube.


Generating Code

Tools for generating code. Here documents are the simplest possible way, but if you want to generate multiple files it is better to use Thor or Yo.


The ultimate book on continuous delivery, a must read for anyone interested in automation. And, that should be everyone!


Wonderful book about different stereotypes of people.


Two articles on estimation by Martin Fowler and Dan North.

Life, the Universe, and Everything

Books about happiness, the mind, and other things.

Thursday, October 17, 2013

Tunneling to localhost via SSH

Sometimes, when working on a new web site, I have customers who want to see the site while it is still in development. One way of doing this is to have alternative demo servers where all we do is serve up our work in progress. This works fine most of the time, but sometimes I just want to serve directly from my local machine.


One easy way to do this is to use localtunnel. localtunnel is a Ruby gem that is meant for exactly this purpose. Here's how:

$ gem install localtunnel
$ localtunnel --key ~/.ssh/ 80
  This localtunnel service is brought to you by Twilio.
  Port 80 is now publicly accessible from ...

It is now possible to access localhost:80 via the displayed URL. Simple as pie!

But what if you don't like pie? Or Ruby, for that matter. Or, what if you don't want to serve your secret data through another company (Twilio in this case, which graciously provides the hosting for free)?

Well, you are in luck; it is easy to set up your own tunnel via SSH, provided you have access to a server that is reachable from the Internet. And everybody has access to such a server via Amazon EC2 or a similar service. Make sure the server is accessible on all high ports. On Amazon this is done by opening all incoming ports above 1024 in a security group.

Setting up a tunnel via SSH

In SSH lingo a tunnel from an external server to my local server is called a reverse proxy. Here is how to set one up. First you need to configure the remote ssh daemon to allow setting up remote interfaces. A remote interface is one that can be accessed from a server other than localhost, which is what we want.

Here is how to do it; the server used below is my AWS server.

    # Login to the remote server
    $ ssh -i ~/.ssh/id_rsa

    # Edit the sshd configuration
    $ sudo vi /etc/ssh/sshd_config
    # Find the line #GatewayPorts no
    # Change it to GatewayPorts yes
    # Save and exit

    # Restart the daemon
    $ sudo /etc/init.d/sshd restart
    Stopping sshd:                                             [  OK  ]
    Starting sshd:                                             [  OK  ]

    # Exit the shell and return to your local machine
    $ exit

Now you are good to go. Let's assume you have a server running on port 3000 that you want to display to the world.

    $ ssh -i ~/.ssh/id_rsa -N -R *:0:localhost:3000
    Allocated port 34070 for remote forward to localhost:3000

Now you can surf to the remote server on the allocated port (34070 in the example above) and it will access your local machine. :)

When you stop the command (Ctrl-C) the tunneling will stop.

Command explanation

-i  identity file (private key)
-N  Do not execute a remote command, just setup the port forwarding
-R  *         All interfaces should be forwarded
    0         Open forwarding on any available port (34070 in the example)
    localhost Forward to localhost
    3000      The local port to forward to.
ec2-user      The user on the remote server
              The remote server

If you want to simplify things for yourself, add the following script to a bin directory.

    #!/bin/bash
    # Script: tunnel

    set -o errexit

    # default to port 3000
    port=${1:-3000}

    ssh -i ~/.ssh/id_rsa -N -R \*:0:localhost:$port

Now all you have to do to enable remote access is run tunnel 80, or whatever port you want to display.

Escaping the Proxy Jail

The story could have ended here, but some people, trapped behind corporate firewalls, may not be allowed to use ssh. The traffic is blocked by a corporate proxy server. Well, there is a happy ending for you too and it is fittingly called corkscrew. It allows you to screw yourself out of the corporate jail and into the world.

Here is how to do it on Ubuntu; on OS X use brew instead.

    # Install corkscrew
    $ sudo apt-get install corkscrew

    # Edit your ~/.ssh/config, add (proxy.example.com is a placeholder
    # for your corporate proxy host)
    Host *
      ProxyCommand corkscrew proxy.example.com 8080 %h %p

    # If you need to authenticate to get through the proxy the line should read
    Host *
      ProxyCommand corkscrew proxy.example.com 8080 %h %p ~/.ssh/proxyauth

    # And you need to add username:password to ~/.ssh/proxyauth
    $ echo "proxyusername:proxypassword" > ~/.ssh/proxyauth

%h and %p are filled in by ssh with the host and port of your destination.

Freedom's just another word for nothing left to lose,
Nothin' don't mean nothin', honey, if it ain't free.
Yeah, feeling good was easy, Lord, when he sang the blues,
You know feeling good was good enough for me,
Good enough for me and my Bobby McGee.

Thursday, June 20, 2013

Solving the Expression Problem in Javascript

I just watched a great presentation by Daniel Spiewak called Living in a Post-Functional World. I watched it mainly because I had heard it was a great presentation on how to deal with modules, which it was. Modules are a concept just as important in Javascript as in Scala.

But at the end of the presentation Daniel talks about the Expression Problem as defined by Philip Wadler.

Here it is as summarized by Daniel Spiewak:

The Expression Problem

  • Define a datatype by cases
  • Add new cases to the datatype
  • Add new functions over the datatype
  • Don't recompile
  • Good Luck!

Functional Style

If we try to solve the problem in a functional style, we get something like this (also from Daniel's presentation).

sealed trait Expr
case class Add(e1: Expr, e2: Expr) extends Expr
case class Sub(e1: Expr, e2: Expr) extends Expr
case class Num(n: Int) extends Expr

def value(e: Expr): Int = e match {
  case Add(e1, e2) =>
    value(e1) + value(e2)

  case Sub(e1, e2) =>
    value(e1) - value(e2)

  case Num(n) => n
}

The functional style uses pattern matching. We see that it is easy to add new functions, such as a toString that returns a string representation of the expression, without changing any existing code. But if we add a new class, such as Mul, we have to change all the existing functions.

Here are the main points of this solution:

  • Dumb cases
  • Every function enumerates full algebra
  • Very easy to add new functions
  • Very difficult to add new cases

We get an open set of functions and a closed set of cases!
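The same functional shape can be sketched in Javascript by tagging plain objects with a type and switching on it in a single function. This is my own sketch, not from Daniel's presentation; the num, add, and sub helpers are just hypothetical constructors for the tagged objects.

```javascript
// Dumb cases: plain tagged objects with no behavior of their own.
var num = function(n)      { return { type: 'Num', n: n }; };
var add = function(e1, e2) { return { type: 'Add', e1: e1, e2: e2 }; };
var sub = function(e1, e2) { return { type: 'Sub', e1: e1, e2: e2 }; };

// One function enumerates the full algebra;
// adding a Mul case means editing this function.
function value(e) {
  switch (e.type) {
    case 'Add': return value(e.e1) + value(e.e2);
    case 'Sub': return value(e.e1) - value(e.e2);
    case 'Num': return e.n;
    default: throw new Error('Unknown case: ' + e.type);
  }
}

value(sub(num(5), add(num(1), num(2)))); // returns 2
```

Adding a new function like toString is just another switch over the same tags, but every such function must know every case, exactly as in the Scala version.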

Object-Oriented Style

If we try to solve the problem in an object-oriented style, we get something like this (again from Daniel's presentation).

sealed trait Expr {
  def value: Int
}

case class Add(e1: Expr, e2: Expr) extends Expr {
  def value = e1.value + e2.value
}
case class Sub(e1: Expr, e2: Expr) extends Expr {
  def value = e1.value - e2.value
}
case class Num(n: Int) extends Expr {
  def value = n
}

The object-oriented solution uses subtype polymorphism. We see that it is easy to add new classes, such as Mul, but if we try to add a new function, we have to change all the existing classes.

Here are the main points:

  • Smart cases, i.e. Objects
  • Every case enumerates all functions
  • Very easy to add new cases
  • Very difficult to add new functions

We get a closed set of functions and an open set of cases!

Dynamic Style

Now let's solve it with Javascript in a dynamic style. The solution looks a lot like the subtype polymorphic solution above.

function Add(e1, e2) {
    this.e1 = e1;
    this.e2 = e2;
}
Add.prototype.value = function() { return this.e1.value() + this.e2.value(); };

function Sub(e1, e2) {
    this.e1 = e1;
    this.e2 = e2;
}
Sub.prototype.value = function() { return this.e1.value() - this.e2.value(); };

function Num(n) {
    this.n = n;
}
Num.prototype.value = function() { return this.n; };

Just as in the polymorphic solution, it is easy to add a new class.

// Adding a new class
function Mul(e1, e2) {
    this.e1 = e1;
    this.e2 = e2;
}
Mul.prototype.value = function() { return this.e1.value() * this.e2.value(); };

But, what about adding new functions? It turns out that this is just as easy, because of the dynamic nature of Javascript. We just add them to the prototypes.

// Adding new functions to existing prototypes
Add.prototype.toString = function() {
  return '(' + this.e1.toString() + ' + ' + this.e2.toString() + ')';
};
Sub.prototype.toString = function() {
  return '(' + this.e1.toString() + ' - ' + this.e2.toString() + ')';
};
Num.prototype.toString = function() {
  return this.n.toString();
};
Mul.prototype.toString = function() {
  return '(' + this.e1.toString() + ' * ' + this.e2.toString() + ')';
};

Now getting a string representation of an expression is as simple as:

var x = new Num(1);
var y = new Num(2);
var z = new Add(x, y);
var w = new Sub(x, y);
var e = new Mul(z, w);

e.toString(); // returns '((1 + 2) * (1 - 2))'

Well, isn't that nice!
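The post only demonstrates toString, so for completeness here is the evaluation side end-to-end, as a condensed, self-contained sketch of the code above (value() recurses via e1.value(), which the nested constructors rely on):

```javascript
// Condensed recap: every case carries its own value function...
function Num(n)      { this.n = n; }
function Add(e1, e2) { this.e1 = e1; this.e2 = e2; }
function Sub(e1, e2) { this.e1 = e1; this.e2 = e2; }
function Mul(e1, e2) { this.e1 = e1; this.e2 = e2; }

Num.prototype.value = function() { return this.n; };
Add.prototype.value = function() { return this.e1.value() + this.e2.value(); };
Sub.prototype.value = function() { return this.e1.value() - this.e2.value(); };
Mul.prototype.value = function() { return this.e1.value() * this.e2.value(); };

// ...so the same expression tree that printed above also evaluates.
var e = new Mul(new Add(new Num(1), new Num(2)),
                new Sub(new Num(1), new Num(2)));
e.value(); // returns -3
```

New cases and new functions both arrive without touching existing code, which is the whole point of the expression problem.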
Sometimes I feel like I don't have a problem
I don't ever feel like I did before
But take me to a place I love, a dynamic place!
I don't ever feel like I did before
But take me to a place I love, a dynamic place, yeah, yeah, yeah!

Misquoting Red Hot Chili Peppers :)