Tuesday, October 04, 2011

7 Reasons to go to Øredev 2011

I am looking forward to Øredev 2011 more than I have looked forward to any of the previous ones. The reason is that Øredev has finally become a leading conference for dynamic programming languages.

Øredev has always been strong in the enterprise sphere, led by the Java, .Net, and mobile tracks, but it has been weaker in the area of dynamic programming languages. Last year was better than before, but this year is going to be great. Here are some speakers that you should not miss.

Yehuda Katz

Yehuda Katz was one of the driving forces behind the great Rails 3 refactoring that made sure Rails will remain the most productive web development environment for many years to come. Yehuda has just released a new book, Rails 3 in Action. It’s the first book about Rails 3.1, covering the awesome Asset Pipeline, streaming, and reversible migrations.

Yehuda has also been involved with jQuery and has written jQuery in Action.

He will be speaking about Rails and Sproutcore.

Felix Geisendörfer

Felix Geisendörfer is a core Node.js developer and he will of course be talking about Node. Node.js is a set of Javascript libraries that runs on top of the Google V8 virtual machine. What is interesting about Node, apart from being server-side Javascript, is that it uses asynchronous programming as the default. This default makes Node extremely interesting for developing solutions involving multiple open connections, such as websockets, and for streaming video and audio. Node is definitely part of the future of the web. I have written extensively about it in the past.

Corey Haines

Corey Haines is a legend in the TDD community. He is also famous for his Code Retreats. He will give one workshop, Improving your TDD, and two presentations, Fast Ruby on Rails Tests and Come introduce yourself to the concepts and fundamental technique behind TDD.

Ilya Grigorik

Ilya Grigorik is the founder of PostRank, which was recently acquired by Google. He is now working on Social Analytics at Google. At PostRank he used Ruby to perform analysis on very large amounts of data. While doing this he developed Goliath, a high-performance non-blocking web server using Ruby 1.9 and fibers. He will be talking about Goliath, Concurrency and Machine Learning.

I can recommend that you follow Ilya on Twitter since his tweets have the highest signal-to-noise ratio I know of.

And, finally, make sure to check out Vim Golf, a really cool way to become a Vim wizard.

Trevor Burnham

CoffeeScript is the new way to write Javascript without actually writing it :). CoffeeScript is an elegant language, created by Jeremy Ashkenas, with features from Ruby and Python. The language is very pure, removes a lot of clutter, and compiles into good, efficient Javascript. Trevor Burnham has written a book on the subject, CoffeeScript: Accelerated JavaScript Development, and he will be giving two presentations about it: CoffeeScript: Design Patterns for the New JavaScript and Transforming Data into Pixels: Visualization with Canvas and CoffeeScript.

Charles Nutter

Charles Nutter is the man behind JRuby. He has also created another language called Mirah, which he will be talking about in Have you tried Mirah yet?

While doing all this he has obviously learned a thing or two about the JVM and about bytecode. Who could be better to teach us about the internals of the JVM? Charles will be giving another talk about this in What the JVM Does With your Bytecode when Nobody’s Looking.

Simon Peyton Jones

Even though this list of people is mostly about dynamic programming languages, it has to include Simon Peyton Jones.

Haskell is one of the most statically typed languages there is. It is, probably, also the most elegant programming language in the world. It is purely functional, has lazy evaluation, pattern matching, and currying by default. Even if you never use Haskell in a real-life project learning Haskell will be worth your while. If you want to get a good introduction to Haskell I can highly recommend Programming in Haskell by Graham Hutton.


As you can see, this year’s Øredev is looking better than ever before, and I have only included a select part of it in this post. Missing it should be considered professional misconduct!

Tuesday, August 09, 2011

jQuery Changes From 1.4.2 to 1.6

jQuery is a powerful library, and it is possible to get by without using any of the new features. That’s why many of us just upgrade to a new version assuming that it mostly contains bug and performance fixes. This is not the case. jQuery 1.4.2 was released in February 2010, and it has been one and a half years and a number of releases since then.

I was going to write about the changes in 1.5 and 1.6, but I have noticed that many people have missed some of the new features of the 1.4 releases. And by the way, all examples are written in CoffeeScript.

Selected changes from 1.4.2+


Most people know about live() and how it can be used to attach listeners to elements that don’t yet exist in the DOM. live() has a younger brother, born in 1.4.2, called delegate(). delegate() is more powerful: it gives you more precision in where to attach the listener.

$('.main-content').find('section').delegate 'p', 'click', -> 
  $(this).addClass 'highlight' 

As you can see above, delegate(), unlike live(), can be chained like normal jQuery calls.

jQuery.now(), jQuery.type() and jQuery.parseXML()

Nothing special here, just some utilities that are good to know about.

$.now() is (new Date()).getTime() 
$.type('hello') is 'string' 
xml = """ 
<rss version='2.0'> 
<title>RSS Title</title> 
</rss> 
""" 
xmlDoc = $.parseXML xml 
($(xmlDoc).find 'title').text() is 'RSS Title' 

All these utilities are interesting, but the truly good thing that came out of jQuery 1.5+ is Deferred().


With the release of jQuery 1.5 the internal implementation of $.ajax was changed to use Deferred(), and, even better, the implementation was deemed so useful that it became part of the public API.

Here is how you can use it via the $.ajax method.

Declare a function hex that calls the /hex url on the server, which will return a hex value between 00 and FF.

  hex = -> 
    $.ajax { url: '/hex' } 

Call the function multiple times, and you get a new Deferred back for each call. Notice the lack of handlers for the calls.

  red = hex() 
  green = hex() 
  blue = hex() 

Attach handlers to each of the Deferreds to do something useful with the returned value. Notice the use of done instead of success; success is deprecated and will be removed in 1.8.

  red.done (hh)-> 
    $("#r").css 'background-color', "##{hh}0000" 
  green.done (hh)-> 
    $("#g").css 'background-color', "#00#{hh}00" 
  blue.done (hh)-> 
    $("#b").css 'background-color', "#0000#{hh}" 

I am not limited to adding one handler; I can attach as many as I like. Here I attach another one for logging the success of the red call.

  red.done (hh) -> 
    console.log "red is #{hh}" 

If this was all there was to it, I would be happy, but it is not. By using the $.when method, I get a new Deferred object that orchestrates multiple Deferreds in a very simple way. Let me create a color-Deferred that waits for the others to return and then calls a new callback when they are all done.

  color = $.when(red, green, blue) 
  color.done (r, g, b) -> 
    $("#color").css 'background-color', "##{r[0]}#{g[0]}#{b[0]}" 

The results from the requests are given in the same order as the Deferred objects are passed in. Each result is an array with three elements, [data, 'success', jqXHR].

You can see the example, a simple Sinatra app, in action on Heroku, and the source code can be found on Github.

Very nice! Apart from what the examples have shown, there are more methods for working with Deferreds: methods for creating, jQuery.Deferred(), resolving, resolve(), resolveWith(), and rejecting, reject(), rejectWith().

To attach handlers to the events, you can use done() for resolve, fail() for reject, and always() for both resolve and reject. You can also use then(doneCallback, failCallback) to attach both a done and a fail handler with one call.
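A toy implementation makes these semantics concrete. This is not jQuery's code, just a sketch in plain Javascript that mirrors the resolve/done/fail/always/then behaviour described above, including the detail that handlers attached after resolution still fire:

```javascript
// A toy deferred, only to illustrate the semantics described above.
// This is NOT jQuery's implementation; the names mirror the jQuery API.
function Deferred() {
  var state = 'pending', args = null,
      doneCallbacks = [], failCallbacks = [];
  // Call and empty a list of pending callbacks
  function flush(callbacks) {
    callbacks.forEach(function (cb) { cb.apply(null, args); });
    callbacks.length = 0;
  }
  return {
    state: function () { return state; },
    resolve: function () {
      if (state !== 'pending') return this;   // can only settle once
      state = 'resolved'; args = arguments; flush(doneCallbacks);
      return this;
    },
    reject: function () {
      if (state !== 'pending') return this;
      state = 'rejected'; args = arguments; flush(failCallbacks);
      return this;
    },
    done: function (cb) {
      if (state === 'resolved') cb.apply(null, args);   // late handlers fire at once
      else if (state === 'pending') doneCallbacks.push(cb);
      return this;
    },
    fail: function (cb) {
      if (state === 'rejected') cb.apply(null, args);
      else if (state === 'pending') failCallbacks.push(cb);
      return this;
    },
    always: function (cb) { return this.done(cb).fail(cb); },
    then: function (doneCb, failCb) { return this.done(doneCb).fail(failCb); }
  };
}

// Usage: handlers added after resolution still fire.
var d = Deferred();
d.done(function (v) { console.log('got ' + v); });
d.resolve(42);                                      // -> got 42
d.done(function (v) { console.log('late ' + v); }); // -> late 42
```

The key property, which the real Deferred shares, is that a deferred settles exactly once and remembers its arguments, so attaching a handler is safe at any time.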

Deferreds are really cool, check them out!

Sunday, June 05, 2011

Tips from Rails Anti-Patterns

Another good Ruby book is out, Rails Anti-Patterns. The book is loaded with good tips on everything from following the Law of Demeter to cleaning up your views with the use of helper methods.

Here are some things I picked up from the book.

delegate can take a :prefix argument

The delegate method from active_support is used for delegating calls to another object without having to write out the full delegating methods. It can take a prefix option to customize the name of the delegating methods.

class Address
  attr_accessor :street, :zip
end

class Person
  attr_reader :address
  # :prefix => true results in the model name being used as prefix
  delegate :street, :zip, :to => :address, :prefix => true
  # @person.address_street, @person.address_zip
end

class Person
  attr_reader :billing_address, :delivery_address
  # :prefix => string uses the string as prefix
  delegate :street, :zip, :to => :delivery_address, :prefix => 'delivery'
  # @person.delivery_street, @person.delivery_zip
  delegate :street, :zip, :to => :billing_address, :prefix => 'billing'
  # @person.billing_street, @person.billing_zip
end

Transaction Scope

The code executed in the ActiveRecord callbacks runs in the same transaction as the actual call to save, create, update, or delete. Knowing this helps to eliminate unnecessary explicit transactions.

# Using a before filter
class Drink < ActiveRecord::Base
  before_create :remove_ingredients_from_bar

  def remove_ingredients_from_bar
    ingredients.each do |ingredient|
      # remove the ingredient from the bar
    end
  end
end

# Is better than using an explicit transaction
class Drink < ActiveRecord::Base
  def create_drink
    transaction do
      remove_ingredients_from_bar
      save
    end
  end
  # remove_ingredients_from_bar as above
end

Association Methods

It is possible to add methods directly on the ActiveRecord associations. This is especially handy if the method uses information from both sides of the relation.

class Drink < ActiveRecord::Base
  # has a :minimum_drinking_age field
end

class Customer < ActiveRecord::Base
  # has an :age field

  has_many :drinks do
    def allowed
      # proxy_owner is the object defining the relation, Customer
      where(['minimum_drinking_age < ?', proxy_owner.age])
    end
  end
end

When to make a model active

If there is no user interface for adding, removing, or managing data, there is no need for an active model. A denormalized column populated by a hash or array of possible values is fine.

This is really just an application of the KISS principle, Keep It Simple, Stupid, but I have never seen it as clearly described before reading this book.

Haml []

A nice feature of Haml that I didn’t know about, is the [] operator. When given an object, such as [record], [] acts as a combination of div_for and content_for, outputting a tag with the id and class attributes set appropriately.

-# This Haml
%span[@team]
<!-- Results in this HTML -->
<span class='team' id='team_1'></span>

RESTful actions

When using resources in Rails there are seven methods that are used.

index, create, show, update, destroy, edit, and new. The first five naturally map to get(collection), post(collection), get(singular), put(singular), and delete(singular), but what isn’t as obvious is that:

The new and edit actions are really just different ways of representing the show action.

This is of course obvious when you think about it, but once again Chad and Tammer have written it down in plain, simple English.

Rake Tasks

How should you treat your application-specific Rake tasks in order to test them easily? Once again the solution is very simple.

Write the domain specific code as a class method on the appropriate model associated with the task.

Then all you have to do is call the method from the task.

task :fill_bar_with_ingredients do
  # call the class method on the appropriate model
  Bar.fill_with_ingredients
end

It is not always appropriate to add this functionality to the existing models. This is a clue that another model is needed in your domain.

task :fill_bar_with_ingredients do
  # delegate to a new model; BarFiller is a made-up name
  BarFiller.fill
end

Database Index

Since most applications have far more reads than writes, you should add an index to every field that appears in a WHERE clause or an ORDER BY clause. You should also add indexes for every combination of fields that are combined with AND.

As always, don’t follow this advice blindly; if a table only has three rows then perhaps the index is overkill…


Apart from these tips, there is a ton of other useful information, making this book a must-read if you are doing Rails development.

Wednesday, May 25, 2011

Ruby, an Exceptional Language

Based on the book Exceptional Ruby by Avdi Grimm, I have developed a strategy for how I should deal with exceptions in Ruby.

Being a very dynamic language, Ruby allows very flexible coding techniques. Exceptions are not an exception :).

When I am developing a library in Ruby I typically create one Error module and one StdError class. The Error module is a typical tag module and does not contain any methods.

Tag Module

# Tag module for the Tapir library 
module Tapir
  module Error
  end
end

The reason for the tag module is that I can use it to tag exceptions occurring inside my library without having to wrap them in a nested exception.

module Tapir
  class Downloader
    def self.get url
      HTTP.get url
    rescue StandardError => error   # Rescue the error 
      error.extend Tapir::Error     # Namespace the error by tagging it with ::Tapir::Error 
      raise                         # And raise it again 
    end
  end
end

# Client usage 
begin
  Tapir::Downloader.get 'http://non.existent.url/' 
rescue Tapir::Error => error
  puts "Stupid tapir, gave me error #{error.message}" 
end

This is beautiful. I am scoping an internal error as my own. Since Ruby is dynamic there is no need to declare a new class that wraps all the methods in the StandardError; I have access to them anyway. Duck typing for the win!

A Nested Exception Class

In some cases the tag module is not enough. Perhaps the exception was not caused by another exception. In that case I need a real class, since it is not possible to raise modules. But while I am at it I usually make the class a nested exception, in order to simplify wrapping of other exceptions if the need comes up. This is how I do that.

module Tapir
  # I usually call the class `StdError` since it prevents the user of 
  # the library from rescuing the global `StandardError`. 
  class StdError < StandardError 
    include Error            # Include the Error tag module, so instances are tagged 
    attr_reader :original    # Add an accessor for the original, if one exists 

    # Create the error with a message and an original that defaults to 
    # the exception that is currently active, in this thread, if one exists 
    def initialize(msg, original=$!) 
      super(msg) 
      @original = original 
    end
  end
end

# Client Usage 
begin
  Tapir::Downloader.get 'http://non.existent.url/' 
rescue Tapir::Error => error      # rescue the tag module 
  puts "Bad tapir #{error.message}, due to #{error.original.message}" 
end

# or if I want to be more specific 
begin
  Tapir::Downloader.get 'http://non.existent.url/' 
rescue Tapir::StdError => error   # rescue the specific error 
  puts "Bad tapir #{error.message}, due to #{error.original.message}" 
end

Notice that I don’t have to wrap the exception explicitly, since I default the original to the last error, which is stored in $!.

Now, the only reason for me to create a Tapir::StdError directly, apart from misuse of my library, is if I want to add additional information to the exception that already occurred. In that case I may also want to extend Tapir::StdError and create an exception with additional fields.

module Tapir
  # Create a specific exception to add more information for the client 
  class TooOldError < StdError
    attr_reader :age, :max_age
    def initialize(msg, original=$!, age, max_age) 
      super(msg, original) 
      @age, @max_age = age, max_age
    end
  end
end

# Client usage 
begin
  # some call that may raise a Tapir::TooOldError
rescue Tapir::TooOldError => error
  # Use the specific error properties 
  puts "Hey, you are #{error.age}, that is too damn old!" 
end

Throw – Catch

Ruby also has an alternative to raise and rescue called throw and catch.

They should not be used as an alternative to exceptions; instead they are escape continuations that should be used to escape from nested control structures across method calls. Powerful! Here is an example from Sinatra.

# Here is the throw 
    # Pass control to the next matching route. 
    # If there are no more matching routes, Sinatra will 
    # return a 404 response. 
    def pass(&block) 
      throw :pass, block 
    end 

# and here is where it is caught 
    def process_route(pattern, keys, conditions) 
      catch(:pass) do 
        conditions.each { |cond| 
          throw :pass if instance_eval(&cond) == false } 
        # ... 
      end 
    end 

# Allowing usage such as 
  get '/guess/:who' do 
    pass unless params[:who] == 'Frank' 
    'You got me!' 
  end 

  get '/guess/*' do 
    'You missed!' 
  end 


Wrap up

This is how I use exceptions in Ruby now, thanks to ideas from the book. Other good ideas from the book are the three guarantees:

  • The weak guarantee, if an exception is raised, the object will be in a consistent state.
  • The strong guarantee, if an exception is raised, the object will be left in its initial state.
  • The nothrow guarantee, no exceptions will be raised by this method.

And a nice way of categorizing exceptions based on three different usages by the client. (My categories are not exactly the same as Avdi’s.)

  • User Error, the client has used the library wrong.
  • Internal Error, something is wrong with the library. We are looking into the problem…
  • Transient Error, something is not working right now, but the same call may succeed in a while. It is a good idea to provide a period after which the call will probably succeed, so the client knows when to try again.

It is a great book which contains a lot more information than I covered here. Get it, it is well worth the money.

Sunday, May 15, 2011

A Not Very Short Introduction To Node.js

Node.js is a set of asynchronous libraries, built on top of the Google V8 Javascript Engine. Node is used for server-side development in Javascript. Do you feel the rush of the 90's coming through your head? It is not the revival of LiveWire; Node is a different beast. Node is a single-threaded process, focused on doing networking right. Right, in this case, means without blocking I/O. All the libraries built for Node use non-blocking I/O. This is a really cool feature, which allows the single thread in Node to serve thousands of requests per second. It even lets you run multiple servers in the same thread. Check out the performance characteristics of Nginx and Apache, which utilize the same technique.

Concurrency x Requests

The graph for memory usage is even better.

Concurrency x Memory

Read more about it at the Web Faction Blog.

OK, so what's the catch? The catch is that all code that does I/O, or anything slow at all, has to be called in an asynchronous style.

// Synchronous 
var result = db.query("select * from T"); 
// Use result 

// Asynchronous 
db.query("select * from T", function (result) { 
    // Use result 
}); 

So, all libraries that deal with I/O have to be re-implemented in this style of programming. The good news is that even though Node has only been around for a couple of years, there are more than 1800 libraries available. The libraries are of varying quality, but the popularity of Node shows good promise for high-quality libraries for anything that you can imagine.


Node is definitely not the first of its kind. The non-blocking select() loop, that is at the heart of Node, dates back to 1983.

Twisted appeared in Python (2002) and EventMachine in Ruby (2003).

This year a couple of newcomers appeared.

Goliath, which builds on EventMachine, and uses fibers to allow us to program in a synchronous style even though it is asynchronous under the hood.

And, the Async Framework in .Net, which enhances the compiler with the keywords async and await to allow for very elegant asynchronous programming.

Get Started

This example uses OS X as the example platform; if you use something else you will have to google for instructions.

# Install Node using Homebrew 
$ brew install node
==> Downloading http://nodejs.org/dist/node-v0.4.7.tar.gz
######################################################################## 100.0% 
==> ./configure --prefix=/usr/local/Cellar/node/0.4.7 
==> make install
==> Caveats
Please add /usr/local/lib/node to your NODE_PATH environment variable to have node libraries picked up. 
==> Summary
/usr/local/Cellar/node/0.4.7: 72 files, 7.5M, built in 1.2 minutes

When installed you have access to the node command. When invoked without arguments, it starts a REPL.

$ node
> function hello(name) {
... return 'hello ' + name; 
... }
> hello('tapir') 
'hello tapir' 

When invoked with a script it runs the script.

// hello.js 
setTimeout(function() { 
  console.log('hello'); 
}, 2000); 

$ node hello.js   # prints 'hello' after two seconds


As I mentioned above, Node is focused on networking. That means it should be easy to write networking code. Here is a simple echo server.

// Echo Server 
var net = require('net'); 
var server = net.createServer(function(socket) { 
    socket.on('data', function(data) { 
        socket.write(data);   // echo the data back 
    }); 
}); 
server.listen(4000); 

And here is a simple HTTP server.

// HTTP Server 
var http = require('http'); 
var web = http.createServer(function(request, response) { 
  response.writeHead(200, { 
    'Content-Type': 'text/plain' 
  }); 
  response.end('Tapirs are beautiful!\n'); 
}); 
web.listen(4001); 

Quite similar. A cool thing is that the servers can be started from the same file, and Node will happily serve both HTTP and echo requests from the same thread without any problems. Let's try them out!

# curl the http service 
$ curl localhost:4001 
Tapirs are beautiful! 
# use netcat to send the string to the echo server 
$ echo 'Hello beautiful tapir' | nc localhost 4000 
Hello beautiful tapir


Node comes with a selection of built-in modules. Ryan Dahl says that they try to keep the core small, but even so the built-in modules cover a lot of useful functionality.

  • net - contains tcp/ip related networking functionality.
  • http - contains functionality for dealing with the HTTP protocol.
  • util - holds common utility functions, such as log, inherits, pump, ...
  • fs - contains filesystem related functionality, remember that everything should be asynchronous.
  • events - contains the EventEmitter that is used for dealing with events in a consistent way. It is used internally but it can be used externally too.

An example

Here is an example of a simple module.

// module tapir.js 
// require another module 
var util = require('util'); 
function eat(food) { 
  util.log('eating ' + food); 
} 
// export a function 
exports.eat = eat; 

As you can see it looks like a normal Javascript file, and it even looks like it has global variables. It doesn't. When a module is loaded it is wrapped in code similar to this.

var module = { exports: {}}; 
(function(module, exports){ 
  // module code from file 
})(module, module.exports); 

As you can see the code is wrapped in a function, and an empty object with an exports property is passed into it. This is used by the file to export only the functions that it wants to publish.

The require function works in concert with the module, and it returns the exported functions to the caller.

Node Package Manager, npm

To allow simple handling of third-party packages, Node uses npm. It can be installed like this:

$ curl http://npmjs.org/install.sh | sh

And used like this:

$ npm install -g express
mime@1.2.1 /usr/local/lib/node_modules/express/node_modules/mime
connect@1.4.0 /usr/local/lib/node_modules/express/node_modules/connect
qs@0.1.0 /usr/local/lib/node_modules/express/node_modules/qs
/usr/local/bin/express -> /usr/local/lib/node_modules/express/bin/express
express@2.3.2 /usr/local/lib/node_modules/express

As you can see, installing a module also installs its dependencies. This works because a module can be packaged with meta-data, like so:

// express/package.json 
{
  "name": "express", 
  "description": "Sinatra inspired web development framework", 
  "version": "2.3.2", 
  "author": "TJ Holowaychuk <tj@vision-media.ca>", 
  "contributors": [ 
    { "name": "TJ Holowaychuk", "email": "tj@vision-media.ca" }, 
    { "name": "Guillermo Rauch", "email": "rauchg@gmail.com" } 
  ], 
  "dependencies": { 
    "connect": ">= 1.4.0 < 2.0.0", 
    "mime": ">= 0.0.1", 
    "qs": ">= 0.0.6" 
  }, 
  "keywords": ["framework", "sinatra", "web", "rest", "restful"], 
  "repository": "git://github.com/visionmedia/express", 
  "main": "index", 
  "bin": { "express": "./bin/express" }, 
  "engines": { "node": ">= 0.4.1 < 0.5.0" } 
}

The package.json contains information about who made the module and about its dependencies, along with some additional information to enable better searching.

npm installs the modules from a common repository, which contains more than 1800 modules.

Noteworthy Modules

Express is probably the most used of all third-party modules. It is a Sinatra clone and it is very good, just like Sinatra.

// Create a server 
var express = require('express'); 
var app = express.createServer(); 
// Mount the root (/) and redirect to index 
app.get('/', function(req, res) { 
  res.redirect('/index.html'); 
}); 
// Handle a post to /quiz 
app.post('/quiz', function(req, res) { 
  // handle the posted quiz answers 
}); 

Express uses Connect to handle middleware. Middleware is like Rack, but for Node. (No wonder Node is nice to work with when it borrows its ideas from Ruby :)

var connect = require('connect'); 
var server = connect.createServer( 
      // Add a logger 
      connect.logger() 
      // Serve static files from the current directory 
    , connect.static(__dirname) 
      // Compile Sass and CoffeeScript files, on the fly 
    , connect.compiler({enable: ['sass', 'coffeescript']}) 
      // Profile all requests 
    , connect.profiler() 
); 

Another popular library is Socket.IO. It handles the usual socket variants, such as WebSocket, Comet, Flash Sockets, etc...

var http = require('http'); 
var io = require('socket.io'); 
var server = http.createServer(function(req, res){ ... }); 
// socket.io attaches to an existing server 
var socket = io.listen(server); 
socket.on('connection', function(client){ 
  // new client is here! 
  client.on('message', function(){ ... }); 
  client.on('disconnect', function(){ ... }); 
}); 

MySql has a library for Node.

// Note the callback style; `client` is an open database connection 
client.query('select * from T', 
  function(err, results, fields) { 
    if (err) { throw err; } 
    console.log(results); 
  }); 

And Mongoose can be used for accessing MongoDB.

// Declare the schema 
var mongoose = require('mongoose'); 
var Schema = mongoose.Schema 
  , ObjectId = Schema.ObjectId; 

var BlogPost = new Schema({ 
    author    : ObjectId 
  , title     : String 
  , body      : String 
  , date      : Date 
}); 
mongoose.model('BlogPost', BlogPost); 

// Use it 
var BlogPost = mongoose.model('BlogPost'); 
// Save 
var post = new BlogPost(); 
post.author = 'Stravinsky'; 
post.save(function (err) { 
  // handle err 
}); 
// Find 
BlogPost.find({}, function (err, docs) { 
  // docs.forEach(...) 
}); 

Templating Engines

Every time a new platform makes its entrance, it brings along a couple of new templating languages, and Node is no different. Along with the popular ones from the Ruby world, like Haml and Erb (EJS in Node), come some new ones, like Jade, and some browser templating languages, like Mustache and jQuery templates. I'll show examples of Jade and Mu (Mustache for Node).

I like Jade, because it is a Javascript dialect of Haml and it seems appropriate to use if I'm using Javascript on the server side.

!!! 5 
html(lang='en') 
  head 
    title= pageTitle 
    script(type='text/javascript') 
      if (foo) { 
        bar() 
      } 
  body 
    h1 Jade - node template engine 
    #container 
      - if (youAreUsingJade) 
        p You are amazing 
      - else 
        p Get on it! 

I'm not really sure if I like Mustache or not, but I can surely see the value of having a templating language which works both on the server side and in the browser.

{{#links}} 
    <li><a href="{{url}}">{{name}}</a></li> 
{{/links}} 
{{^links}} 
  <p>The list is empty.</p> 
{{/links}} 
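To show why sharing templates between server and browser is so easy, here is a toy variable substitution in plain Javascript. Mustache proper also handles sections like the ones above, escaping, and partials; this sketch only covers simple {{name}} tags:

```javascript
// Replace {{name}} tags with values from a context object.
// Toy version: variables only, no sections or escaping.
function render(template, context) {
  return template.replace(/\{\{(\w+)\}\}/g, function (match, key) {
    return context.hasOwnProperty(key) ? context[key] : '';
  });
}

console.log(render('<a href="{{url}}">{{name}}</a>',
                   { url: '/tapirs', name: 'Tapirs' }));
// -> <a href="/tapirs">Tapirs</a>
```

Since the template is just a string and the renderer is just a function, the exact same pair can run in Node or be shipped to the browser.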


Testing

Node comes with assertions built in, and all testing frameworks build on the Assert module, so it is good to know.

assert.ok(value, [message]); 
assert.equal(actual, expected, [message]) 
assert.notEqual(actual, expected, [message]) 
assert.deepEqual(actual, expected, [message]) 
assert.strictEqual(actual, expected, [message]) 
assert.throws(block, [error], [message]) 
assert.doesNotThrow(block, [error], [message]) 
assert.fail(actual, expected, message, operator) 
// Example 
// assert.throws(function, regexp) 
assert.throws( 
  function() { throw new Error("Wrong value"); }, 
  /value/ 
); 

Apart from that there are at least 30 different testing frameworks to choose from. I have chosen NodeUnit, since I find that it handles asynchronous testing well, and it has a nice UTF-8 output that looks good in the terminal.

// ./test/test-doubled.js 
var events = require('events'); 
var doubled = require('../lib/doubled'); 

// Exported functions are run by the test runner 
exports['calculate'] = function (test) { 
    test.equal(doubled.calculate(2), 4); 
    test.done(); 
}; 

// An asynchronous test 
exports['read a number'] = function (test) { 
    test.expect(1); // Make sure the assertion is run 
    var ev = new events.EventEmitter(); 
    process.openStdin = function () { return ev; }; 
    process.exit = test.done; 
    console.log = function (str) { 
        test.equal(str, 'Doubled: 24'); 
    }; 
    doubled.run();  // start the module under test (entry point assumed) 
    ev.emit('data', '12'); 
}; 


There are already a lot of platforms providing Node as a service (PaaS, Platform as a Service). Most of them use Heroku-style deployment by pushing to a Git remote. I'll show three alternatives that all provide free Node hosting.

Joyent (no.de)

Joyent, the employers of Ryan Dahl, give you ssh access so that you can install the modules you need. Deployment is done by pushing to a Git remote.

$ ssh node@my-machine.no.de
$ npm install express
$ git remote add node node@andersjanmyr.no.de:repo
$ git push node master
Counting objects: 5, done. 
Delta compression using up to 2 threads. 
Compressing objects: 100% (3/3), done. 
Writing objects: 100% (3/3), 321 bytes, done. 
Total 3 (delta 2), reused 0 (delta 0) 
remote: Starting node v0.4.7... 
remote: Successful
To node@andersjanmyr.no.de:repo
  8f59169..c1177b0  master -> master


Nodester

Nodester gives you a command line tool, nodester, that you use to install modules. Deployment is done by pushing to a Git remote.

$ nodester npm install express
$ git push nodester master
Counting objects: 5, done. 
Delta compression using up to 2 threads. 
Compressing objects: 100% (3/3), done. 
Writing objects: 100% (3/3), 341 bytes, done. 
Total 3 (delta 2), reused 0 (delta 0) 
remote: Syncing repo with chroot
remote: From /node/hosted_apps/andersjanmyr/1346-7856c14e6a5d92a6b5374ec4772a6da0.git/. 
remote:    38f4e6e..8f59169  master     -> origin/master
remote: Updating 38f4e6e..8f59169
remote: Fast-forward
remote:  Gemfile.lock |   10 ++++------
remote:  1 files changed, 4 insertions(+), 6 deletions(-) 
remote: Checking ./.git/hooks/post-receive
remote: Attempting to restart your app: 1346-7856c14e6a5d92a6b5374ec4772a6da0
remote: App restarted.. 
remote:     \m/ Nodester out \m/ 
To ec2-user@nodester.com:/node/hosted_apps/andersjanmyr/1346-7856c14e6a5d92a6b5374ec4772a6da0.git
   38f4e6e..8f59169  master -> master

Cloud Foundry

Cloud Foundry is one of the most interesting platforms in the cloud. It was a genius move by VMware to open source the platform, allowing anyone to set up their own cloud if they wish. If you don't want to set up your own Cloud Foundry cloud, you can use the service hosted at cloudfoundry.com.

With Cloud Foundry, you install the modules locally, and they are automatically deployed as part of vmc push. Push, in this case, does not mean git push, but rather: copy all the files from my local machine to the server.

$ npm install express  # Install locally 
mime@1.2.1 ./node_modules/express/node_modules/mime
connect@1.4.0 ./node_modules/express/node_modules/connect
qs@0.1.0 ./node_modules/express/node_modules/qs
express@2.3.0 ./node_modules/express
$ vmc push
Would you like to deploy from the current directory? [Yn]: Y
Application Name: snake
Application Deployed URL: 'snake.cloudfoundry.com'? 
Detected a Node.js Application, is this correct? [Yn]: 
Memory Reservation [Default:64M] (64M, 128M, 256M, 512M, 1G or 2G) 
Creating Application: OK
Would you like to bind any services to 'snake'? [yN]: 
Uploading Application: 
  Checking for available resources: OK
  Packing application: OK
  Uploading (1K): OK
Push Status: OK
Staging Application: OK
Starting Application: ........OK


There are of course a bunch of tools that come with a new platform. Jake is a Javascript version of Rake, but I am happy with Rake and I don't see the need to switch. There are, however, some tools that I cannot live without when using Node.


If you use the vanilla node command, you have to restart it every time you make a change to a file. That is awfully annoying, and there are already a number of solutions to the problem.

# Nodemon watches the files in your directory and reloads them if necessary 
$ npm install nodemon
nodemon@0.3.2 ../node_modules/nodemon
$ nodemon server.js 
30 Apr 08:21:23 - [nodemon] running server.js 
# Saving the file 
30 Apr 08:22:01 - [nodemon] restarting due to changes... 
# Alternative 
$ npm install supervisor
$ supervisor server.js 
DEBUG: Watching directory '/evented-programming-with-nodejs'.


Another tool that is hard to live without is a debugger. Node comes with one built in. It has a gdb flavor to it and it is kind of rough.

$ node debug server.js
debug> run
debugger listening on port 5858 
break in #<Socket> ./server.js:9 
debug> p data.toString(); 
// Javascript (server.js)
var net = require('net');
var echo = net.createServer(function(socket) {
  socket.on('data', function(data) {
      debugger; // <= break into debugger
  });
});
echo.listen(4000);

If you want a GUI debugger, it is possible to use the one that comes with Chrome by installing node-inspector. It is started similarly to the built-in debugger, but --debug is an option instead of a subcommand.

$ node-inspector & 
visit to start debugging
$ node --debug server.js
debugger listening on port 5858

After that you can just fire up Chrome on the URL, and you can debug the node process just as if it was running in the browser.


Idioms, patterns, techniques, call it what you like. Javascript code is littered with callbacks, and even more so with Node. Here are some tips on how to write good asynchronous code with Node.

Return on Callbacks

It is easy to forget to return from the function after a callback has been called. An easy way to remedy this problem is to call return before every call to a callback. Even though the value is never used by the caller, it is an easy pattern to recognize and it prevents bugs.

function doSomething(response, callback) {
  doAsyncCall('tapir', function(err, result) {
    if (err) {
      // return on the callback
      return callback(err);
    }
    // return on the callback
    return callback(null, result);
  });
}
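To see why the return matters, here is a small runnable sketch of the bug the pattern prevents. Note that doAsyncCall and buggy are made-up names for illustration: without the return, both the error branch and the success branch invoke the callback, so it fires twice.

```javascript
// doAsyncCall is a hypothetical stand-in for any error-first async call
function doAsyncCall(name, callback) {
  setTimeout(function() { callback(new Error('fail: ' + name)); }, 10);
}

function buggy(callback) {
  doAsyncCall('tapir', function(err, result) {
    if (err) {
      callback(err);        // missing return here...
    }
    callback(null, result); // ...so this one runs as well
  });
}

var calls = 0;
buggy(function() {
  calls++;
  if (calls === 2) console.log('callback called twice!');
});
```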

Exceptions in Callbacks

Exceptions that occur in callbacks cannot be handled the way we are used to, since the context is different. The solution to this is to pass the exception along as a parameter to the callback. In Node the convention is to pass the error as the first parameter into the callback.

insertIntoTable(row, function(err, data) {
  if (err) return callback(err);
  // Everything is OK
  return callback(null, 'row inserted');
});
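A quick sketch of why an ordinary try/catch does not help here. failLater is a made-up async function for illustration: by the time the error occurs, the try/catch block has long since exited, so the error has to travel through the callback instead.

```javascript
// failLater is a hypothetical async call that fails after 10 ms
function failLater(callback) {
  setTimeout(function() { callback(new Error('boom')); }, 10);
}

var received = null;
try {
  failLater(function(err) {
    received = err; // the error arrives here, as a parameter
  });
} catch (e) {
  // Never reached: the callback runs after this try/catch has exited
  console.log('caught: ' + e.message);
}
```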

Parallel Execution

If you have multiple tasks that need to be finished before you take some new action, this can be handled with a simple counter. Here is an example of a simple function that starts up a bunch of functions in parallel and waits for all of them to finish before calling the callback.

// Do all in parallel
function doAll(collection, callback) {
  var left = collection.length;
  collection.forEach(function(fun) {
    fun(function() {
      if (--left === 0) callback();
    });
  });
}

// Use it
var result = [];
doAll([
  function(callback) {
    setTimeout(function() {result.push(1); callback();}, 2000 )},
  function(callback) {
    setTimeout(function() {result.push(2); callback();}, 3000 )},
  function(callback) {
    setTimeout(function() {result.push(3); callback();}, 1000 )}
], function() { console.log(result); });
// Logs [3, 1, 2]

Sequential Execution

Sometimes the ordering is important. Here is a simple function that makes sure that the calls are executed in sequence. It uses recursion to make sure that the calls are handled in the correct order. It also uses the Node function process.nextTick() to prevent the stack from getting too large for large collections. Similar results can be obtained with setTimeout() in browser Javascript. It can be seen as a simple trick to achieve tail recursion.

function doInSequence(collection, callback) {
    var queue = collection.slice(0); // Duplicate
    function iterate() {
      if (queue.length === 0) return callback();
      // Take the first element
      var fun = queue.splice(0, 1)[0];
      fun(function(err) {
        if (err) throw err;
        // Call it without building up the stack
        process.nextTick(iterate);
      });
    }
    iterate();
}

// Use it
var result = [];
doInSequence([
  function(callback) {
    setTimeout(function() {result.push(1); callback();}, 2000 )},
  function(callback) {
    setTimeout(function() {result.push(2); callback();}, 3000 )},
  function(callback) {
    setTimeout(function() {result.push(3); callback();}, 1000 )}
], function() { console.log(result); });
// Logs [1, 2, 3]
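The setTimeout() variant for browser Javascript looks almost the same. This is a sketch (doInSequenceBrowser is my own name for it), with setTimeout(fn, 0) playing the role of process.nextTick():

```javascript
// Browser variant: setTimeout(iterate, 0) keeps the stack flat
function doInSequenceBrowser(collection, callback) {
  var queue = collection.slice(0); // Duplicate
  function iterate() {
    if (queue.length === 0) return callback();
    var fun = queue.shift();
    fun(function(err) {
      if (err) throw err;
      setTimeout(iterate, 0); // instead of process.nextTick(iterate)
    });
  }
  iterate();
}

// Use it
var out = [];
doInSequenceBrowser([
  function(callback) { out.push(1); callback(); },
  function(callback) { out.push(2); callback(); }
], function() { console.log(out); });
// Logs [ 1, 2 ]
```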

Library Support for Asynchronous Programming

If you don't want to write these functions yourself, there are a few libraries that can help you out. I'll show two versions that I like.


Fibers

Fibers are also called co-routines. Fibers provide two operations, suspend and resume, which allow us to write code in a synchronous-looking style. In the Node version of fibers, node-fibers, suspend and resume are called yield() and run() instead.

var print = require('util').print; 

function sleep(ms) { 
    var fiber = Fiber.current; 
    setTimeout(function() { fiber.run(); }, ms); 
    yield(); 
} 

Fiber(function() { 
    print('wait... ' + new Date + '\n'); 
    sleep(1000); 
    print('ok... ' + new Date + '\n'); 
}).run(); 

print('back in main\n'); 

Fibers are a very nice way of writing asynchronous code, but in Node they have one drawback: they are not supported without patching the V8 virtual machine. The patching is done when you install node-fibers, and you have to run the command node-fibers instead of node to use it.

The async Library

If you don't want to use the patched version of V8, I can recommend the async library. Async provides around 20 functions that include the usual 'functional' suspects (map, reduce, filter, forEach...) as well as some common patterns for asynchronous flow control (parallel, series, waterfall...). All these functions assume you follow the Node convention of providing a single callback as the last argument of your async function.

async.map(['file1','file2','file3'], fs.stat, function(err, results){ 
    // results is now an array of stats for each file 
}); 

async.filter(['file1','file2','file3'], path.exists, function(results){ 
    // results now equals an array of the existing files 
}); 

async.parallel([ 
    function(){ ... }, 
    function(){ ... } 
], callback); 

async.series([ 
    function(){ ... }, 
    function(){ ... } 
], callback); 


Node is definitely an interesting platform. The possibility of having Javascript running through the whole stack, from the browser all the way down into the database (if you use something like CouchDB or MongoDB), really appeals to me. The easy way to deploy code to multiple, different cloud providers is also a good argument for Node.