Glenn Gillen

Rails Underground - Day 2

update: videos are now online

I was a little late to the kick-off of the keynote on day 2 with Yehuda. For the most part it was another discussion of what's happening in Rails 3, so I don't think I missed anything I hadn't heard before. He then went into more detail on how things have changed than he has previously, probably due to the progress since I last saw him in March…

Yehuda Katz - Keynote

Yehuda went through the upcoming changes in approach and philosophy on Rails development: having well-defined contracts, consistent conventions, and allowing things to be changeable.

One of the changes of note is splitting ideas and interfaces off into modules which can then be imported into a class at runtime. He used the analogy that your parents didn't define everything you could do when you were born; you can learn new things. So too, when you define a class you don't have to define the complete interface up front, nor do you need to re-implement the entire class to make changes. Instead you can have sub-sections of interfaces included or changed at runtime where appropriate.
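This maps directly onto Ruby's module system: behaviour can be mixed into a class, or even a single object, after it has been defined. A contrived pure-Ruby sketch of the idea (the names here are made up for illustration):

```ruby
# Behaviour defined separately from the class...
module Walking
  def walk
    "walking"
  end
end

module Talking
  def talk
    "talking"
  end
end

class Person; end

person = Person.new

# ...and mixed in at runtime, only where it's needed.
Person.include(Walking)   # every Person instance can now walk
person.extend(Talking)    # only this particular object can talk

person.walk  # => "walking"
person.talk  # => "talking"
```

`include` changes the class for all instances, while `extend` teaches a single object a new trick; Rails 3 leans on this mechanism to bolt interfaces on at runtime.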

In Rails 3 this translates into better separation of the components and clearly defined interfaces. No longer are you forced to use ActionView, you just need to use something that is "ActionView compliant". The same goes for ActiveRecord: your models just need to be "ActiveModel compliant".

An ActionView compliant interface just needs to implement the following 3 methods:

for_controller(paths, assigns, controller)

An ActiveModel compliant interface needs the following 4 methods:


That last model_name method needs to return the plural name, partial name, and a few other things Rails expects internally. It's only 10-20 lines of code to implement manually, or there is a module to include if you just want the default behaviour.

Next up is a cleaner separation of components; one we got a demo of was splitting out ActiveModel::Validations. Now you can create your own ActiveModel compliant persistence/ORM layer and pull in the existing validations for free, regardless of where the data is stored (YAML, CouchDB, etc.).

By implementing the above you get a system that fully works with ActionPack, so all the usual form and error helpers will just work.

The other benefit is that pulling in support for these doesn't require loading the entire Rails stack and robbing your machine of RAM; it's only a 1.5MB hit.

Next was the new ActionController::Metal demo. At first I wondered why I'd bother: it was slightly more verbose than just using Rackable or rolling it manually, and seemed to carry more overhead than Sinatra. But it gives you the ability to opt into additional features easily. If you want before/after filters you can pull in the callbacks module. If you want better rendering options (various formats, etc.) pull in the rendering module; if you want different layouts pull in that module. ActionController::Base is basically just ActionController::Metal with all of the modules pulled in, so you've now got a way to strip back to the metal, or anything in between.
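Underneath, this opt-in style is plain Ruby module composition. A sketch of the pattern with invented names (MetalController, Callbacks and Rendering here are illustrations, not the real Rails 3 modules):

```ruby
# A bare "metal" base: just enough to dispatch an action.
class MetalController
  def dispatch(action)
    send(action)
  end
end

# Optional behaviour lives in modules you opt in to.
module Rendering
  def render(text)
    "<html><body>#{text}</body></html>"
  end
end

module Callbacks
  def dispatch(action)
    before if respond_to?(:before)   # a crude before-filter
    super
  end
end

# "Base" would be the metal plus every module; here we pick just two.
class PagesController < MetalController
  include Rendering
  include Callbacks

  def before
    @greeting = "hello"
  end

  def index
    render @greeting
  end
end

PagesController.new.dispatch(:index)  # => "<html><body>hello</body></html>"
```

Each module slots into the method lookup chain, so `super` walks from the filters down to the bare dispatcher.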

The final piece is ActiveSupport. If for some reason you want to pull in the entire ActiveSupport behemoth you just call:

require 'active_support/all'

Otherwise you can call in the various newly separated modules, one of note was

require 'active_support/ruby/shim'

Which backports a bunch of useful Ruby 1.9 features onto datatypes that could be easily implemented in pure Ruby. So you can use them now, and when you upgrade to Ruby 1.9 you get the performance improvement of having them implemented in C.
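The shim technique itself is simple: define the method in pure Ruby only when the running interpreter lacks it. A hypothetical backport of String#start_with? (which Ruby 1.9 added natively), to show the shape:

```ruby
class String
  # Only define the backport when this interpreter lacks the method;
  # on Ruby 1.9+ the native C implementation is left alone.
  unless method_defined?(:start_with?)
    def start_with?(prefix)
      self[0, prefix.length] == prefix
    end
  end
end

"rails".start_with?("ra")  # => true
```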

The end result is a new level of granularity. Just because you want to use DataMapper doesn't mean you have to say goodbye to everything around ActiveRecord; you can opt out of just the tiny pieces you don't want. Conversely, you can start completely bare and opt in to various features to build your own framework on top of Rails. You could now build Sinatra on top of Rails, and when the complexity of your app increases you can much more easily pull in the additional features you need. All in all, it sounds really cool. Cool enough that I might even try using it on my next app.

Dr. Nic Williams - Dead Simple JavaScript Unit Tests in Rails with Blue Ridge and Screw.Unit

Dr. Nic dropped a very important bombshell right at the start of his talk: that while Australia may indeed be losing the current 5-game series, we are still holders of The Ashes. Now, on with the less important aspects of the presentation.

Nic has a bunch of open source plugins and projects he could have given a presentation on, but he's so impressed with Blue Ridge that he feels it's the most important thing to happen to the Rails ecosystem since Cucumber. At a high level it allows you to do both in-browser and headless testing, it fits in with the existing rake tasks, and it means that JavaScript testing should just become part of the usual TDD/BDD development approach and not an afterthought in Cucumber.

Generating a JavaScript test with Blue Ridge creates a spec file, and a fixture file which gives you some test HTML. The idea is that most of your JavaScript ultimately wants to modify the DOM, so run the app and copy out the rendered HTML you need to use as your stub.

An example of a test is as follows:


Screw.Unit(function() {
  describe('accounts/new initially has fields', function() {
    it("should have name field", function() {
      expect($('#account_name').size()).to(equal, 1);
    });
  });

  describe('accounts/new to fail if missing name', function() {
    it("should have erroneous name field", function() {
      expect($('div.fieldWithErrors #account_name').size()).to(equal, 1);
    });
  });
});

The first test checks that an account_name field exists, and in TDD it initially shouldn't. So next you insert the HTML you need into your stub HTML file. (Note: you'll also need to ensure the form doesn't actually submit during the test, so add a function that returns false when the button is clicked.)

When you run the tests you should find that you get nicely green/red tests in browser for any passing/failing tests.


Screw.Unit(function() {
  describe('accounts/new to fail if missing name', function() {
    it("should have erroneous name field", function() {
      expect($('div.fieldWithErrors #account_name').size()).to(equal, 1);
    });
  });
});

Next, check that when you submit the form it wraps the empty form input with the standard Rails error div. Run the test, get a failure. Implement the code and the test goes green. You'll probably notice that the test passes but you don't get the red box, so you need to include the CSS in the test (just below the require) to actually see what the user would see:


Next Dr. Nic went through a big gotcha: the tests don't run in transactions, which means the DOM isn't reset back to its initial state. He highlighted it by adding an additional test to check that when you input a name you don't get the error div. Unfortunately the test fails, as the div had been inserted by the previous test.

The way to deal with it was using a before block in your test:


Screw.Unit(function() {
  before(function() {
    // reset the fixture DOM here before each test
  });

  describe('accounts/new to fail if missing name', function() {
    it("should have erroneous name field", function() {
      expect($('div.fieldWithErrors #account_name').size()).to(equal, 1);
    });
  });
});

If someone can come up with an easy way to snapshot the DOM/body.innerHTML and all attached events and substitute it back in after each test that would be awesome.

Defining your own matchers is also pretty simple. You add an object with match (to do the comparison) and failure_message (to display the feedback) and push it onto the matchers hash/dictionary.

If anybody is wondering whether they should test more, you need to watch the last 30 seconds of the presentation once the videos are available. Great job Nic :)

Paolo Negri - Divide and conquer riding rabbits and trading gems

Paolo had a problem: he had 1,000,000 search phrases and wanted to compare the results between Google and Bing (warning: don't try this at home… it's a breach of the terms of service). Trying to fetch 2 million pages is quite a time-consuming task; enter a distributed approach.

How many nodes will you need? How many workers? What mechanism should you use to distribute the work? RabbitMQ is an open source implementation of AMQP, written in Erlang. The fact it's written in Erlang is relevant, as Erlang inherently supports concurrency, fault tolerance, and distribution of tasks.

To get started you need to install the rabbitmq package and a gem:

sudo apt-get install rabbitmq-server
sudo gem install tmm1-amqp

There's a link to the code at the bottom so rather than transcribe it all I'll just talk through it.

First you create a worker which does the setup and then subscribe to the queue. The queue is where RabbitMQ will store all outstanding work, and the workers will request new work whenever they're free for more processing. At the other end of the process what happens is that your app pushes a message to an exchange, the exchange pops it on a queue, and then workers come take it off the queue. You can have multiple exchanges, multiple queues, and multiple workers. Queues and messages are resident in RAM, however you can persist them to disk for further fault tolerance (for a performance hit).
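The topology is easy to picture with an in-process stand-in: Ruby's stdlib Queue plays the RabbitMQ queue and threads play the workers. This is just an analogy for the message flow, not the tmm1-amqp API:

```ruby
require 'thread'

queue   = Queue.new   # stands in for a RabbitMQ queue
results = Queue.new

# A pool of workers, each pulling a job off the queue whenever it's free.
workers = 3.times.map do
  Thread.new do
    while (phrase = queue.pop) != :done
      # ...fetch and compare the search results for this phrase...
      results << "processed #{phrase}"
    end
  end
end

# The app plays the exchange, publishing work onto the queue.
%w[rails ruby rabbitmq].each { |phrase| queue << phrase }
workers.size.times { queue << :done }   # one stop marker per worker
workers.each(&:join)

results.size  # => 3
```

The real system just moves the queue out of process and onto a broker, so workers on any machine can pull from it.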

There are problems though: if a worker dies you could lose messages. To get around it you use acknowledgements: the worker prefetches a message, processes it, then sends an ACK to confirm completion. Messages from dead client connections go unacknowledged and the work is handed off elsewhere.

Another problem is that there is no easy way to control the workers. The solution to this is to create a system queue. You give each worker a unique ID, explicitly define which exchange you want to use, and make it use the system queue. You then subscribe the worker to that queue. If you want to do something to the worker you simply post the appropriate command onto the system queue, and the worker will pick up that command as soon as it can. You can also multicast messages to affect all workers.

Next up was drilling into some of the detail of fetching the pages from Google/Bing. For anyone who hasn't done something like this, you really need to use EventMachine and EventMachine::Deferrables. It means you're not blocked waiting for socket responses; you can fire callbacks when a response comes back and do other tasks (fetch other pages) concurrently in the one process.
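Stripped of the networking, a deferrable is just an object you attach callbacks to before the result exists; when the response arrives, the callbacks fire. A pure-Ruby sketch of the pattern (not EventMachine's actual class):

```ruby
# A minimal deferrable: callbacks registered now, fired on completion.
class FetchDeferrable
  def initialize
    @callbacks = []
  end

  def callback(&block)
    @callbacks << block
  end

  # Invoked when the response finally arrives.
  def succeed(result)
    @callbacks.each { |cb| cb.call(result) }
  end
end

pages = []
fetch = FetchDeferrable.new
fetch.callback { |body| pages << body }

# ...the event loop is free to fetch other pages in the meantime...

fetch.succeed("<html>result page</html>")   # response arrives, callback fires
pages.first  # => "<html>result page</html>"
```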

Problem #3 was working out how many workers you had, and keeping track of what they were doing. The solution was similar to the system queue approach: set up a heartbeat queue and have the workers post what they're doing to it. Then create a different worker which subscribes to that queue, takes all the messages off, and posts them somewhere for you to interrogate or view later (like a web page?).

For doing TDD with RabbitMQ there is moqueue which gives you easy testing of workers. There is also a web front-end for monitoring your RabbitMQ clients and make sure the system is alive called alice. For testing with EventMachine you've got em-spec.

Slides are available at SlideShare. Code is on Github

Pat Allan - "Sphinx - Beyond the basics"

Sphinx is a full-text search engine, and Thinking Sphinx is an ActiveRecord plugin which uses Sphinx. Once you've set it up, if you want to search you just do: Article.search "my name"

You'll also need to use a few rake tasks to create/update the indexes. Now the basics are out of the way.


Pat's written some Ruby code to provide a means of offering facets (search summaries) like other engines such as Solr offer. To use it you do:

indexes field, :facet => true
has attribute, :facet => true

rebuild your indexes, then you're good to search.

@facets = Article.facets 'pancakes'
@facets == { :author => { "Pat Allan" => 12,
                          "Glenn Gillen" => 1 },
             :tags => { "breakfast" => 2,
                        "brunch" => 3 } }

You can then drill into the results to return just the Articles for a specific author:

@facets.for(:author => "Pat Allan")


Excerpts got a bit of a cheer; apparently some people have been hanging out for them. They're basically the bold bits on a Google page of results, where the actual search terms are highlighted within the results. The syntax is:

@articles = 'pancakes'


Geo-searching is what Pat thinks could be the killer feature. The only gotcha is that the latitudes and longitudes need to be floats in radians. To define lats and longs on your models just do:

has latitude
has longitude

If you've already got lat and long data as degrees and decimals you can do:

has 'RADIANS(latitude)',
    :as => :latitude,
    :type => :float
has 'RADIANS(longitude)',
    :as => :longitude,
    :type => :float

Once that's done, you can find all pubs that serve pancakes near a given point:

@pubs = Pub.search 'pancakes',
  :geo => [@lat, @lng],
  :order => '@geodist ASC'

Once you've got your pubs you probably want the distance to each one:

@pubs.each_with_geodist do |pub, distance|
  # pub, plus its distance from the given point
end

International Sphinx

Using the character set table means you can now handle international characters more easily, so an "e" with an accent can be treated as a plain old English "e". The same goes for other similar characters.

With word stemming you can now define a custom stemmer to help you handle stemming in various languages without too much trouble.

Different Delta Approaches

In most rails apps you want information to be available as quickly as possible, so when users update a field you don't want to have to rebuild the whole index. To get around that you can do the following:

define_index do
  set_property :delta => true
end

The downside is you end up with a new column on the table, and a little additional HTTP overhead.

Instead you can skip the extra column and use the updated_at field to manage deltas:

define_index do
  set_property :delta => :datetime, :threshold => 2.hours
end

The downside is they're not updated immediately; changes can lag by up to the 2-hour threshold. Another alternative is to use the delayed option, which pushes the request out of process using DelayedJob.

define_index do
  set_property :delta => :delayed
end

This removes indexing from HTTP requests, and the results are available soon(ish). The downside is it needs a constantly running rake task (for DelayedJob), and a long queue of updates could take a while to get through.

Multi-server deployment

Basically: set the remote_sphinx setting to true, use the DelayedJob approach, and make sure Sphinx and the database are on the same server. You'll probably want DelayedJob and Thinking Sphinx on that server too. Pat's not actively using multi-server setups so he's looking for feedback on this.

Sphinx Scopes

People have been using named_scopes with Sphinx and it's caused some problems because of the difference between SQL and Sphinx. The new sphinx scopes are an implementation that works just like named scopes, except they chain together properly and work with pagination.

Brendan Lim - Mobilize Your Rails Application

There are an estimated 4 billion mobile users, which means there are more phones than personal computers. Approximately half of those users have mobile web access. With that much reach you need to make your apps accessible to as many of these users as possible.

Trying to serve just one webpage to all devices is problematic because of resolution differences, JavaScript, Flash, bandwidth, etc.

Enter MobileFu. It'll detect if a user is mobile, can add custom styling based on user agent and takes you one step closer to one webpage. You use it like:

class MyController < ActionController::Base
  has_mobile_fu

  def index
    respond_to do |format|
      format.html
      format.mobile
    end
  end
end

Which will output the page in a mobile-friendly format. Some of the magic happens in stylesheet_link_tag:

stylesheet_link_tag "foo"

will look for foo_iphone.css, foo_android.css, etc., so you can have custom styles for each device. You've also got various helper methods to take actions within your view:


Rails iUI is a wrapper for the iUI user interface framework. It takes care of detecting rotation of an iPhone and provides various other helpers. To use it you do the following in your controller:

class MyController < ActionController::Base

  def index
    respond_to do |format|
      # respond with the iPhone-specific format here
    end
  end
end

And then within your view:


To display iPhone styled default interface objects you've the following methods:

iui_toolbar(initial_caption, search_url)
iui_list(items, opts)
iui_grouped_list(items, opts, &group_by_block)

Again, you need to add some stuff into your views to make it work:

<body <%=register_orientation_change%>>

You'll receive params[:position] in the request to the URL you put in observe_orientation_change; 0 is upright, and 90 and -90 are the two landscape modes (depending on the direction of rotation).

Next up is SMS. You can use Clickatell's API. Once you connect to the API you just do:

api.send_message(phone_number, message)

Brendan also wrote something called SMSFu. The drawbacks are that you need to know the recipient's carrier, and it doesn't support as many carriers. But because it sends SMS via email it's free to send (to supported carriers). To send you just do:

deliver_sms(phone_number, carrier, message)

Now how do you receive SMS? Well, you can get a shortcode (an inbound SMS number like 80800), but they're quite expensive to set up, with monthly costs associated. Alternatively there are providers (like TextMarks) where you're given a keyword that users need to prepend to the front of their message, and you share a shortcode. TextMarks then sends the next set of words (up to 9) as an array to you to do what you want with.
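The keyword-splitting TextMarks does is easy to picture. A hypothetical parser (the message format and method name here are invented for illustration, not TextMarks' actual payload):

```ruby
# An incoming message like "PIZZA order two pepperoni", where
# "PIZZA" is your keyword on the shared shortcode.
def parse_sms(body, keyword)
  words = body.split
  return nil unless words.first.to_s.casecmp(keyword).zero?
  words[1, 9]   # up to 9 words after the keyword
end

parse_sms("PIZZA order two pepperoni", "PIZZA")  # => ["order", "two", "pepperoni"]
```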

If you're wanting to receive MMS, then MMS2R is the gem to use. It strips out any carrier advertising and other stuff that may be added around the core attachment that you actually want. You'll get access to the message subject, body, and default_media (the attachment).

Jim Weirich, DHH, Obie Fernandez, Geoffrey Grosenbach, Jonathan Siegel - Q&A Session

Bah, I can't keep pace with questions and responses in a fashion that does them justice. Best wait for the videos.

Eleanor McHugh - The Ruby Guide to *nix Plumbing

Unix isn't just a server operating system; it's a philosophy. It consists of a whole heap of little tools that each receive a little piece of input and do just a little piece of work. And they can all work together to achieve much bigger tasks.

Eleanor went through a bit of an explanation of why she feels the fascination with better threading in Ruby is a bit misguided, and that the threading approach introduced in 1.8 was a backward step from 1.6. Most problems can be worked around by forking the process, and threading is very rarely the appropriate solution.

The shell is simply a process which provides an interactive environment, job/process management, and a scripting language. However, you can get around using the standard shell scripting languages by using the under-documented shell.rb.

A little known fact is that IO.new and File.new don't just accept a string; they will also accept a file descriptor (which is just a number). So if you open 0, 1, or 2 you'll end up talking directly to STDIN, STDOUT, and STDERR respectively.
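In Ruby that looks like this (plain stdlib, no gems):

```ruby
# File descriptors 0, 1 and 2 are conventionally stdin, stdout and stderr.
out = IO.new(1, "w")   # wrap the existing stdout descriptor
err = IO.new(2, "w")   # and stderr

out.puts "straight to stdout"
err.puts "straight to stderr"
```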

Taking the approach further means that you can effectively communicate with any process in a similar fashion by sending signals to the appropriate process descriptors.

When working with sockets in Ruby, Eleanor said you should turn off reverse DNS lookups, as they greatly slow down network connections and tie up your processes needlessly.
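The switch is a single global flag on BasicSocket (the TCPServer below is only there to give us a socket to look at):

```ruby
require 'socket'

# Global: sockets created after this skip the reverse DNS lookup,
# so address info comes back numeric instead of as a hostname.
BasicSocket.do_not_reverse_lookup = true

server = TCPServer.new('127.0.0.1', 0)   # port 0 picks a free port
addr = server.addr                       # e.g. ["AF_INET", 54321, "127.0.0.1", "127.0.0.1"]
server.close
```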

There's also a whole host of examples of using low level system calls and examples in the slides which I've no chance of transcribing quickly enough with the right context.

Lindsay Holmwood - Behaviour driven monitoring with cucumber-nagios

Everyone is probably familiar these days with what Cucumber is and the descriptive syntax it gives you for defining tests. You can combine it with Webrat and Mechanize to give you a way of monitoring external websites. To get started you need to install the gem and set up a new project:

sudo gem install auxesis-cucumber-nagios
cucumber-nagios-gen project mysites
cd mysites
rake deps

You then create a new feature, for the sake of example create one to test the navigation:

cucumber-nagios-gen feature navigation

which will create the feature file under features/ From there you can use all the usual cucumber/webrat commands to go to a page, click links, and test for the resulting text. To run the test:

cucumber-nagios features/

cucumber-nagios will then take care of taking the cucumber results and outputting them in a nagios plugin format.
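The Nagios plugin convention being targeted here is simple: one line of status text on stdout plus an exit code (0 OK, 1 WARNING, 2 CRITICAL, 3 UNKNOWN). A sketch of that translation (my own illustration, not cucumber-nagios' actual internals):

```ruby
# Turn cucumber step counts into a Nagios-style status line and exit code.
def nagios_report(passed, failed)
  if failed.zero?
    ["CUCUMBER OK - #{passed} steps passed", 0]
  else
    ["CUCUMBER CRITICAL - #{failed} of #{passed + failed} steps failed", 2]
  end
end

nagios_report(12, 0)  # => ["CUCUMBER OK - 12 steps passed", 0]
nagios_report(10, 2)  # => ["CUCUMBER CRITICAL - 2 of 12 steps failed", 2]
```

Nagios only reads that one line and the exit status, which is why any test runner can be adapted into a plugin this way.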

The caveat is that you're going to run into problems if your site is JavaScript dependent. But you're embracing progressive enhancement and your site works fine without JavaScript, right? You could also run into some problems if you're trying to run multiple scenarios within the same feature file.

The big benefit of this approach is that your system monitoring ends up actually testing what you care about. Traditionally these tests are just a ping or a socket connection. If you've got a server down those tests will definitely fail, but there is a whole host of other cases where your site might be responsive yet not usable by a real person (DB down?). Lindsay argues that this approach really is (and should be) continuous integration for systems monitoring.

Some cool examples of people extending it include telephony-systems-test, which allows you to test an Asterisk/Adhearsion telephony dial plan, and cucumber+dash, which pulls metrics out of the hosted Dash metrics system to alert you of application performance problems.

Lindsay also went through what seemed like an awesome replacement for Nagios that he's been working on, which is handling up to 6,000 tests per second. It's super top secret at the moment and he didn't want specifics to go outside the room. He's auxesis on GitHub; you join the dots.

I've also got summaries for Rails Underground - Day 1
