Ruby Performance Testing Tips and Tricks
This post comes to us from our friends and partners at thoughtbot, a leading provider of Ruby on Rails based web application development and training services. Many thanks to Jason Morrison, Matt Jankowski, Dan Croak and Nick Quaranto from the thoughtbot team for pulling this together.
### Use em-proxy to load-test code changes
by Jason Morrison
Running a high-traffic site with little room for downtime could make deploying large changes daunting. What if your new code has a large impact on performance?
We’ve been in this situation several times with Hoptoad as we make large changes and improvements to the codebase. Things like changing databases, upgrading major versions of Rails, and adding queueing can have unpredictable effects on performance. Little things can have a big effect, too, when most of your traffic is focused in one write-heavy API endpoint.
We’ve boosted our confidence in rolling out these changes by performance testing ahead of time. We’ve tried a variety of synthetic load testing approaches, but the most realistic way to do this is with real traffic. We use Ilya Grigorik’s em-proxy to fork traffic in real time from our production environment to a load testing environment that is identical except for the new code (see the duplex.rb example in em-proxy).

We use Engine Yard’s “Clone Environment” feature to make a reasonably up-to-date copy of production, and a Chef recipe to toggle em-proxy on and off. We use New Relic RPM to keep an eye on performance and make sure the load testing environment holds up. Once we’ve run a few hours’ to a day’s worth of traffic through it, we shut down the load testing environment and deploy the change live.
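To show the shape of the traffic-forking setup without pulling in EventMachine, here is a minimal sketch of em-proxy’s “duplex” idea using only the Ruby standard library. Every request from the client is forwarded to both the production backend and a shadow (load-test) backend, but only the production response goes back to the client. The single read/write exchange, localhost addresses, and buffer sizes are simplifications for illustration; in practice we run em-proxy itself, toggled by a Chef recipe.

```ruby
require "socket"

# Sketch of traffic duplexing: forward client bytes to two backends,
# return only the production backend's response to the client.
def start_duplex_proxy(prod_port, shadow_port)
  listener = TCPServer.new("127.0.0.1", 0) # pick a free port
  Thread.new do
    loop do
      client = listener.accept
      Thread.new(client) do |c|
        prod   = TCPSocket.new("127.0.0.1", prod_port)
        shadow = TCPSocket.new("127.0.0.1", shadow_port)
        data = c.readpartial(4096)
        prod.write(data)    # the backend whose response we care about
        shadow.write(data)  # shadow copy; its response is discarded
        c.write(prod.readpartial(4096))
        [prod, shadow, c].each(&:close)
      end
    end
  end
  listener.addr[1] # return the port the proxy is listening on
end
```

The key property is that the shadow environment sees real production traffic under real timing, while clients are never affected by its behavior or latency.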
### Automatic Cucumber step generation with Factory Girl
by Dan Croak
Did you know that Factory Girl ships with Cucumber steps for your integration testing pleasure? They’ve been available for a while but remain relatively unknown.
Let’s assume you’ve defined your factories normally in test/factories.rb or spec/factories.rb:
```ruby
Factory.define :user do |user|
  user.email { Factory.next(:email) }
  user.password { "password" }
  user.password_confirmation { "password" }
end

Factory.define :author, :parent => :user do |author|
  author.after_create { |a| Factory(:article, :author => a) }
end

Factory.define :recruiter, :parent => :user do |recruiter|
  recruiter.is_recruiter { true }
end
```
Once those are in place, and assuming you’ve otherwise loaded factory girl correctly, add this to features/support/env.rb:
```ruby
require 'factory_girl/step_definitions'
```
Then, write Cucumber features using the simple “create record” step:
```gherkin
Given a user exists
```
…or the “create record & set one attribute” step:
```gherkin
Given an author exists with an email of "[email protected]"
```
…or the “create record & set multiple attributes” step:
```gherkin
Given the following recruiter exists:
  | email                 | phone number | employer name |
  | [email protected]   | 1234567890   | thoughtbot    |
```
These steps will be available for all your factories, so stop writing boilerplate steps and shake what Factory Girl gave you.
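Under the hood, these steps map the human-readable phrases in your feature files onto your factories’ attribute names. The following is a rough, hypothetical sketch of that mapping (it is not factory_girl’s actual source; the real step definitions live in `factory_girl/step_definitions`): a table header like “phone number” is normalized into the attribute `:phone_number` before being handed to the factory.

```ruby
# Hypothetical sketch: normalize a step phrase or table header into a
# factory attribute name, e.g. "phone number" => :phone_number.
def attribute_for(header)
  header.to_s.strip.downcase.gsub(/\s+/, "_").to_sym
end

# Turn a Cucumber-style table row (header => value) into the kind of
# attributes hash a Factory(:recruiter, attrs) call would receive.
def attributes_from_row(row)
  row.each_with_object({}) do |(header, value), attrs|
    attrs[attribute_for(header)] = value
  end
end
```

This convention is why the generated steps work for any factory you define: the step text is just a thin layer over the attribute names you already wrote.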
### (Testbot * Fog) + Hudson = Faster Tests!
by Nick Quaranto
Test suites can get slow, even when running on one high-powered machine. So why not spin up EC2 instances and distribute the tests?
This is made really easy with testbot, a distributed test runner. It works like so:
- A requester kicks off the process, asking for tests to be run
- The server determines how many tests to run and how they will be distributed
- Runners on your army of EC2 instances execute the tests given to them
- Each runner reports its results back up to the server
- The server returns the combined results to the requester
Visually (adapted from the testbot README), it looks like:

```
Requester -- (files to run) --> Server -- (files to run) --> Runner(s)
    ^                             |  ^                           |
    |--------- (results) ---------|  |-------- (results) --------|
```
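The core scheduling idea can be sketched in a few lines of plain Ruby. This is a hypothetical illustration, not testbot’s real scheduler (which is smarter about balancing work): the server splits the suite’s files into one bucket per runner process, so with 32 processes each one chews through roughly 1/32 of the suite in parallel.

```ruby
# Hypothetical sketch of distributing test files across runner
# processes by simple round-robin assignment.
def distribute(files, runner_count)
  buckets = Array.new(runner_count) { [] }
  files.each_with_index { |file, i| buckets[i % runner_count] << file }
  buckets
end
```

With this kind of split, the suite’s wall-clock time approaches the runtime of the slowest bucket, which is why adding runners pays off so dramatically for long suites.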
In our situation, the requester is Hudson (the CI server project), which kicks off when we commit to GitHub. Testbot supports not only multiple machines but also multiple cores, so our tests run on 16 medium instances across 32 processes. We start the boxes on EC2 using a fork of cloud_bot, which uses fog to quickly spin up instances and runs a simple bash script to install what we need to run the app’s tests. The result: instead of a 60+ minute test suite, the tests run in around 10 minutes. If your tests need a major speed boost, definitely look into testbot.

### Hoptoad deploy tracking resolves errors
by Matt Jankowski
This has been around for quite some time, but we still get a lot of questions about it. People who use Hoptoad, our web-based error tracking application, often ask how they can “resolve all errors” in their account.
Typically, they’ve been running production code for a while and have accumulated a lot of “old” errors that they want to stop paying attention to. They’d like to focus instead on errors that are happening NOW and not be distracted by things they might have fixed. There are two ways to resolve all, and both are connected to deployment.
If you have an account that supports this, and you use Capistrano, then this feature “just works”. When you deploy with Capistrano, you’ll see output toward the end of the deploy indicating that a request has been made to Hoptoad telling it about the deploy. The next time you access your Hoptoad account, you’ll see that all previous errors have been marked as resolved; they will only be reopened if they keep occurring.
If you don’t use Capistrano, you can get the same effect by running a rake task: `rake hoptoad:deploy TO=production` (where production is the name of whichever environment you’d like to resolve all errors for). On hosting platforms where Capistrano isn’t an option, you can usually find some sort of post-deploy hook to automate this step as well.
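For the curious, here is a hedged sketch of what the deploy notification amounts to under the hood: a small form-encoded HTTP POST to Hoptoad. The parameter names below follow hoptoad_notifier’s conventions but are shown for illustration only; rely on the gem’s Capistrano hook or rake task rather than hand-rolling this in production.

```ruby
require "uri"

# Illustrative sketch of the payload a deploy notification carries.
# Parameter names follow hoptoad_notifier's conventions; treat this
# as a sketch, not a spec.
def deploy_params(api_key, rails_env, options = {})
  params = {
    "api_key"           => api_key,
    "deploy[rails_env]" => rails_env
  }
  params["deploy[scm_revision]"]   = options[:revision] if options[:revision]
  params["deploy[local_username]"] = options[:username] if options[:username]
  params
end

# The payload as it would appear on the wire (form-encoded body).
def deploy_body(api_key, rails_env, options = {})
  URI.encode_www_form(deploy_params(api_key, rails_env, options))
end
```

Any post-deploy hook that can issue an HTTP POST with a payload along these lines gets you the same “resolve all” behavior as the Capistrano integration.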
We’ve resisted adding this functionality to the web interface, because in our opinion there is never a scenario where someone should want to resolve old errors that doesn’t also coincide with a deployment.
Share your thoughts with @engineyard on Twitter