Spinning Up Cloud Compute Instances
Computation is the lifeblood of the cloud, providing the raw resources for innovation. Unfortunately, with great power comes great complexity. The actual process of refining these resources into services is serious business, further complicated by the huge variance between offerings. Thankfully, flexibility and power no longer have to be out of reach. fog simplifies the process of utilizing these resources and smooths the differences between providers so that you can focus on creating the next revolutionary cloud service.
###Installing fog
fog is distributed as a RubyGem:
gem install fog
Or, for Bundler users, you can add it to your Gemfile:
gem "fog"
###fog => Rackspace Cloud Servers
We can start our exploration of cloud computing with Rackspace’s Cloud Servers. You can sign up here and copy down your API key and username from here. We are about to get into the code samples, so be sure to fill in anything in ALL_CAPS with your own values!
require 'rubygems' # only needed on older Ruby/RubyGems setups
require 'fog'

# create a connection
connection = Fog::Compute.new({
  :provider           => 'Rackspace',
  :rackspace_username => RACKSPACE_USERNAME,
  :rackspace_api_key  => RACKSPACE_API_KEY
})
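Before creating anything you can take a quick look at what the account offers. For example, here is a small sketch listing the available flavors (sizes) and images; flavors and images are standard fog collections, though the exact attributes can vary slightly between providers and fog versions.
# list the flavors (sizes) and images this account can use
connection.flavors.each {|flavor| puts "#{flavor.id}: #{flavor.name}"}
connection.images.each {|image| puts "#{image.id}: #{image.name}"}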
###Servers the Rackspace way
Creating a server on Rackspace is very easy if you are willing to accept the defaults (the smallest server size, using Ubuntu 10.04 LTS).
server = connection.servers.create
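If the defaults don’t suit you, create also accepts explicit attributes. Here is a sketch; the ALL_CAPS values are placeholders to fill in from the flavors and images listings above, and the name is just an example.
# FLAVOR_ID and IMAGE_ID are placeholders; look up real values via connection.flavors and connection.images
server = connection.servers.create({
  :name      => 'fog-demo',
  :flavor_id => FLAVOR_ID,
  :image_id  => IMAGE_ID
})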
You can then list your servers to see that the new one appears.
connection.servers
Or you can fetch the latest data for your particular server.
connection.servers.get(server.identity)
As you might imagine, this is a pretty common use case, so fog simplifies it by providing the reload method to refresh the state of a model to the freshest available.
server.reload
But this too can get tedious quickly, especially when servers can take several minutes to boot. fog uses wait_for in cases like this to periodically reload a model until either the block returns true or a timeout occurs (by default the timeout is 600 seconds). We can combine wait_for with ready? to check when a server has finished booting without needing to know the specifics of each service.
server.wait_for { ready? }
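wait_for also takes an optional timeout argument, and the block can do anything before returning the readiness check. For example, this sketch waits up to five minutes and prints a dot on each poll; the timeout argument is assumed here, so verify it against your fog version.
# wait up to 300 seconds, printing a dot each time fog polls the server
server.wait_for(300) { print '.'; ready? }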
Once we are done with that we can shut it down.
server.destroy
###Bootstrap: Servers the fog Way
Cycling servers is great, but in order to actually ssh to a server on Rackspace you need to place ssh keys (and ideally disable password authentication for root). Rather than worrying about the nitty gritty, we can utilize bootstrap.
server = connection.servers.bootstrap({
  :private_key_path => '~/.ssh/id_rsa',
  :public_key_path  => '~/.ssh/id_rsa.pub'
})
Bootstrap will create the server, but it will also make sure that port 22 is open for traffic and that ssh keys are set up. In order to get all the pieces put together the server has to be running, so we can skip checking ready? since it should already be true. Now we can send commands directly to the server.
server.ssh('pwd')
server.ssh(['pwd', 'whoami'])
These return an array of results, where each result carries stdout, stderr and status values so you can check what your commands accomplished.
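For example, here is a minimal sketch of running a command against the bootstrapped server above and printing what came back:
# each result exposes the command's output and exit status
server.ssh(['uptime']).each do |result|
  puts result.stdout
  puts result.stderr
  puts result.status
end
Now just shut it down to make sure you don’t continue getting charged.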
server.destroy
###Using Amazon EC2 and fog
Sign up for an account here and copy down your secret access key and access key id from here.
First, create a connection with your new account:
require 'rubygems'
require 'fog'
connection = Fog::Compute.new({
  :provider              => 'AWS',
  :aws_secret_access_key => YOUR_SECRET_ACCESS_KEY,
  :aws_access_key_id     => YOUR_ACCESS_KEY_ID
})
With that in hand we are ready to start making EC2 calls!
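One small note: if you want your servers somewhere other than the default us-east-1 region, the connection also accepts a :region option. This is a sketch; the option name is an assumption based on recent fog releases, so verify it against the version you have installed.
# same credentials, but target a specific EC2 region instead of the default
connection = Fog::Compute.new({
  :provider              => 'AWS',
  :aws_secret_access_key => YOUR_SECRET_ACCESS_KEY,
  :aws_access_key_id     => YOUR_ACCESS_KEY_ID,
  :region                => 'us-west-1'
})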
We should be able to reuse all our old code, with one small exception. fog uses the official Canonical Ubuntu image as a default on AWS (official Canonical Releases). This image uses ‘ubuntu’ as the username rather than ‘root’, so the bootstrap call will look slightly different.
server = connection.servers.bootstrap({
  :private_key_path => '~/.ssh/id_rsa',
  :public_key_path  => '~/.ssh/id_rsa.pub',
  :username         => 'ubuntu'
})
Just like on Rackspace, this will boot a server and place our keys so that we can ssh in, run commands and eventually shut down the server with destroy.
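For instance, a compact sketch of the rest of the cycle on EC2 might look like the following; the public_ip_address and dns_name attributes are what fog’s AWS server model exposes, but treat them as assumptions and confirm against your version.
# the bootstrapped server is already running, so we can inspect it right away
puts server.public_ip_address
puts server.dns_name

# commands run as the 'ubuntu' user that bootstrap set up
server.ssh(['whoami', 'uname -a']).each {|result| puts result.stdout}

# shut the server down when you are finished
server.destroy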
###Mocking out Compute
You can also start any of these scripts with
Fog.mock!
or launch the interactive fog tool in mock mode from the command line:
$ FOG_MOCK=true fog
In mock mode commands run as a local simulation, so no cloud resources are ever consumed and things operate much faster. Not everything has mocks written for it yet, but if you run up against those edges errors will be raised to quickly alert you that you are entering not-yet-mocked territory. The functionality that has been mocked is exercised by the same test suite as the real code, so you should feel confident that its behavior is consistent.
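For example, here is a sketch of a script that exercises the AWS flow entirely in memory. The credentials are made up, since the mocks never talk to Amazon, and it assumes the calls you use have mocks written (as noted above, not everything does yet).
require 'rubygems'
require 'fog'

# simulate everything locally; no real cloud resources are created
Fog.mock!

# any credentials will do in mock mode (these are made up)
connection = Fog::Compute.new({
  :provider              => 'AWS',
  :aws_secret_access_key => 'MOCK_SECRET',
  :aws_access_key_id     => 'MOCK_KEY_ID'
})

server = connection.servers.create
server.wait_for { ready? }
puts server.id
server.destroy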
###Cleaning up
To cover your tracks it is a good idea to check for running servers and shut them down. Here is one way you might do that.
connection.servers.each {|server| server.destroy if server.ready?}
###Other Providers
Rackspace and Amazon are just the tip of the iceberg. fog also supports compute offerings from BlueBox, Brightbox, Terremark, GoGrid, Linode, Slicehost, Storm on Demand and Voxel. Once you wrap your head around that you can also check out fog’s many supported storage, DNS and other services. With familiar commands throughout it is easy to pick up and try new services. Quickly your toolbox runneth over and you should have all the power you need to innovate.
###Summary
Compute can be tricky, but the abstractions in fog make it much easier to get started. With your servers up and running you can then focus on the task at hand and get some work done. Congratulations on adding a new tool to your arsenal. Let us know what we can do better.
Share your thoughts with @engineyard on Twitter