Migrate Redis To A New Server Without Downtime

There comes a time when you wish to move your Redis system from one server to another. At CodePen we recently did this to move our Redis server to a bigger box. Below I’ll outline the steps we followed to move our Redis instances without downtime.

Throughout this walkthrough, we’ll refer to the following terms.

  • current master – your currently running redis box.
  • candidate master – where you’ll be migrating to.
  • current slave – your currently running redis slave box.
  • candidate slave – the box that will replicate from your candidate master.

This process will take you about 2 hours with setup. The data migration takes a few minutes.

Step 1: Set up the two boxes.

We used the chef-redis cookbook to build redis from source. The role with the appropriate attributes overridden follows below.

name "redis"
description "redis"

run_list 'recipe[redis::server]'

override_attributes "redis" =>
  {
    'install_type' => 'source',
    'symlink_binaries' => true,
    'init_style' => 'init',
    'config' => {'bind' => '0.0.0.0'}
  }

One thing worth noting: you’ll need to fork and merge this pull request to allow the redis-server and redis-cli binaries to symlink properly. If that pull request has already been accepted upstream, you can skip the fork and safely proceed.

Step 2: Set up replication on the candidate slave.

After having recently set up chained replication on a MySQL server, configuring replication for redis was quite refreshing. To do so, log onto your candidate slave and issue the following command, remembering to change the placeholder below to the real candidate master IP.

redis-cli
slaveof <candidate_master> 6379

Data should not yet exist on your candidate master, but if for some reason it does, you can verify that the master and slave are synced by issuing the following command.

redis-cli info | grep '# Replication' -A 10

Then look for master_sync_in_progress:0 in the output.

# Replication
role:slave
master_host:ec2-50-***.us-west-2.compute.amazonaws.com
master_port:6379
master_link_status:up
master_last_io_seconds_ago:1
master_sync_in_progress:0
slave_priority:100
slave_read_only:1
connected_slaves:0

Step 3: Start the migration.

Now it’s time to move your data. The complexities of this operation are described in a great article by Garantia Data and are abstracted away with a fantastic script they wrote to make your life great. The script is short and, being python, very readable. Understand the concepts and proceed.

To summarize, the script will

  • sync master to candidate master, showing progress
  • prompt you to turn off the read only flag on the candidate master
  • wait for you to re-point your applications to the new candidate master
  • prompt you to make the candidate master into a master

The script can move a whole fleet of redis instances if you have them.
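
If you’re curious what those prompts translate to, the manual equivalents are plain redis-cli commands. Here’s a rough sketch (the script sequences these for you, with safety checks):

# start the initial sync (the script does this for you)
redis-cli -h <candidate_master> slaveof <current_master> 6379

# allow writes on the candidate master before re-pointing your apps
redis-cli -h <candidate_master> config set slave-read-only no

# promote the candidate master once all apps are re-pointed
redis-cli -h <candidate_master> slaveof no one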

So, let’s get started. On a server with access to the master and candidate master, get the script

curl https://raw.github.com/GarantiaData/redis-migrate/master/redis-migrate.py > redis-migrate.py
chmod +x redis-migrate.py

and install the python dependencies

easy_install pip
pip install redis

Then run the script

./redis-migrate.py --src <current_master> --dst <candidate_master>

And follow the on-screen instructions.

After you cut over the applications from the current master to the candidate master, and before you stop synchronization from your master to your slave, check that no clients are still connecting to the current master. If you find lingering connections, make sure you understand what they are doing. In other words, be sure you have re-pointed all your apps. The command below lists the distinct client IPs still connected.

redis-cli client list | awk '{print $1}' | sed s/addr=//g | sed s/:.*//g | sort | uniq

And that’s it. We moved about 1.5G of cache in a few seconds because Redis is fast!


Painless AWS Autoscaling With EBS Snapshots And Capistrano Part 3

A Three Part Series:

This is part three of a series designed to get your auto scaling environment running.

Review

In the last part of this series, we reviewed a bunch of scripts used to deal with properly snapshotting and mounting volumes. In this part we’ll get our auto scaling system set up in AWS, then give a high-level run-through of what you need to do to complete your setup. We’ll review these scripts:

The Scripts

  1. aws_create_lb.sh – a bash script for creating a load balancer.
  2. aws_create_launch_config.sh – a bash script for creating launch configs.
  3. aws_create_autoscaling_group.sh – a bash script for creating an autoscaling group.
  4. aws_create_scaling_policies.sh – a bash script for creating policies and alarms.

Finally, I’ll tie together the whole process, referring to scripts as I go.

Before you start

The scripts that we’re about to execute will work out of the box, but have some very codepen-specific stuff listed in them. You’ll probably want to do your own naming of policies, load balancers, and so on.

Also, I’m not going to go into great detail about the AWS creation scripts. The most useful article I found on the topic is on the Cardinal Path blog. I followed those instructions until I understood the process well enough to build my own.

AWS Autoscaling Creation Scripts

I wrote bash scripts to automate the creation of my autoscaling setup. Let’s review each in turn below.

aws_create_lb.sh – It is pretty obvious what’s happening in this script. Be sure to change the CERT_ID variable and the LB_NAME to something that makes sense for you.
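
Under the hood, creating the load balancer boils down to a single call to the old ELB command line tools. A minimal sketch, assuming an HTTPS listener forwarding to port 80 (LB_NAME and CERT_ID here are placeholders standing in for the script’s variables):

LB_NAME="codepen-lb"  # placeholder; use your own name
CERT_ID="arn:aws:iam::123456789012:server-certificate/your-cert"  # placeholder ARN

elb-create-lb "$LB_NAME" \
  --listener "protocol=https,lb-port=443,instance-port=80,cert-id=$CERT_ID" \
  --availability-zones us-west-2c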

aws_create_launch_config.sh – Here we’re building our launch config. Be aware that the USER_DATA_FILE here is the one we created in part 2 of this walkthrough. Find the source here.

aws_create_autoscaling_group.sh – Again, boilerplate stuff.

aws_create_scaling_policies.sh – Note that I’m creating four things in this script: two policies and two metric alarms. To keep things DRY, I wrote a function and overrode the global variables twice, like so:

echo "scale up policy"
POLICY_NAME="scale-up"
ADJUSTMENT=1
SCALE_UP=$(add_policy)

echo "scale down policy"
POLICY_NAME="scale-down"
ADJUSTMENT="-1"
SCALE_DOWN=$(add_policy)

In this case, add_policy is a function that we call twice.
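
If you’d like a feel for what such a function contains, here’s a minimal sketch using the old Auto Scaling CLI tools (AS_GROUP and COOLDOWN are hypothetical names; see the real script for the full argument list):

AS_GROUP="codepen-asg"  # hypothetical group name
COOLDOWN=300            # seconds to wait between scaling actions

add_policy () {
  # as-put-scaling-policy prints the new policy ARN on stdout,
  # which is what the $(add_policy) calls above capture
  as-put-scaling-policy "$POLICY_NAME" \
    --auto-scaling-group "$AS_GROUP" \
    --adjustment="$ADJUSTMENT" \
    --type ChangeInCapacity \
    --cooldown "$COOLDOWN"
}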

High Level Overview

I’ve thrown a lot of information at you all at once here and it’s time to review the setup details from a high level.

  • Create a new instance. Install your webserver and get Capistrano pushing code.
  • Bootstrap your node with Chef, if you’re using it.
  • Create an AMI to work with.
  • Launch a fresh AMI to work through your process.
  • Manually set up your mount and volume for the first time, and snapshot it, as described in part 1.
  • Put snapshot.py and prep_instance.py onto your production AMI.
  • Add the deploy:snapshot task into your deploy.rb.
  • Put utils.rb into your config dir in your rails setup.
  • Add production.rb to your Capistrano multistage setup at config/environments.
  • Keep chef_userdata.sh and userdata.sh handy for when you work with aws_create_launch_config.sh.
  • Review the autoscaling recipe to see where all these files are stored on your production instance.

Conclusion

Setting up an environment that scales based on load is cumbersome if you must constantly build AMIs every time your code changes. But, as you’ve seen, you don’t have to bake an entire AMI to take advantage of autoscaling. The instructions listed in this series make Capistrano deployment a natural part of your autoscaling process.


Painless AWS Autoscaling With EBS Snapshots And Capistrano Part 2

A Three Part Series:

This is part two of a series designed to get your auto scaling environment running.

Catching Up

In the last part of this series, we did a bunch of manual key mashing to take our first snapshot. This gives us the foundation we need to automate the process. In this part we will review the scripts required to make auto scaling work as expected. Also, at the end of this post, I’ll share the Chef recipe used to install all the scripts described here.

The Scripts

  1. snapshot.py – a python script to snapshot a volume on deploy.
  2. deploy:snapshot – a Capistrano task used to call snapshot.py.
  3. prep_instance.py – a python script to mount a volume from the most recent snapshot and tag the instance.
  4. utils.rb – a ruby script used during cap deploy to get instance DNS names by tags.
  5. production.rb – the file used by Capistrano multistage to get a list of servers.
  6. chef_userdata.sh – a userdata script for bootstrapping chef.
  7. userdata.sh – a userdata script that does not include chef bootstrapping.
  8. autoscaling – a chef recipe used to set up all the scripts above.

Before you start

Let’s review the tool set we’ll be working with. So far, we’ve used Bash and the AWS Command Line Tools and we’ve done just fine. We’ll still use bash to stitch together our scripts, but in the next few steps we’ll be using both Python and Ruby to accomplish our goals. I find Python to be more expressive and capable than Bash when dealing with lots of variables that need to be type checked and have default values. Plus Ruby is a good fit for Capistrano. So, we’ll be using the boto libraries on the server side, and the AWS SDK for Ruby on the client (Capistrano) side.

I use Chef to manage my dependencies, but if you’re doing this by hand, the AMI on which these scripts will run must have the boto libraries pre-installed. To do so, you can issue the following statements.

sudo apt-get install python-setuptools
sudo easy_install pip
sudo pip install boto

Also, boto expects a .boto file to exist in the home directory of the user who executes these scripts. We’ll set the BOTO_CONFIG variable in our driver script later on in this post.

The rest of this part will describe the files you need and what they do.

snapshot.py

source

The Python script we review here will:

  1. Given an instance ID, look up the volume attached to a device and take a snapshot of it.
  2. Tag the snapshot, so that future scripts can query the tags.

This script is called at deploy time so that the most recent code is always ready to mount on an auto scaling instance.

The parsed_args method at the top of the script does a decent job of describing its default values. You’ll probably want to change the --tag argument to match your organization’s needs.

In the main method we do all our work. The line:

vols = conn.get_all_volumes(filters={'attachment.instance-id': args.instance_id})

drives this little app. We search for volumes attached to the instance ID of the calling box.

Then we iterate over the volumes, searching for the mount point (device) we set up earlier. Once found, we tell the script to create the snapshot and add the tag.

snap = code_volume.create_snapshot(snapshot_description(code_volume, args.instance_id))
snap.add_tag('Name', args.tag)

And that’s it. We’ll use this script later on in our automation.

deploy:snapshot

The Capistrano task below calls snapshot.py on deployment.

desc "Take a snapshot of the codepen volume for future autoscaling needs"
task :snapshot do
  run "BOTO_CONFIG=/home/deploy/.boto /home/deploy/snapshot.py"
end

Notice the presence of the BOTO_CONFIG environment variable. The boto library provides documentation for the appropriate keys to add to this INI-style file.
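
For reference, a minimal .boto file uses the standard boto credential keys and looks something like this (substitute your own values):

cat > /home/deploy/.boto <<'EOF'
[Credentials]
aws_access_key_id = <your_key_here>
aws_secret_access_key = <your_secret_here>
EOF
chmod 600 /home/deploy/.boto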

Finally, remember to add the snapshot task to your after_deploy hooks in your Capistrano deploy.rb.

after :deploy, "deploy:snapshot"

prep_instance.py

source

The script we’ll review here will, given a tag, search for the most recent snapshot, create a volume and mount it. Furthermore, the script will apply tags to the instance itself. We’ll use these tags in our Capistrano ruby script.

As with the other Python script, there is a parsed_args method that defines the default values we’ll need. The help section of each argument describes its default. The pair that needs a bit more explanation is device_key and device_value. As you may recall from Step 4 of part one of this series, device names can differ between AWS and your OS. These two arguments compensate for that fact.

Some interesting parts of the code include wait_fstab and wait_volume. Both deal with the fact that calls to create volumes and snapshots, and to attach devices, are asynchronous. So, we must poll the API waiting for the status we expect. For example, in the snippet below, our script sleeps for up to 60 seconds until the status we want appears. If it never does, the script raises an exception.

def wait_volume(conn, args, volume, expected_status):
    # poll the EC2 API until the volume reaches expected_status,
    # checking every 2 seconds for up to 60 seconds
    volume_status = 'waiting'
    sleep_seconds = 2
    sleep_intervals = 30
    for counter in range(sleep_intervals):
        print 'elapsed: %s. status: %s.' % (sleep_seconds * counter, volume_status)
        conn = ec2.connect_to_region(args.region)
        volume_status = conn.get_all_volumes(volume_ids=[volume.id])[0].status
        if volume_status == expected_status:
            break
        time.sleep(sleep_seconds)

    if volume_status != expected_status:
        raise Exception('Unable to get %s status for volume %s' % (expected_status, volume.id))

    print 'volume now in %s state' % expected_status

utils.rb

source

This tool grabs all instance DNS names from AWS. We use this in the Capistrano multistage production.rb to get an array of DNS names. It is pretty self-explanatory. Since this script will be distributed to your developers, it would probably be a good idea to lock the credentials down to read-only. You will have to require this in your deploy.rb like so:

require './config/deploy/utils'

Here’s the file itself. This makes deployment nice because it dynamically grabs EC2 Instances tagged with the Role and Environment you specify along with an instance-state-name of running. This guarantees that you’re pushing out to all the servers.

require 'aws-sdk'
require 'awesome_print'

class AwsUtil

  def ec2_object
    # deployer user, has read-only access
    AWS::EC2.new(
      access_key_id: "<your_key_here>",
      secret_access_key: "<your_secret_here>",
      region: 'us-west-2'
    )
  end

  def deployed_app_server_dns_names
    ec2_object.instances.
      filter('tag:Role', 'app').
      filter('tag:Environment', 'production').
      filter('instance-state-name', 'running').
      map {|r| r.dns_name}
  end

  def print_dns_names
    ap deployed_app_server_dns_names
  end
end
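
A quick way to smoke-test the helper from your shell, assuming the aws-sdk and awesome_print gems are installed (run from your project root):

ruby -r ./config/deploy/utils -e 'AwsUtil.new.print_dns_names'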

production.rb

source

The Capistrano multistage extension allows you to specify a file for each deployment target. This script replaces production.rb and calls out to utils.rb to get DNS names.

set :branch, "master"

# tagged:
# Role:app && Environment:'production'
# filtered:
# instance-state-name:running
aws_servers = AwsUtil.new.deployed_app_server_dns_names

role(:app) { aws_servers }
role(:web) { aws_servers }
role :db, aws_servers[0], primary: true

chef_userdata.sh

source

This file will be passed to an autoscale launch config.

The shebang line uses the -ex args to instruct bash to exit on error and to be very verbose when executing. This is super-handy for debugging your user data script.

#!/bin/bash -ex

The exec call redirects standard out and error to three different places.

exec > >(tee /var/log/user-data.log|logger -t user-data -s 2>/dev/console) 2>&1

We slightly shorten the DNS name and assign it to the EC2_HOST variable.

EC2_HOSTNAME=`ec2metadata --public-hostname`
EC2_HOST=`echo $EC2_HOSTNAME | cut -d. -f1`
EC2_HOST=$EC2_HOST.`echo $EC2_HOSTNAME | cut -d. -f2`

If you’re not using Chef, you can skip the following bits. If you are using Chef, you can bootstrap the node this way: delete the .pem, set up a first-boot.json file, and pass the EC2_HOST variable to the client.rb file so your Chef node name is useful.

This script also assumes that the Chef libraries are already installed and have been bootstrapped once before.

if [ -a /etc/chef/client.pem ]; then
  rm /etc/chef/client.pem
else
  echo "no pem to delete"
fi
echo '{"run_list":["role[app_server]","recipe[passenger]","recipe[autoscaling]"]}' > /etc/chef/first-boot.json
sed -i "s/node_name \".*\"/node_name \"app_$EC2_HOST\"/g" /etc/chef/client.rb
sudo chef-client -j /etc/chef/first-boot.json

And finally we call userdata.sh.

userdata.sh

source

Ultimately, this is the script that does all the work. It mounts drives as described in Step 4 of part one and then calls prep_instance.py from above.

Although this script is mighty important, we’ve covered all the details elsewhere. Look it over and you’ll recognize parts.
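
For orientation, here’s a heavily condensed sketch of the flow, assuming the paths used throughout this series (the real script at the source link has the details):

#!/bin/bash -ex
# hypothetical condensed flow; not the real script

# find the latest tagged snapshot, create a volume from it,
# attach it, and tag this instance (prep_instance.py, above)
BOTO_CONFIG=/root/.boto /root/prep_instance.py

# apply the fstab entries from part 1: /dev/xvdf on /cp,
# then bind /cp/codepen onto /home/deploy/codepen
mount -a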

autoscaling chef recipe

source

We’ve reviewed a lot of scripts here in this document. You may be wondering where to put them all. Chef to the rescue! Even if you’re not using Chef, the default recipe from my cookbook serves as a great guide for placing these files where you want them.

Here’s an example from the default.rb. In this case /root/.boto is where we’re going to render the boto.cfg.read_only.erb template.

The owner, group and mode functions should make sense to you.

template "/root/.boto" do
  source "boto.cfg.read_only.erb"
  owner "root"
  group "root"
  mode "0660"
end

This pattern is repeated throughout the document.

Fin

You’ve reached the end of this part. So far, you’ve reviewed all the scripts you’ll need to auto scale your environment. In part 3 we’ll look at some bash scripts for setting up your autoscaling rules, and review where all these scripts go.


Painless AWS Auto Scaling With EBS Snapshots And Capistrano

A Three Part Series:

Choices to Make

AWS (Amazon Web Services) auto scaling is a simple concept on the surface: you get an AMI, set up rules, and AWS takes care of the rest. However, actually getting it done is more complicated.

Some choices are worse than others: you could bake an AMI (Amazon Machine Image) before you deploy, but that could add 10 minutes or more to each deployment. Some are dangerous: you could create an AMI after each deploy, but you run the risk that an auto scale event happens before your AMIs are done. Plus, you have a whole variety of AMIs deployed at any given time. Some are similar to what we propose in this tutorial: you could push your code to S3 on each deploy and have user-data scripts that pull it down on each auto scaling event. However you slice it, getting auto scaling to fit into your development workflow in a transparent way takes careful thought and planning.

We recently rolled out the following solution at CodePen. It keeps our AMIs static and our application ready for scaling on EBS (Elastic Block Store) snapshots. We can push code using Capistrano and let a few scripts distribute the ever-changing code base to our fleet of servers. I’d like to share the steps required to make it work. This series of posts will walk you through the steps required to build an auto-scaling infrastructure that stays out of your way.

Overview

The process can be summed up like this:

  • Source is mounted on an EBS Volume.
  • Snapshots are taken on deployment.
  • When AWS scales up, new instances are started from the latest snapshot.
  • Instances are tagged with roles so that deployment scripts always push code to the right servers.

Before you start

This walkthrough assumes that you have a working Capistrano deployment going on AWS. If you need some help with that, the guys at Beanstalk have a great guide for getting started. We use Capistrano Multistage to separate our deployment environments.

Also, it is a good idea to practice this whole setup on a clone of your application environment. Hopefully you’re running your instance on an EBS-mounted root partition so you can simply create an AMI and run these steps in a safe environment.

A functional AWS API Tools environment is a requirement as well, because this walkthrough will use them extensively. Although I do my development on a Mac, I prefer a Linux environment for this type of work. I keep an EBS-backed micro instance around for all my admin work. I found Eric Hammond’s instructions for installing aws command line tools invaluable for this task.

Identifying Your Environments

You’ll be working in two environments for this tutorial.

  1. Workstation Environment – this is where you have the AWS API Tools installed. A micro instance is nice for this.
  2. Instance Environment – this is the instance where you deploy your code. To follow along with this guide, the Instance Environment should have a working Rails environment in the Source directory. In this case, that’s /home/deploy/codepen. Yours will obviously be elsewhere.

I’ll reference these two environments throughout this walkthrough.

Step 1: Create the EBS volume in AWS – Workstation Environment

In this section, we’ll do the EBS legwork to get your code snapshot-ready.

First, let’s identify where your source lives. Capistrano’s deploy.rb defines your Source directory with the :deploy_to setting. We’ll refer to this as your Source from here on out.

# Source
set :deploy_to, "/home/deploy/codepen"

You will mount your source directory on an EBS volume in a process similar to the instructions laid out in the Amazon article Running MySQL on Amazon EC2 with EBS. This is a manual process for now, but we’ll automate this with a script later on in the article.

Let’s create a volume using the command line tools.

VOL_ID=`ec2-create-volume --size 5 --availability-zone us-west-2c | awk '{print $2}'`
echo $VOL_ID

Now let’s poll for your volume status. Repeat the command below until your echo returns available. It is worth noting that AWS calls are asynchronous. This means that even though you asked AWS to create the volume, you can’t use it until its Status becomes available. That’s what we’re doing here.

STATUS=`ec2-describe-volumes $VOL_ID | awk '{print $4}'`
echo $STATUS
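
Or, let a small loop do the polling for you (the same command, wrapped in a retry):

until [ "$STATUS" = "available" ]; do
  sleep 2
  STATUS=`ec2-describe-volumes $VOL_ID | awk '{print $4}'`
  echo "volume status: $STATUS"
done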

Step 2: Get the Instance ID – Instance Environment

You also need your Instance ID in order to mount this volume, so let’s get that. You will need to be logged into the machine.

INSTANCE_ID=`ec2metadata --instance-id`
echo $INSTANCE_ID

I take for granted here that the ec2metadata command is available on Ubuntu Cloud Instances. If you’re using some other flavor of OS, you can do:

INSTANCE_ID=`curl http://169.254.169.254/latest/meta-data/instance-id`

Remember your $INSTANCE_ID for the next section.

Step 3: Mount the Volume – Workstation Environment

In the previous section you got your instance ID. Let’s put that in a variable on the workstation so you can easily access it.

INSTANCE_ID=<your instance id here>

Now, let’s ask AWS to mount this volume to your instance on device /dev/sdf.

ec2-attach-volume $VOL_ID -i $INSTANCE_ID -d /dev/sdf

Step 4: Mount the File Systems to the Volume – Instance Environment

In the previous steps, we attached a volume to an instance. Now we’re on the instance and we’ll associate that volume with the file system.

First, verify that the device exists.

ls /dev/xvdf

It’s worth noting that the device I asked AWS to mount, /dev/sdf, is not the same as the device we’re checking for. Ubuntu uses the prefix xvd instead of sd to enumerate devices. So, we search for /dev/xvdf to see that the ec2-attach-volume call worked.

It can take some time for the device to appear. During that time, the command above could return No such file or directory. Just keep trying.
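
A one-liner can do the waiting for you:

while [ ! -e /dev/xvdf ]; do sleep 2; done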

Now create an xfs filesystem on the device.

sudo apt-get update
sudo apt-get install -y xfsprogs

grep -q xfs /proc/filesystems || sudo modprobe xfs
sudo mkfs.xfs /dev/xvdf

In the calls above, we ask apt to install the xfsprogs package and test that the xfs module is loaded. Then we make the filesystem with the mkfs.xfs command.

We’ll create a script at /tmp/mount.sh that you can grab from here.

Let’s review what it does. The first snippet below echoes our mounting instructions into fstab. We want to mount our device /dev/xvdf to the file system at /cp. Furthermore, we want to bind-mount /cp/codepen/ onto /home/deploy/codepen/. The second mount just acts like a symlink, pointing the home directory of the deploy user to the mounted filesystem. The juicy bits are below.

grep "codepen_fstab_setup" /etc/fstab
if [ $? -eq 1 ]; then
    echo "# codepen_fstab_setup" | tee -a /etc/fstab
    echo "/dev/xvdf /cp xfs noatime 0 0" | tee -a /etc/fstab
    echo "/cp/codepen /home/deploy/codepen     none bind" | tee -a /etc/fstab
fi

Then, the next snippet makes the directories if they don’t exist, and finally calls mount -a. This tells the OS to run the mount command against /etc/fstab, effectively applying the configuration we just set up.

if [ $? -eq 0 ]; then
    if [ ! -d /cp ]; then
        mkdir -m 000 /cp
    fi
 
    if [ ! -d /home/deploy/codepen ]; then
        mkdir -m 000 -p /home/deploy/codepen
    fi
    mount -a
else
    echo FAIL
fi

If you have mounted /dev/xvdf and downloaded and executed mount.sh then you can verify that your devices and directories are mounted and linked by issuing the mount command.

mount

...snip
/dev/xvdf on /cp type xfs (rw,noatime)
/cp/codepen on /home/deploy/codepen type none (rw,bind)

Now you have your source directory hosted on an EBS volume.

Step 5: Verify, Deploy and Snapshot – Workstation Environment

Now your code is ready for deployment. Let’s verify that everything is in place.

cap deploy:check

A hangup here could be permissions. If your code was already deployed to the Source directory, the above steps should have simply linked your code in Source to the /cp/codepen directory. If for some reason this did not happen, you can initialize your deployment now.

cap deploy:setup
cap deploy

With a successful deployment, you’re ready to snapshot.

SNAPSHOT_ID=`ec2-create-snapshot -d "First snapshot" $VOL_ID | awk '{print $2}'`

We’re also going to tag the snapshot. This step is important because during the launch of a new box, we’ll search for the latest snapshot with this tag name and mount it as our Source directory.

ec2-create-tag $SNAPSHOT_ID --tag Name="codepen-app"

Done, for now.

In part 2 of this series, we’ll automate what we did here with a script.


Installing a RapidSSL SSL cert on an AWS Load Balancer

Over at CodePen it came time to renew our SSL cert. I dutifully followed the setup instructions, but I was greeted with this error:

Invalid Public Key Certificate

After talking with the support staff at RapidSSL, I was told to reverse the Intermediate CA Bundle. The example from their instructions looks like this:

-----BEGIN CERTIFICATE-----
Primary Intermediate CA
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Secondary Intermediate CA
-----END CERTIFICATE-----

Needs to be switched to..

-----BEGIN CERTIFICATE-----
Secondary Intermediate CA
-----END CERTIFICATE-----
-----BEGIN CERTIFICATE-----
Primary Intermediate CA
-----END CERTIFICATE-----
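
Once you’ve uploaded the reversed bundle, you can spot-check the chain your load balancer actually serves with openssl (yourdomain.com is a placeholder):

openssl s_client -connect yourdomain.com:443 -showcerts </dev/null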

I’m noting this here so that in 2016, when we have to renew our SSL cert, we’ll know what to do.


Set Up A Static Host-Only Network For Virtualbox

Intro

Setting up a Host-Only Network with Ubuntu Server requires some knowledge of networking. But why accumulate knowledge when you can simply copy snippets from the internet?

Set Up Host-Only Networking

Host-Only Networking is a setting in VirtualBox that allows your host machine to act like a DHCP server for a private network on your machine. Using this setting, you may loom like a god above the private network you create on your garden of nodes. Or, you can just test out some new service… Your choice.

Enable Host-Only Networking

Right-click your virtual machine of choice and open Settings, then click the Network tab. Choose Adapter 2 and then click Enable Network Adapter. Make sure the Name dropdown says vboxnet1. If it does not, click VirtualBox in your menu bar, then Preferences, and then the Network tab, because we’re going to add a new network. Click the Add host-only network button. This will create a new Host-Only network with a gateway of 33.33.33.1. We’ll set our Ubuntu Server up accordingly.

Configure Your Ubuntu Box

Start the box, then issue the following commands:

sudo vi /etc/network/interfaces

Then, make your interfaces file look like this:

# The loopback network interface
auto lo
iface lo inet loopback

# The primary network interface
auto eth0
iface eth0 inet dhcp

auto eth1
iface eth1 inet static
    address 33.33.33.11
    netmask 255.255.255.0
    gateway 33.33.33.1

Then, reboot your machine.

sudo reboot

Verify Your Settings

We want to make sure that the settings you put in place work. To do so, issue this command

ifconfig

And view the resulting settings:

eth0      Link encap:Ethernet  HWaddr 08:00:27:c8:d3:98
          inet addr:10.0.2.15  Bcast:10.0.2.255  Mask:255.255.255.0
          ...Truncated for brevity...

eth1      Link encap:Ethernet  HWaddr 08:00:27:0e:e2:c0
          inet addr:33.33.33.11  Bcast:0.0.0.0  Mask:255.255.255.0
          ...Truncated for brevity...

If you don’t see that, be sure that the Host Only Network you created in the steps above is in the 33.33.33.1 gateway range.
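
To double-check from the host side, ping the static address you just assigned:

ping -c 3 33.33.33.11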

Reading More

  • Ubuntu Network Configuration
  • Accessing Ubuntu Server in a VirtualBox Virtual Machine


How to manage your dotfiles with git

Update

The blog post below is over-simplified. You should follow the steps outlined by the vimcast guys.

Objective

Programs like vim, bash, and zsh all use dotfiles for configuration. You want to back them up in case of disaster. Here’s how I handle that using a .dotfiles directory and symlinks.

Where Do My dotfiles Live?

By default vim, bash, zsh and other programs store dotfiles in your home directory. You can view the dotfiles in your home directory like so:

cd
ls -al

Vim As An Example

In the following steps, you’ll learn how to back up your Vim configuration to a directory named .dotfiles.

To get started, create your .dotfiles directory.

cd
mkdir -p .dotfiles/vim

Note: The -p option tells mkdir to create the directory recursively, building the entire path if it does not exist.

Now, move your .vim and .vimrc files to your .dotfiles directory.

mv .vimrc .vim .dotfiles/vim

Finally, symlink the files and folders you just moved back to their original location.

cd
ln -s .dotfiles/vim/.vimrc .vimrc
ln -s .dotfiles/vim/.vim .vim

Back It Up

Remember to use whatever source control system you like to back up your .dotfiles directory. I prefer git.

cd ~/.dotfiles
git init
git add .
git commit -a -m 'My first dotfile commit'

Chef Recipe To Upgrade Virtualbox Additions

Every time the guys at VirtualBox update their software, you have to scramble to find resources to upgrade your VirtualBox Guest Additions. You also get the following annoying message.

[default] The guest additions on this VM do not match the installed version of
VirtualBox! This may cause things such as forwarded ports, shared
folders, and more to not work properly. If any of those things fail on
this machine, please update the guest additions and repackage the
box.

To prevent this from being a hassle, I created this chef recipe to help ease our suffering.
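
If you’re curious what the recipe automates, the manual upgrade amounts to something like this (a sketch; the version is an example and should match the VirtualBox version on your host):

VBOX_VERSION=4.2.12  # example; match your host's VirtualBox version

wget http://download.virtualbox.org/virtualbox/$VBOX_VERSION/VBoxGuestAdditions_$VBOX_VERSION.iso
sudo mount -o loop VBoxGuestAdditions_$VBOX_VERSION.iso /mnt
sudo /mnt/VBoxLinuxAdditions.run
sudo umount /mnt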

You will probably have to restart your vagrant box for this to work. I’m not 100% sure.


Command-Line Resources

SSH Tips

This excellent article entitled Tips for Remote Unix Work covers some vital SSH goodness. For example, copying your public ssh key

ssh user@example.com 'mkdir -p .ssh && cat >> .ssh/authorized_keys' < ~/.ssh/id_rsa.pub

and piping commands via SSH without logging into the remote machine

cd && tar czv src | ssh example.com 'tar xz'

How to test out a shared vagrant box

Intro

At some point, someone will offer to share a vagrant box with you. These are the steps required to get that box working.

Create a Working Folder

We’ll need a place to house the .box file and a way to start it up, so create the directory and use the vagrant gem’s init call, which will make a VagrantFile for you.

mkdir WorkingFolder
cd WorkingFolder
vagrant init

Download the .box File

Put the .box file into your Working Directory. For this exercise, we’ll call it sharedBox.box.

Add The Box to Vagrant’s Box Cache

The command below will import your .box file.

cd WorkingFolder
vagrant box add shared_box sharedBox.box

Importing a box file will copy it to your ~/.vagrant.d/boxes folder. To prove this, run the ls command.

ls ~/.vagrant.d/boxes
shared_box

Notice that the shared_box argument to the box add command produces a shared_box file in your ~/.vagrant.d/boxes directory. Now, when dealing with this box in vagrant, you’ll refer to it as shared_box. So, you can safely delete the sharedBox.box file from your Working Directory.

rm sharedBox.box

Edit the VagrantFile

In order to start the vagrant box, you’ll need to reference it in your VagrantFile. Using your editor, change

config.vm.box = "base"

to

config.vm.box = "shared_box"

Now when you tell vagrant to start, you’ll be referring to the shared_box.

All Done

With these steps in place, you’re ready to start vagrant with the vagrant up command.
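
For example:

cd WorkingFolder
vagrant up
vagrant ssh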