This is part three of a series designed to get your auto scaling environment running.
In the last part of this series, we reviewed the scripts that handle snapshotting and mounting volumes. In this part, we'll get our auto scaling system set up in AWS, then give a high-level run-through of what you need to do to complete your setup. We'll review these scripts:
aws_create_lb.sh – a bash script for creating a load balancer.
aws_create_launch_config.sh – a bash script for creating launch configs.
aws_create_autoscaling_group.sh – a bash script for creating an autoscaling group.
aws_create_scaling_policies.sh – a bash script for creating policies and alarms.
Finally, I’ll tie together the whole process, referring to scripts as I go.
Before you start
The scripts that we're about to execute will work out of the box, but they contain some very CodePen-specific values. You'll probably want to do your own naming of policies, load balancers, etc.
Also, I'm not going to go into great detail about the AWS creation scripts. The most useful article I found on the topic is on the Cardinal Path blog. I followed his instructions until I understood the process well enough to build my own.
AWS Autoscaling Creation Scripts
I wrote bash scripts to automate the creation of my autoscaling setup. Let’s review each in turn below.
aws_create_lb.sh – It's pretty obvious what's happening in this script. Be sure to change the CERT_ID variable and the LB_NAME to something that makes sense for you.
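To show the shape of such a script, here's a minimal sketch assuming the modern AWS CLI (the original may have used the older ELB command line tools); the load balancer name, certificate ARN, zones, and the DRY_RUN guard are all placeholder assumptions, not CodePen's actual values:

```shell
#!/bin/bash
# Hypothetical sketch of an aws_create_lb.sh-style script.
LB_NAME="my-web-lb"                                              # your lb name
CERT_ID="arn:aws:iam::123456789012:server-certificate/my-cert"   # your SSL cert ARN
ZONES="us-east-1a us-east-1b"                                    # your zones

run() {
  # Echo the command instead of executing it when DRY_RUN=1 (the default),
  # so the sketch is safe to read and run without AWS credentials.
  if [[ "${DRY_RUN:-1}" == "1" ]]; then echo "+ $*"; else "$@"; fi
}

# Create the load balancer with an HTTP listener and an HTTPS listener
# that terminates SSL at the lb and forwards plain HTTP to the instances.
run aws elb create-load-balancer \
  --load-balancer-name "$LB_NAME" \
  --listeners "Protocol=HTTP,LoadBalancerPort=80,InstanceProtocol=HTTP,InstancePort=80" \
              "Protocol=HTTPS,LoadBalancerPort=443,InstanceProtocol=HTTP,InstancePort=80,SSLCertificateId=$CERT_ID" \
  --availability-zones $ZONES   # unquoted on purpose: each zone is a separate argument
```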
aws_create_autoscaling_group.sh – Again, boilerplate stuff.
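For reference, an autoscaling group creation boils down to one call tying the launch config and load balancer together. This is a hedged sketch using the modern AWS CLI; the group name, launch config name, sizes, and zones are placeholders you'd replace with your own:

```shell
#!/bin/bash
# Hypothetical sketch of an aws_create_autoscaling_group.sh-style script.
AS_GROUP="my-web-group"            # your group name
LAUNCH_CONFIG="my-launch-config"   # created earlier by aws_create_launch_config.sh
LB_NAME="my-web-lb"                # the lb created by aws_create_lb.sh

run() {
  # Echo the command instead of executing it when DRY_RUN=1 (the default),
  # so the sketch can be run without AWS credentials.
  if [[ "${DRY_RUN:-1}" == "1" ]]; then echo "+ $*"; else "$@"; fi
}

# Create the group: instances launch from $LAUNCH_CONFIG, register with
# $LB_NAME, and are health-checked through the lb after a grace period.
run aws autoscaling create-auto-scaling-group \
  --auto-scaling-group-name "$AS_GROUP" \
  --launch-configuration-name "$LAUNCH_CONFIG" \
  --load-balancer-names "$LB_NAME" \
  --min-size 2 --max-size 8 \
  --health-check-type ELB --health-check-grace-period 300 \
  --availability-zones us-east-1a us-east-1b
```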
aws_create_scaling_policies.sh – Note that I'm creating four things in this script: two policies and two metric alarms. To keep things DRY, I wrote a function and overrode the global variables twice.
In this case, add_policy is a function that we call twice.
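The override-and-call pattern can be sketched like this. It's a minimal reconstruction assuming the modern AWS CLI and a CPU-based metric; the group, policy, and alarm names, thresholds, and cooldowns are all placeholder assumptions:

```shell
#!/bin/bash
# Hypothetical sketch of the add_policy pattern in aws_create_scaling_policies.sh.
AS_GROUP="my-web-group"

run() {
  # Echo the command instead of executing it when DRY_RUN=1 (the default),
  # so the sketch can be run without AWS credentials.
  if [[ "${DRY_RUN:-1}" == "1" ]]; then echo "+ $*"; else "$@"; fi
}

add_policy() {
  # Create the scaling policy. A real script would capture the policy ARN
  # that put-scaling-policy returns and pass it to the alarm's --alarm-actions.
  run aws autoscaling put-scaling-policy \
    --auto-scaling-group-name "$AS_GROUP" \
    --policy-name "$POLICY_NAME" \
    --scaling-adjustment "$ADJUSTMENT" \
    --adjustment-type ChangeInCapacity --cooldown 300

  # Wire a CloudWatch alarm on average CPU to trigger the policy.
  run aws cloudwatch put-metric-alarm \
    --alarm-name "$ALARM_NAME" \
    --metric-name CPUUtilization --namespace AWS/EC2 \
    --statistic Average --period 300 --evaluation-periods 2 \
    --threshold "$THRESHOLD" --comparison-operator "$OPERATOR" \
    --dimensions "Name=AutoScalingGroupName,Value=$AS_GROUP"
}

# First pass: scale up when CPU runs hot...
POLICY_NAME="scale-up"; ALARM_NAME="cpu-high"
ADJUSTMENT=1; THRESHOLD=70; OPERATOR=GreaterThanThreshold
add_policy

# ...second pass: override the globals and scale back down when CPU is low.
POLICY_NAME="scale-down"; ALARM_NAME="cpu-low"
ADJUSTMENT=-1; THRESHOLD=30; OPERATOR=LessThanThreshold
add_policy
```

Overriding globals between calls keeps the two policy/alarm pairs from diverging: the up and down paths share one code path and differ only in the handful of values that actually change.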
High Level Overview
I’ve thrown a lot of information at you all at once here and it’s time to review the setup details from a high level.
- Create a new instance. Install your webserver and get Capistrano pushing code.
- Bootstrap your node with Chef, if you’re using it
- Create an AMI to work with
- Launch a fresh AMI to work through your process
- Manually set up your mount and volume for the first time, and snapshot it, as described in part 1.
- Copy prep_instance.py onto your production AMI.
- Add the deploy:snapshot task into your config dir in your rails setup.
- Add production.rb to your Capistrano multistage setup.
- Keep userdata.sh handy for when you work with launch configs.
- Review the autoscaling recipe to see where all these files are stored on your production instance.
Setting up an environment that scales based on load is cumbersome if you must constantly build AMIs every time your code changes. But, as you've seen, you don't have to bake an entire AMI to take advantage of autoscaling. The instructions listed in this series make Capistrano deployment a natural part of your autoscaling process.