The Test/Development Environment
One of those things that is always difficult is putting together the right test environment. Traditionally the only real answers have been purchasing a separate set of hardware, a separate datacenter, a developer's “local copy,” or something as cheesy as different vhosts on the same machine. Most of those are grossly inefficient as well as expensive. At best they're “ok” (because you aren't going to put any *real* power into this arena when you have mission critical applications to deploy), and at worst they're downright dangerous (how many people do you know who have *never* accidentally edited the wrong file in the wrong folder?).
Enter Amazon, whisking you away to a magical place where it's possible to throw as much horsepower as necessary at a test environment. Where it's possible to wipe and reload the entire thing in a *very* short period of time. And where you only have to pay for it when it's being used. With Amazon EC2 there is no longer any reason a web app company cannot afford a truly useful test/development environment.
Do you worry that something is going to happen to your data? Is the only backup copy of your working code on one of those CD-Rs sitting in the “server closet”… was it taken more than 3 months ago? If you answered yes to any of these questions, Amazon EC2 is right up your alley. With a couple of public keys and rsync you can put the power of secure backups to work for you! But wait, there's more! Once the backup has completed, schedule a shutdown command, and only pay for what you need! Pretend you have 100GB of data to back up… it changes at 5% per day, and takes only 4 hours to run the backup remotely… If you set up a once-per-month full copy and perform incremental backups daily, you're looking at somewhere in the neighborhood of $130 per month. Beat that with a tape drive… And you don't outgrow the EC2+S3 combo in 3 months of prolific Word document writing like you would that last tape drive.
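The moving parts above can be sketched in a few lines of shell. This is only a sketch: the host name, key path, and directories are placeholders you'd substitute with your own, and the commands are printed here rather than executed.

```shell
#!/bin/sh
# Hypothetical backup run: rsync the changed data to an EC2 instance over
# ssh (public-key auth), then halt the instance so the hourly meter stops.
BACKUP_HOST="backup.example.com"   # placeholder EC2 hostname
SSH_KEY="$HOME/.ssh/backup-key"    # placeholder key path
SRC="/var/data/"
DEST="/mnt/backup/data/"

# Incremental: rsync only ships the ~5% of the data that changed today.
BACKUP_CMD="rsync -az --delete -e 'ssh -i $SSH_KEY' $SRC $BACKUP_HOST:$DEST"

# Once the copy finishes, power the instance off -- pay only for hours used.
HALT_CMD="ssh -i $SSH_KEY $BACKUP_HOST 'sudo shutdown -h now'"

echo "$BACKUP_CMD"
echo "$HALT_CMD"
```

Dropped into cron, the same script handles the daily incrementals; the monthly full copy is just the first run against an empty destination.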
Closely coupled with the development environment is version control: the heart and soul of any source-based application, and there's no reason not to put it on something like an EC2 server. The bandwidth usage is minimal. The guaranteed-backed-up aspect is a huge load off the restless mind, and the next time you have to run reports on the last three hundred and seventy five thousand revisions… it doesn't bring your real (or test) environment screeching to a rather annoying halt!
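As a concrete (and entirely hypothetical) sketch, here's what that looks like with Subversion riding over the same ssh keys; the host and repository path are made-up placeholders, and the commands are echoed rather than run:

```shell
#!/bin/sh
# Placeholder host and repo path -- substitute your own EC2 instance.
EC2_HOST="svn.example.com"
REPO="/var/svn/myproject"

# On the EC2 instance: create the repository once.
echo "svnadmin create $REPO"

# From any developer machine: check out over ssh, no extra daemon needed.
echo "svn checkout svn+ssh://$EC2_HOST$REPO myproject"
```

No svnserve or Apache to configure: svn+ssh tunnels everything through the ssh keys you already set up for backups.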
You took the time to set up a nice trouble ticketing system for both your customers and your employees. It's customized to the teeth. It rocks. It rolls. It just went down with the rest of your infrastructure when you blew a circuit breaker. Or did it? Not if you moved this piece of mission critical (and resource light) infrastructure to EC2. You'll basically be able to run this puppy at $0.10 per hour, because the traffic will cost you… what… $3 a month? Change. And the ability to still communicate with all your customers during a major outage? Priceless.
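The back-of-envelope math, assuming the $0.10/hour rate above and keeping the instance up around the clock:

```shell
#!/bin/sh
# Always-on instance: 24 hours x 30 days at 10 cents/hour, in integer cents.
HOURS=$((24 * 30))          # 720 hours in a month
CENTS=$((HOURS * 10))       # 7200 cents
echo "instance: \$$((CENTS / 100))/month"
# ...plus the ~$3/month of bandwidth a lightweight ticketing app generates.
```

That prints `instance: $72/month`: call it $75 all-in for a ticketing system that survives your datacenter's bad days.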
Monitoring and Alerting
For roughly the same reasons as noted above, using EC2 for monitoring your mission critical applications is the right thing to do. Not only do you have third party verification of availability, you have an off-network source for things like throughput and responsiveness. If your monitor sends SMS through your mail server… and your mail server just died… you won't hear a peep out of your phone until someone brings it back up. And then when it *is* back up, not only do you catch flack for not noticing… but all holy hell breaks loose on your phone as 400 SMS messages come in from last night's backlog. Do yourself (and your organization) a favor… EC2 + Nagios + Mon = peace of mind (throw in Cacti and you have answers to the dreaded question: “so how is our performance lately, anyhow?”). Plus, if you use something like UltraDNS, which offers a programmatic API for their services, you can use Amazon as the trigger for moving from one set of servers to the other. Wonderful!
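A skeleton of the off-network check, with the parts that vary stubbed out: the URL is a stand-in, and `notify` is a placeholder for whatever pager hook you'd really use (an SMS gateway reachable from EC2, crucially one that doesn't route through your own, possibly dead, mail server).

```shell
#!/bin/sh
# Hypothetical availability check, run from the EC2 monitoring box via cron.

notify() {
    # Stand-in for a real alert hook (e.g. an SMS gateway's API).
    echo "ALERT: $1 unreachable"
}

check() {
    url="$1"; probe="$2"
    if $probe >/dev/null 2>&1; then
        echo "OK: $url"
    else
        notify "$url"
    fi
}

# Demo with stand-in probes: 'true' simulates a healthy site, 'false' a dead one.
check "http://www.example.com/" true
check "http://www.example.com/" false
```

In production the probe would be something like `curl -sf --max-time 10 "$url"` (or a proper Nagios plugin), and the check would run every few minutes from the EC2 side of the fence.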