Ooh, here's something non-regurgitated about EC2… and an attempt to frame it against the rest of the Amazon Web Services lineup!
5 Excellent Uses for Amazon EC2
The Test/Development Environment
One of those things which is always difficult is putting together the right test environment. Traditionally the only real answers have been purchasing a separate set of hardware, a separate datacenter, a developer's "local copy," or something as cheesy as different vhosts on the same machine. Most of those are grossly inefficient as well as expensive. At best they're "ok" (because you aren't going to put any *real* power into this arena when you have mission critical applications to deploy) and at worst they're downright dangerous (how many people do you know who have never accidentally edited the wrong file in the wrong folder?)
Enter Amazon, whisking you away to a magical place in which it's possible to throw as much horsepower as necessary at a test environment. Where it's possible to wipe and reload the entire thing in a *very* short period of time. And where you only have to pay for it when it's being used. With Amazon EC2 there is no longer any reason that a web app company cannot afford a truly useful test/development environment.
Offsite Backups
Do you worry that something is going to happen to your data? Is the only backup copy of your working code on one of those CD-Rs sitting in the "server closet"… was it taken more than 3 months ago? If you answered yes to any of these questions, Amazon EC2 is right up your alley. With a couple of public keys and rsync you can put the power of secure backups to work for you! But wait, there's more! Once the backup has completed, schedule a shutdown command, and only pay for what you need! Pretend you have 100GB of data to back up… it changes at 5% per day, and the backup takes only 4 hours to run remotely… If you set up a once-per-month full copy and perform incremental backups daily, you're looking at something in the neighborhood of $130 per month. Beat that with a tape drive… And you don't outgrow the EC2+S3 combo in 3 months of prolific Word document writing like you would that last tape drive 🙂
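To put rough numbers on that, here's the back-of-the-envelope math as a script. The rates ($0.10/instance-hour, $0.20/GB of transfer, $0.15/GB-month of S3 storage) are my assumptions from the 2006 price lists, so treat the total as a ballpark sketch rather than a quote; depending on what you assume, you land somewhere south of that $130 figure.

```shell
#!/bin/bash
# Rough monthly cost sketch for the backup scheme above.
# Assumed rates (2006-era): $0.10/instance-hour, $0.20/GB transfer,
# $0.15/GB-month of S3 storage. All arithmetic in integer cents.
data_gb=100        # size of a full backup
change_pct=5       # percent of data changed per day
hours_per_run=4    # backup window; instance shut down afterwards
days=30

incr_gb=$(( data_gb * change_pct / 100 ))     # 5 GB per daily incremental
transfer_gb=$(( data_gb + incr_gb * days ))   # one full + daily incrementals
instance_cents=$(( hours_per_run * days * 10 ))
transfer_cents=$(( transfer_gb * 20 ))
storage_cents=$(( data_gb * 15 ))
total_cents=$(( instance_cents + transfer_cents + storage_cents ))

echo "approx: \$$(( total_cents / 100 ))/month"
```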
Version Control
Closely coupled with the development environment: version control. It's the heart and soul of any source-based application, and there's no reason not to put it on something like an EC2 server. The bandwidth usage is minimal. The guaranteed-backed-up aspect is a huge load off the restless mind, and the next time you have to run reports on the last three hundred and seventy-five thousand revisions… it doesn't bring your real (or test) environment screeching to a rather annoying halt!
Trouble Ticketing
You took the time to set up a nice trouble ticketing system for both your customers and your employees. It's customized to the teeth. It rocks. It rolls. It just went down with the rest of your infrastructure when you blew a circuit breaker. Or did it? Not if you moved this piece of mission critical (and resource light) infrastructure to EC2. You'll basically be able to run this puppy at the cost of $0.10 per hour, because the traffic will cost you… what… $3 bucks a month? Change. And the ability to still communicate with all your customers during a major outage? Priceless.
Monitoring and Alerting
For roughly the same reasons as noted above, using EC2 for monitoring your mission critical applications is the right thing to do. Not only do you have third party verification of availability, you have an off-network source for things like throughput and responsiveness. If your monitor sends SMS through your mail server… and your mail server just died… you won't hear a peep out of your phone until someone brings it back up. And then when it *is* back up, not only do you catch flack for not noticing… but all holy hell breaks loose on your phone as 400 SMS messages come in from last night's backlog. Do yourself (and your organization) a favor… EC2 + Nagios + Mon = peace of mind (throw in Cacti and you have answers to the dreaded question: "so how is our performance lately anyhow?"). Plus, if you use something like UltraDNS, which offers a programmatic API for their services, you can use Amazon as the trigger for moving from one set of servers to the other. Wonderful!
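To give a flavor of what the EC2 side might run, here's a minimal off-network availability check written in the Nagios plugin style (exit 0 for OK, 2 for CRITICAL). The URL it checks is a placeholder; this is a sketch, not a drop-in plugin.

```shell
#!/bin/bash
# Minimal off-network availability check, following the Nagios plugin
# exit-code convention: 0 = OK, 2 = CRITICAL. Run it from EC2 so a dead
# mail server at the office can't silence your alert path.
check_http_up() {
    local url="$1"
    if curl -sf --max-time 10 "$url" >/dev/null
    then
        echo "OK - $url responded"
        return 0
    else
        echo "CRITICAL - $url unreachable"
        return 2
    fi
}
```

Wire it into Nagios like any other plugin, and the SMS about your dead mail server no longer has to travel through your dead mail server.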
Good commentary
The Corporate Rat and The Elusive Cheese gets it exactly right in their Amazon EC2 post: Amazon’s S3 and EC2 – classic application long tail
Good read
Some comparisons
Let's make some sample comparisons based on one-year assumptions.
High bandwidth (1 year: 10 servers, 5TB transfer per month)
- Low end host – $12,000
- Mid end host – $21,600
- Amazon EC2 – $128,880
Low bandwidth (1 year: 10 servers, 200GB transfer per month)
- Low end host – $12,000
- Mid end host – $21,600
- Amazon EC2 – $8,920
Mixed solution (8 servers at Amazon for processing (100GB/mo), 2 servers at a hosting co for bandwidth (2TB/mo))
- EC2 + Low end: $9,524 (vs $12,000 hosted, and $9,280 at EC2)
- EC2 + Mid end: $11,684 (vs $22,800 hosted, and $9,280 at EC2)
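For what it's worth, here's the EC2 side of that arithmetic as a tiny script. The rates ($0.10 per instance-hour, $0.20 per GB of transfer) are the 2006 list prices, but since the exact assumptions behind the totals above (hours per month, per-server vs per-fleet transfer) aren't spelled out, this sketch won't reproduce them to the dollar.

```shell
#!/bin/bash
# Yearly EC2 cost sketch at assumed rates: $0.10 per instance-hour and
# $0.20 per GB of transfer. All arithmetic in integer cents.
yearly_cost_dollars() {
    local servers=$1 gb_per_server_month=$2
    local instance_cents=$(( servers * 10 * 24 * 365 ))   # 8760 hrs/yr each
    local transfer_cents=$(( servers * gb_per_server_month * 20 * 12 ))
    echo $(( (instance_cents + transfer_cents) / 100 ))
}

yearly_cost_dollars 10 5000   # high bandwidth: 5TB per server per month
yearly_cost_dollars 10 200    # low bandwidth: 200GB per server per month
```

The lopsided results (bandwidth dwarfs instance-hours in the first case) are the whole point of the comparison: EC2's per-GB pricing is what makes high-transfer workloads expensive.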
When you should (and should not) think about using Amazon EC2
The Amazon AWS team has done it again, and EC2 is generating quite the talk. Perhaps I've not been watching the blogosphere closely enough about anything in particular until now (very likely), but I've not really seen this much general excitement. The fervor I see going around is a lot like a kid at Christmas. You unwrap your present. IT'S A REMOTE CONTROLLED CAR. WOW! How cool! All of a sudden you have visions of chasing the neighborhood cats and drag racing your friends on the neighborhood sidewalks. After you open it (and the general euphoria of the ideas starts to fade) you realize: this is one of those cars that only turns one direction… And you just *know* that the next time you meet your best friend Bobby he will have a car that turns left *and* right.
I expect we will see some of this… A lot of the talk around the good old sphere is that AWS will be putting smaller hosting companies out of business. But that's not going to happen unless they change their pricing model. Which I doubt they will.
But before you all go getting your panties in a bunch when EC2 only turns left… Remember that EC2 is a tool. And just like you wouldn't use a hammer to cut cleanly through a board, EC2 is not meant for all purposes… The trick to making genuinely good use of EC2 will be in playing off of its strengths… And avoiding its weaknesses.
Let's face it… The Achilles' heel of all the rampant early-bird speculation is that the price of bandwidth for EC2 is rather high. Most hosting companies get you (with a low end plan) 1000GB of transfer per month. Amazon charges $200 per month for that much transfer, whereas you can find low end hosting for $60 and mid end hosting for $150. Clearly this is not where EC2 excels. And I don't think the AWS team intended for it to excel here. How badly would they want the headache of running the servers which host every web site on the planet? Not very.
What you *do* get at a *great* price is horsepower. For a mere $74.40/month (assuming 31 days at $0.10/hour) you get the equivalent of a 1.75GHz Xeon with 1.75GB of RAM. That's not bad!
But the real thrill comes with the understanding that additional servers can talk to each other over the network… for free. There is a private network (or equivalent) which you can make use of. This turns into a utility computing atom bomb. If you can minimize the amount of bandwidth used getting data to and from the machine, while maximizing its CPU and RAM utilization, then you have a winning combination which can take full advantage of the EC2 architecture. And if your setup is already using Amazon's S3 storage solution… Well… Gravy.
Imagine running a site like, say, YouTube on EC2. The bill would kill you. The simple fact of the matter is that YouTube uses too much bandwidth receiving and serving its users' files. I would have to imagine that the numbers for its bandwidth usage per month are staggering! But let's break out the things that YouTube has to manage, and where it might best utilize EC2 in its infrastructure.
YouTube gets files from its users, converts those files into FLVs, and then makes those FLVs available via the internet. You therefore have three main actions that are performed: A) HTTP PUT, B) video conversion, and C) HTTP GET. If I were there, and in a position to evaluate where EC2 might prove useful, I would probably recommend the following changes to how things work:
First, all incoming files would be uploaded directly to web servers running on EC2 AMIs. There's no reason a file should be uploaded to the datacenter, re-uploaded to EC2, and then sent back down to the datacenter; that makes no sense. So users upload to EC2 servers.
Second, the EC2 compute farm would be in charge of all video conversion. Video conversion is typically a high-memory, high-CPU process (as any video editor will tell you), and when they built their datacenter I can assure you this weighed heavily on their minds. You don't want to buy too many servers. You pay for them up front, and you pay for them in back as well: not only do you purchase X number of servers for your compute farm, but you have to be able to run them, and that means rack space and power. Let me tell you, those two commodities are not cheap in datacenters. You do not want servers sitting around doing nothing unless you have to! So how many servers they purchase and provision every quarter has a lot to do with their expected usage. If they don't purchase enough, the user has to wait a long time for his requests to complete. Too many, and you're throwing away your investors' money (which they don't particularly like). So the ability to turn servers in a compute farm on and off only when they're needed (and better yet: to only pay for them when they're on) is a godsend. This will save oodles of cash in the long run.
At this point, as a side note, I would also advise keeping long term backups of content in the S3 service, as well as removing rarely viewed content and storing it in S3 only. This would reduce the amount of space needed at any one time in the real physical datacenter. Disks take up lots of power and lots of space, and you don't want to pay for storage you don't actually need. The tradeoff is that transferring the content from S3 back to the DC will cost some money, so it comes down to the cost of that transfer versus the cost of running the storage hardware (or servers) yourselves. I would caution that you can move from S3 to a SAN, but moving from a SAN to S3 leaves you with a piece of junk which costs more than your house did ;D.
Third, the EC2 servers would upload the converted video files and thumbnails to the primary (and real) datacenter, and it's from there that YouTube viewers would download the actual content.
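That middle leg might look something like the sketch below. The hostname and paths are made up, and ffmpeg as the converter is my guess at a toolchain, not anything YouTube has confirmed; the point is just that the CPU-heavy step runs on EC2 and only the smaller results travel back.

```shell
#!/bin/bash
# Hypothetical sketch: an EC2 node converts one uploaded video to FLV,
# grabs a thumbnail, and ships only the converted artifacts back to the
# real datacenter. Host and path names are illustrative assumptions.
convert_and_ship() {
    local src="$1"                  # e.g. /uploads/clip123.avi
    local out="${src%.*}.flv"
    local thumb="${src%.*}.jpg"

    ffmpeg -i "$src" -f flv "$out"               # CPU-heavy work stays on EC2
    ffmpeg -i "$src" -ss 5 -vframes 1 "$thumb"   # one frame as a thumbnail

    # Only the (much smaller) results travel back down to the datacenter.
    rsync -az "$out" "$thumb" dc.example.com:/var/www/videos/
}
```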
That setup would be when you *DO* use Amazon's new EC2 service. You've used the strengths of EC2 (unlimited horsepower at a very acceptable price) while avoiding its weaknesses (expensive bandwidth, and paying for long term storage, unless S3 ends up being economical for what you do).
That said… There are plenty of places where you wouldn't want to use EC2 in a project. Any time you'll be generating excessive amounts of traffic, you're losing money compared to a physical hosting solution.
In the end there is a lot of hype, and there's a lot of room for FUD and uninformed opinions (this blog post, for example, is an uninformed opinion; I've never used the service personally), and what people need to keep in mind is that not every problem needs this solution. I would argue that it's very likely any organization could find one or (probably) more very good uses for EC2. But hosting your static content is not one of them. God help the first totally-hosted EC2 user who gets majorly slashdotted ;).
I hope you found my uninformed public service announcement very informative. Remember to vote for me in the next election 😉
cheers
Apok
I can't say that I've used AJS…
But according to what the author says about it, it sounds *very* interesting.
Download page (For Personal Reference)
Bash wizardry: Command Line Switches
If you’re like me (and God help you if you are) You write a lot of bash scripts… When something comes up bash is a VERY handy language to use because it’s a) portable (between almost all *nixes), b) lightweight, and c) flexible (thanks to the plethora of linux commands which can be piped together) One large reason people prefer perl (or some other language) is because they’re more flexible. And one of those cases is processing command line switches. Commonly bash scripts are coded in a way which makes it necessary to give certain switches as a certain argument to the script. This makes the script brittle, and you CANNOT leave out switch $2 if you plan to use switch $3. Allow me to help you get around this rather nasty little inconvenience! (note: this deals with SWITHCES ONLY! *NOT* switches with arguments!)
check_c_arg() {
    # Return 1 if the switch named in $1 appears among the remaining
    # arguments, 0 if it does not.
    local target="$1"
    shift
    for i in "$@"
    do
        if [ "$i" = "$target" ]
        then
            return 1
        fi
    done
    return 0
}
This beautiful little bit of code will allow you to take switches in ANY order. Simply set up a script like this:
#!/bin/bash
host="$1"
check_c_arg() {
    # Return 1 if the switch named in $1 appears among the remaining
    # arguments, 0 if it does not.
    local target="$1"
    shift
    for i in "$@"
    do
        if [ "$i" = "$target" ]
        then
            return 1
        fi
    done
    return 0
}
check_c_arg "-v" "$@"
cfg_verbose=$?
check_c_arg "-d" "$@"
cfg_dry_run=$?
check_c_arg "-h" "$@"
cfg_help=$?
if [ $cfg_help -eq 1 ]
then
    echo -e "Usage: $0 host [-v] [-d] [-h]"
    echo -e "\t-v\tVerbose Mode"
    echo -e "\t-d\tDry run (echo command, do not run it)"
    echo -e "\t-h\tPrint this help message"
    exit 1
fi
if [ $cfg_dry_run -eq 1 ]
then
    echo "ping -c 4 $host"
else
    if [ $cfg_verbose -eq 1 ]
    then
        ping -c 4 "$host"
    else
        ping -c 4 "$host" >/dev/null 2>&1
    fi
fi
In the above, all of the following invocations are valid:
- 127.0.0.1 -v -d
- 127.0.0.1 -d -v
- 127.0.0.1 -v
- 127.0.0.1 -d
- 127.0.0.1 -h
- 127.0.0.1 -h -v -d
- 127.0.0.1 -h -d -v
- 127.0.0.1 -h -v
- 127.0.0.1 -h -d
- 127.0.0.1 -v -h -d
- 127.0.0.1 -d -h -v
- 127.0.0.1 -v -h
- 127.0.0.1 -d -h
- 127.0.0.1 -v -d -h
- 127.0.0.1 -d -v -h
I hope this helps inspire people to take the easy (and oftentimes more correct) path when faced with a problem which requires a solution, but not necessarily a terribly complex one.
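For completeness: bash's builtin getopts solves the same ordering problem, and also handles switches *with* arguments. One caveat versus check_c_arg above: getopts stops at the first non-option word, so the switches have to come before the host rather than after it.

```shell
#!/bin/bash
# Same flag parsing via the getopts builtin. Unlike check_c_arg, the
# options must precede positional arguments (getopts stops scanning at
# the first non-option word).
parse_args() {
    local OPTIND opt
    cfg_verbose=0 cfg_dry_run=0 cfg_help=0
    while getopts "vdh" opt
    do
        case $opt in
            v) cfg_verbose=1 ;;
            d) cfg_dry_run=1 ;;
            h) cfg_help=1 ;;
        esac
    done
    shift $(( OPTIND - 1 ))
    host="$1"
}

parse_args -v -d 127.0.0.1
echo "verbose=$cfg_verbose dry_run=$cfg_dry_run host=$host"
# prints: verbose=1 dry_run=1 host=127.0.0.1
```

Whether the extra ceremony is worth it depends on taste; for pure switches in any order relative to each other, both approaches get the job done.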
Cheers!
DK
Bartender ANOTHER!
I second that! Why XHR should become opt-in cross-domain
.mobi … seriously?
If we’re talking about really bringing the web to mobile devices, which are by their nature smaller, why are we using a longer domain name?
Sure, it's just one character, but… on a keyboard that's 1/15th the size of a regular keyboard… or worse, with handwriting recognition, shouldn't we go with the motto "less is more"??!
apokalyptik.mobi? How about apokalyptik.m… I mean… come on… throw us a fricken bone here!
opera mobile
I have to say that, so far, Opera Mobile (demo/beta, mind you) is the best web browser available for my PPC-6700 phone. Minimo (the PPC Mozilla port) would win out except it's slow (read: SLOW) and crashes a lot. Opera Mobile is quick, responsive, has tabs, and works with WordPress (how I'm posting this now!). ThunderHawk was a promising contender but turned out to be a joke (to be fair, my phone isn't supported)… At this point I'm stuck in IE hell… and it sucks… Now… to find a *good* SSH client with DSA public key auth support.