HA EC2 Part #3: What Happens Once You’re Inside the Cloud

On to what happens inside the cloud!

Since we’re looking to load balance what happens inside the cloud, you might be tempted to ask: why not use the same sort of method we used for load balancing (well, at least fail-over) outside the cloud? And the answer is a resounding YOU CAN! But rather like a cooking show where you _could_ use water to hydrate something, but could also “bring a little flavor to the party” with chicken broth or wine, we find ourselves with the option of a mighty fine set of bonus features, if we’re willing to look past vanilla DNS round-robin load balancing.

OK, I suppose before I go on I should address those of you still scratching your heads because I said you could use DNS for load balancing. I know you’re thinking that you’ll never achieve truly balanced load with this method, and you’re right! Like I said, more features await! And for everyone wondering why they should use a real load balancer if DNS is going to be “good enough” anyhow: remember that slight problem we mentioned about caching DNS servers? That problem applies here as well, and since previously we couldn’t avoid it but now we can, I see no reason not to. Plus it would be more of a headache here than there, because a load balancer *LEFT ALONE* is a lot less likely to fail than a web or database server, which is constantly doing a great many things (and is subject to the whims of people in the middle of development). Finally, you get a single point at which to apply firewall configuration, instead of X points, where X is the number of back-end web servers.

Now, as far as load balancers go, there seem to be two prevalent kinds: first, the TCP load balancer, and second, the proxy.

A TCP load balancer operates on IP address, protocol, and port. For example, you may specify a group of servers as the back-end cluster for the IP address a.b.c.d port 80. And that’s all you can do with that IP address and that port. It avoids a lot of complication by not caring, in the slightest, why or how you got there, or whether that particular set of web servers is really what you want. For this reason it’s not possible to specify fancy things like: all port 80 requests for example1.com go to cluster A, and example2.com goes to cluster B. To do that you need another IP address, or you need to access the cluster on a different port (say a.b.c.d port 81). The former is OK when you have multiple IPs to work with and you can share IPs between fail-over load balancers, but neither of those luxuries holds true in the EC2 environment. And the latter is fine if you are planning to do this for your development team, but if you’re trying to drive normal web traffic to port 81 it might end up being a little less than convenient for your users.
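If it helps to picture it, here’s a toy sketch (in Python, with made-up back-end addresses; not something you would actually deploy) of what a TCP balancer boils down to: accept a connection, pick a back-end round-robin style, and shovel bytes in both directions. Note that it never looks inside the stream, which is exactly why it can’t tell example1.com traffic from example2.com traffic.

# Toy TCP load balancer sketch (illustrative only, back-end addresses are
# made up): accept a connection, pick a back-end by round robin, and copy
# bytes in both directions. It never looks inside the stream, so it has no
# idea which Host: header (or even which protocol) is passing through it.
import itertools
import socket
import threading

BACKENDS = [("10.0.0.1", 80), ("10.0.0.2", 80)]  # hypothetical web servers
LISTEN = ("0.0.0.0", 8080)                       # a.b.c.d port 80 in real life
next_backend = itertools.cycle(BACKENDS)

def pipe(src, dst):
    # Copy bytes from src to dst until one side goes away.
    try:
        while True:
            data = src.recv(4096)
            if not data:
                break
            dst.sendall(data)
    except OSError:
        pass
    finally:
        dst.close()

def handle(client):
    backend = socket.create_connection(next(next_backend))
    threading.Thread(target=pipe, args=(client, backend)).start()
    threading.Thread(target=pipe, args=(backend, client)).start()

server = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
server.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
server.bind(LISTEN)
server.listen(64)
while True:
    conn, _addr = server.accept()
    handle(conn)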

Which is why I’m going to be focusing on the proxy.

Improperly configured, the reverse proxy is a sure ticket to trouble on an internal LAN… Fortunately we aren’t dealing with an internal LAN; we’re dealing with servers which are publicly available anyway. What you can get from a reverse proxy, though, are extra features. You get load balancing, you get fail-over, you get URL and host rule-based redirection, and you get Apache log file aggregation. You get it all… and, I might mention, at the very convenient low, low price of FREE! WOO!

Um, er… *clears throat* …yeah, never mind that!
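Anyway, to show why those host and URL based rules are even possible, here’s another toy sketch, this time of a reverse proxy: because it actually parses the HTTP request, it can look at the Host header before deciding which cluster gets the request. The cluster addresses are made up, only GET is handled, and the real daemons do all of this far better; it’s purely for illustration.

# Toy HTTP reverse proxy sketch (illustrative only): because it parses the
# request, it can route on the Host header, which a TCP balancer cannot do.
# Cluster addresses are made up, and only GET is handled here.
import http.client
from http.server import BaseHTTPRequestHandler, ThreadingHTTPServer

CLUSTERS = {
    "example1.com": ("10.0.0.1", 80),  # cluster A (hypothetical)
    "example2.com": ("10.0.0.2", 80),  # cluster B (hypothetical)
}
DEFAULT = ("10.0.0.1", 80)

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        host = (self.headers.get("Host") or "").split(":")[0]
        backend = CLUSTERS.get(host, DEFAULT)

        # Forward the request to the chosen back-end and relay the answer.
        conn = http.client.HTTPConnection(*backend)
        conn.request("GET", self.path, headers={"Host": host or backend[0]})
        resp = conn.getresponse()
        body = resp.read()

        self.send_response(resp.status)
        for name, value in resp.getheaders():
            if name.lower() not in ("transfer-encoding", "connection", "content-length"):
                self.send_header(name, value)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)
        conn.close()

ThreadingHTTPServer(("0.0.0.0", 8080), ProxyHandler).serve_forever()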

I would say that you have a couple of commonly known programs which can handle reverse proxying, in Apache and Squid. But… let’s face it… both of those carry a more-complicated-than-necessary setup process and are like swatting a fly with a cannonball. Sure, they’ll work, but there are better alternatives for this. Two seemingly popular alternatives are pound and, the newcomer, perlbal. Both of these daemons offer better functionality than LVS (in the back-end server fail-over department), but which do we want to use?

The choice between the two is tough, and I can’t say that I’ve extensively used either, but pound does shine in three areas. First, pound has a notion of a SESSION and can even manage persistence/affinity (seemingly a LOT better than LVS manages it, I might add; LVS’s persistence tables *ARE* wiped for a server that goes down!). However, if your web application was developed with a “shared nothing” approach (which is a GOOD thing) this benefit doesn’t really apply, so it’ll be up to the next two to knock your socks off. Second, pound does SSL wrapping, taking that load off of your web servers (which is a good thing for responsiveness, isn’t it?). And finally, pound offers a logging mode which emulates the Apache combined log file format (both with and without virtual hosts), which puts pound in a class all by itself (and right up there with the hardware balancers, I think). If none of those features matter to you, or if (as is very possible) I’m wrong about perlbal’s feature set, then just pick one already (flip a coin, choose the one written in the language you prefer… or… hey… go read their docs and see which one you like better!)

The only drawback that comes to mind right now about this approach is that you’ll be working with text configuration files, so some parsing and rebuilding will be necessary to register and de-register web servers. I’ll add more in the comments if and when I think of more…
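To give a flavor of what that rebuilding might look like, here’s a rough sketch that regenerates a pound-style config from the current list of instances and bounces the proxy. The directive names reflect my reading of the pound 2.x docs (so double-check them against your version), and the instance addresses are made up.

# Rough sketch of the parse-and-rebuild chore: regenerate a pound-style
# config from the current list of back-end instances and bounce the proxy.
# Directive names reflect my reading of the pound 2.x docs; double-check
# them against your version. The instance addresses are made up.
import subprocess

BACKEND = """\
        BackEnd
            Address {address}
            Port    80
        End
"""

CONFIG = """\
ListenHTTP
    Address 0.0.0.0
    Port    80

    Service
{backends}    End
End
"""

def write_pound_config(instances, path="/etc/pound/pound.cfg"):
    backends = "".join(BACKEND.format(address=ip) for ip in instances)
    with open(path, "w") as cfg:
        cfg.write(CONFIG.format(backends=backends))

def register(instances):
    write_pound_config(instances)
    # pound won't re-read its config on the fly, so restart it (or look at
    # poundctl for enabling/disabling individual back-ends at runtime).
    subprocess.call(["/etc/init.d/pound", "restart"])

# e.g. after launching or terminating EC2 instances:
register(["10.251.42.1", "10.251.42.2"])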

So there you have it.

Using a good DNS service (with a low TTL and a decent API) mixed with a decent reverse proxy, you have all the benefits of:

  • a load balancer
  • load balancer fail-over
  • a rules-based request redirector
  • log consolidator
  • back-end server fail-over
  • and a single point for firewalling

While this hasn’t exactly been a HOWTO, or a TOASTER, I hope it’s a solid pointer in the right direction for people looking to scale applications built on top of the Amazon EC2 (and SQS, and S3) services.
