I was recently asked to look into load balancing web servers on the Amazon Elastic Compute Cloud (EC2) service. Managing this presents some very interesting problems that need to be worked around. To look at the subject I'll break it into 3 distinct pieces: #1: Identifying the Challenges (which you're currently reading), #2: Load Balancing the Load Balancer, and finally #3: What Happens Once You're Inside the Cloud. No promises as to how quickly I get these out 🙂
First, let's look at what this would normally entail:
You would have a data center and a router which feeds into a DMZ. On the DMZ you would have a set of load balancers (either hardware or software). A set, so that if one failed the other could take over its job. These load balancers have static IP addresses on the DMZ as well as on the LAN. They also have a shared IP address which they are the balancers for. When one goes down the other takes over that IP address. In a hardware solution this might be accomplished in a fairly elegant, network-invisible way. In a software solution it normally entails using IP aliases and forcibly updating the ARP cache on the router (typically by broadcasting gratuitous ARP replies).
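To make that concrete, here's a minimal sketch of the software-side takeover, assuming a Linux host with the iproute2 `ip` and `arping` utilities available; the interface name, addresses, and health port are all placeholders:

```python
# Failover sketch: the standby balancer polls the primary, and if the
# primary stops answering, aliases the shared IP onto itself and sends
# gratuitous ARP so the router points the IP at our MAC address.
import socket
import subprocess
import time

VIRTUAL_IP = "192.0.2.10"     # the shared address clients connect to (placeholder)
INTERFACE = "eth0"            # placeholder interface name
PEER = ("192.0.2.11", 8080)   # primary balancer's health-check port (placeholder)

def peer_alive(timeout=2.0):
    """Return True if the primary balancer still accepts a TCP connection."""
    try:
        with socket.create_connection(PEER, timeout=timeout):
            return True
    except OSError:
        return False

def take_over_ip():
    """Alias the shared IP onto this host, then broadcast unsolicited
    (gratuitous) ARP so the router's ARP cache updates to our MAC."""
    subprocess.run(["ip", "addr", "add", f"{VIRTUAL_IP}/24", "dev", INTERFACE], check=False)
    subprocess.run(["arping", "-U", "-c", "3", "-I", INTERFACE, VIRTUAL_IP], check=False)

while True:
    if not peer_alive():
        take_over_ip()
        break
    time.sleep(5)
```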
So the load balancers are the bridge between the DMZ and the LAN. On the LAN, alongside the load balancers, is a group of web servers, also with static IP addresses. The load balancer has a monitoring function which detects when a web server is no longer available. When that happens the load balancer updates an internal table and stops sending requests to that particular web server. When the web server becomes available again the load balancer detects this, updates its internal tables, and begins sending requests to the server once more. All of this happens with varying levels of complexity.
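As a rough illustration of that monitoring function (not any particular balancer's actual implementation), here's a sketch that treats a successful TCP connection on port 80 as "available"; the backend addresses are made up:

```python
# Health-check loop: keep an internal table of which backends are up,
# flipping entries as servers go down or come back.
import socket
import time

backends = {"10.0.0.11": True, "10.0.0.12": True, "10.0.0.13": True}

def is_up(host, port=80, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

while True:
    for host in backends:
        alive = is_up(host)
        if alive != backends[host]:
            # update the internal table; requests stop (or resume)
            # flowing to this backend on the next scheduling pass
            backends[host] = alive
            print(f"{host} marked {'up' if alive else 'down'}")
    time.sleep(10)
```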
As for the web server's reply, there are multiple possible configurations. The web server may reply to the load balancer, which then handles getting the proper response from your data center back to the client (a full reverse proxy). Or the web server might reply directly to the client through a network route (in Linux Virtual Server (LVS) terms this is called "Direct Routing" (LVS-DR)).
```
[ WAN ]                                      -> [ Server ]
[ ROUTER ]                                  |-> [ Server ]
[ DMZ ] <-> [ Load Balancer ] <-> [ LAN ] <-+-> [ Server ]
                                            |-> [ Server ]
                                             -> [ Server ]
```
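For the full-reverse-proxy mode described above, a toy sketch might look like the following; it round-robins GET requests across two hypothetical backends and relays each reply back to the client:

```python
# Toy full reverse proxy (stdlib only): accept a request, forward it to
# a backend chosen round-robin, and relay the backend's reply.
import itertools
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

BACKENDS = itertools.cycle(["10.0.0.11", "10.0.0.12"])  # placeholder addresses

class ProxyHandler(BaseHTTPRequestHandler):
    def do_GET(self):
        backend = next(BACKENDS)
        # fetch the same path from the chosen backend
        with urllib.request.urlopen(f"http://{backend}{self.path}") as resp:
            body = resp.read()
            status = resp.status
            ctype = resp.getheader("Content-Type", "text/html")
        # relay status, content type, and body back to the client
        self.send_response(status)
        self.send_header("Content-Type", ctype)
        self.send_header("Content-Length", str(len(body)))
        self.end_headers()
        self.wfile.write(body)

HTTPServer(("", 8080), ProxyHandler).serve_forever()
```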
The first thing that jumps out at me is that there is one key assumption in the above setup possibilities: everything is able to obtain a static IP address. That is, every time a given machine goes down, it comes back up at the same IP address. This is not true of the EC2 service. Your EC2 instances are dynamically allocated new IP addresses (and hostnames) each time they are started (and, consequently, restarted). So…
- No static IP for the load balancer
- No static IP for the web servers
Which means that on top of the challenges of installing and configuring a normal software load balancing solution, there are severalfold more challenges to overcome to be "successful" in your endeavor:
- You need to notify your clients if the load balancer address has changed
- You need to notify your web servers if the load balancer address has changed
- You need to notify your load balancer if the address of a web server has changed (a sketch of the rediscovery plumbing follows this list)
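As a sketch of what that plumbing might look like for the last two items: at boot, an instance can discover its own current public hostname from the standard EC2 metadata endpoint and push it to whoever needs to know. The notify() hook here is hypothetical; in practice it might update dynamic DNS or re-register with the balancer:

```python
# Address rediscovery sketch, run at instance boot. 169.254.169.254 is
# the standard EC2 instance metadata endpoint.
import urllib.request

METADATA_URL = "http://169.254.169.254/latest/meta-data/public-hostname"

def current_hostname():
    with urllib.request.urlopen(METADATA_URL, timeout=2) as resp:
        return resp.read().decode().strip()

def notify(hostname):
    # hypothetical hook: push the new name wherever it needs to go
    print(f"now reachable at {hostname}")

notify(current_hostname())
```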
Now you could, technically, circumvent the first of these challenges by housing the load balancer outside of the EC2 cloud. However, this doesn't make a whole lot of sense, seeing as you would end up paying twice for all the bandwidth consumed: you would pay for the incoming request at the load balancer, then for making the same request to a web server, then for the reply from the web server back to the load balancer, and finally for the reply from the load balancer to the client. So for the sake of this little mental pushup we'll not consider that a viable option; it was only worth mentioning (and we have, so now that that's over…)
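To see why the bandwidth bill doubles, here's a back-of-the-envelope calculation; the per-GB rate and traffic volume are made up, and only the 2x relationship matters:

```python
# External balancer means every byte crosses the paid boundary twice:
# client <-> balancer, and balancer <-> web server.
RATE_PER_GB = 0.10          # hypothetical transfer price, $/GB
monthly_traffic_gb = 500    # hypothetical request + response volume

inside_cloud = monthly_traffic_gb * RATE_PER_GB
outside_cloud = 2 * monthly_traffic_gb * RATE_PER_GB

print(f"balancer inside EC2:  ${inside_cloud:.2f}")   # $50.00
print(f"balancer outside EC2: ${outside_cloud:.2f}")  # $100.00
```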