Kyle covers some of the headaches and gotchas when deploying servers to cloud environments that hand out completely random IPs.
Typically when a network is under my control, I like my servers to have static IPs. Whether the IPs are truly static (hard-coded into network configuration files on the host) or whether I configure a DHCP server to make static assignments, it's far more convenient when you know a server always will have the same IP. Unfortunately, in the default Amazon EC2 environment, you don't have any say over your IP address. When you spawn a server in EC2, Amazon's DHCP server hands out a somewhat random IP address. The server will maintain that IP address even through reboots as long as the host isn't shut down. If you halt the machine, the next time it comes up, it likely will be on a different piece of hardware and will have a new IP.
To deal with this unpredictable IP address situation, through the years, I've leaned heavily on dynamic DNS updates within my EC2 environments. When a host starts for the first time and gets configured, or any time the IP changes, the host will update internal DNS servers with the new IP. Generally this approach has worked well for me, but it has one complication. If I controlled the DHCP server, I would configure it with the IP addresses of my DNS servers. Since Amazon controls DHCP, I have to configure my hosts to override the DNS servers they get from DHCP with mine. I use the ISC DHCP client, so that means adding three lines to the /etc/dhcp/dhclient.conf file on a Debian-based system:
supersede domain-name "example.com";
supersede domain-search "dev.example.com", "example.com";
supersede domain-name-servers 10.34.56.78, 10.34.56.79;
With those options, once the network has been restarted (or the machine reboots), these settings will end up in my /etc/resolv.conf:
domain example.com
search dev.example.com example.com
nameserver 10.34.56.78
nameserver 10.34.56.79
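To pick up the new supersede settings without a full reboot, bouncing the interface is enough; on a Debian-based system, that looks something like the following (eth0 here is just a placeholder for whatever interface the instance actually uses):

# restart the interface so dhclient re-reads dhclient.conf and rewrites resolv.conf
sudo ifdown eth0 && sudo ifup eth0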
I've even gone so far as to add a bash script under /etc/dhcp/dhclient-exit-hooks.d/ that fires off after I get a new lease. For fault tolerance, I have multiple puppetmasters, and if you were to perform a DNS query for the puppet hostname, you would get back multiple IPs. These exit hook scripts perform a DNS query to try to identify the puppetmaster that is closest to the host and add a little-known setting to resolv.conf called sortlist. The sortlist setting tells your resolver that, when a query returns multiple IPs, it should favor the specific IPs or subnets listed on that line. So for instance, if the puppetmaster I want to use has an IP of 10.72.52.100, I would add the following line to my resolv.conf:
sortlist 10.72.52.100/255.255.255.255
The next time I query the hostname that returns multiple A records, the resolver always will favor this IP first, even though the query still returns multiple IPs. If you use ping, you can test this and see that it always pings the host you specify in sortlist, even if a dig or nslookup returns multiple IPs in random order. In the event that the first host goes down, if your client has proper support for multiple A records, it will fail over to the next host in the list.
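To give a rough idea of what such an exit hook looks like, here is a sketch; the file name, the puppet hostname, and the way it picks the "closest" puppetmaster (simply taking the first A record dig returns) are placeholders rather than my exact script:

# /etc/dhcp/dhclient-exit-hooks.d/puppet-sortlist (sketch)
# dhclient-script sources this file after each lease event and sets $reason.
case "$reason" in
    BOUND|RENEW|REBIND|REBOOT)
        # give the network a moment to settle before querying DNS
        sleep 5
        # placeholder "closest puppetmaster" logic: take the first A record returned
        puppet_ip=$(dig +short puppet.dev.example.com A | head -n 1)
        if [ -n "$puppet_ip" ]; then
            # favor this IP whenever a query returns multiple A records
            echo "sortlist ${puppet_ip}/255.255.255.255" >> /etc/resolv.conf
        fi
        ;;
esac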
This method of wrangling a bit of order into such a dynamic environment as EC2 has worked well for me, overall. That said, it isn't without a few complications. The main challenge with a system like this is that the IPs of my DNS servers themselves might change. No problem, you might say. Since I control my dhclient.conf with a configuration management system, I can just push out the new dhclient.conf. The only problem with this approach is that dhclient does not offer any way that I have been able to find to reload the dhclient.conf configuration file without restarting dhclient itself (which means bouncing the network). See, if you controlled the DHCP server, you could update the DHCP server's DNS settings, and the change would push out to clients when they asked for their next lease. In my case, a DNS server IP change meant generating a network blip throughout the entire environment.
I discovered this requirement the hard way. I had respawned a DNS server and pushed out the new IP to the dhclient.conf on all of my servers. As a belt-and-suspenders approach, I also made sure that the /etc/resolv.conf file was updated by my configuration management system to show the new IP. The change pushed out and everything looked great, so I shut down the DNS server. Shortly after that, disaster struck.
I started noticing that a host would have internal health checks time out; the host became sluggish and unresponsive, and long after my resolv.conf change should have made it to the host, something seemed to be updating the file again. When I examined the resolv.conf on faulty systems, I noticed it had the old IP scheme configured even though the DNS servers with those IPs were long gone. What I eventually realized was that even though I had updated dhclient.conf, the running dhclient process never picked up those changes, so a few hours later when it renewed its lease, it overwrote resolv.conf with the old DNS IPs it still had configured!
I realized that basically every host in this environment was going to renew its lease within the next few hours, so the network needed to be bounced on every single host to accept the new dhclient.conf. My team scrambled to stage the change on groups of servers at a time. The real problem was not so much that dhclient changes required a network blip, but that the blip was more like an outage lasting a few seconds. We have database clusters that don't take kindly to the network being pulled out from under them; they view it (rightly so) as a failure on the host and immediately trigger failover and recovery of the database cluster. The hosts were taking far longer than they should to bounce their networks, were triggering cluster failovers, and in some cases required manual intervention to fix things.
Fortunately for us, this issue affected only the development environment, but we needed to respawn DNS servers in production as well and definitely couldn't handle that kind of disruption there. I started researching the problem, and after confirming that there was no way to update dhclient.conf without bouncing the network, I turned to why it took so long to restart the network. My dhclient-exit-hook script was the smoking gun. Inside the script, I slept for five seconds to make sure the network was up and then performed a dig request. This meant that when the network was restarted, DNS queries against the old, now-unreachable IPs would time out and stall the hook, delaying the point at which the network was back up. The fix was to replace the sleep and dig query with a template containing a simple echo that appends my sortlist entry to resolv.conf. My configuration management system would do the DNS query itself and update the template. With the new, faster script in place, I saw that my network restarts barely caused a blip at all. Once I deployed that new exit hook, I was able to bounce the network on any host without any ill effects. The final proof was when I pushed changes to DNS server IPs in production with no issues.
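In case it helps, the replacement hook boils down to little more than this sketch; the IP shown is just whatever value configuration management renders into the template:

# replacement exit hook (sketch): the sortlist IP is filled in by
# configuration management when the template is rendered, so no sleep
# or dig is needed at lease time
echo "sortlist 10.72.52.100/255.255.255.255" >> /etc/resolv.conf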