
Digging Up Dirt in the DNS Hierarchy, Part II

Ron Aitchison

Issue #166, February 2008

The examples used here were not invented. This article is really, really scary.

In the first part of this article [LJ, January 2008], we started the apparently simple journey of navigating our way to the IP address of www.example.com and its secure server online.example.com by traveling down the DNS hierarchy. We finally received a referral from the gTLD .com servers pointing us to the name servers ns2.example.com, an in-zone name server, and ns1.example.net, an out-of-zone (or out-of-bailiwick) name server.

So, let's restart our quest for the IP address of www.example.com and assume we have verified and obtained the IP address of ns1.example.net, which, because it is out-of-zone, we have tracked to its authoritative source via the .net gTLD servers. Now, it's time to check all our authoritative servers for the example.com domain to see what else we can find. First we check the front door:

dig @ns1.example.net version.bind txt ch

This command uses a legacy DNS resource record class called CH(AOS)—Internet addresses use the IN class—to try to obtain information about the software being used. We get this response:


; <<>> DiG 9.4.1-P1 <<>> @ns1.example.net version.bind txt ch

...

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 8503

;; flags: qr aa rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 0

;; QUESTION SECTION:

;version.bind. CH TXT


;; ANSWER SECTION:

VERSION.BIND. 0 CH TXT "named 4.9.6-Rel-Tuesday-24-June-97..."


;; Query time: 25 msec

;; SERVER: 207.253.126.250#53(207.253.126.250)

...

And, we got lucky. This name server is telling us the supplier and version number of its software. If we were bad guys, we would go and look up the alerts for this version of software and see if there were any juicy vulnerabilities. In this case, the news is extremely good (for the bad guys), because the server is running BIND 4, last updated in 1997! Between 3% and 7% of all the estimated 9 million name servers in operation still use this redundant software, which is full of bugs and exploit possibilities. Let's assume we repeat the command, substituting ns2.example.com—our second authoritative server—and we get back “my name is Bind, James Bind”. The administrator of this DNS has a sense of humor and some knowledge of BIND configuration parameters. In the options clause of BIND's named.conf file (normally in /etc/named.conf), the following parameter will appear:

options {

...

version "my name is Bind, James Bind";

...

};

You can place any text here, and it will be supplied in place of the version number. If the statement is missing, BIND will return its version number, as shown in the previous example. As we shall see, this may not prevent us from discovering the information, but it does at least make it more than a trivial one-line command. Finally, although BIND servers respond to the above command, not all DNS software does, so we could have received a timeout.
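To see what the outside world now gets, repeat the one-line query with dig's +short option, which strips the response down to the TXT data alone. Assuming the version statement above is in place, the following:

dig @ns2.example.com version.bind txt ch +short

should return nothing but:

"my name is Bind, James Bind"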

Now, it's time to move on to the next check. We're going to use the second of our tools, fpdns, which is a DNS fingerprinting tool (see Part I of this article for more information on fpdns). fpdns uses a range of benign techniques to try to identify both the software vendor and, in many cases, the release version or version range. It is not infallible, but its accuracy is extremely good. Let's use it to check our reluctant Mr Bind:

fpdns ns2.example.com

And, we get the following:

fingerprint (ns2.example.com, 10.10.0.2): ISC BIND 9.2.0rc7 -- 9.2.2-P3
[recursion enabled] 

Now, this potentially is serious as well. First, the current version of BIND at the time of this writing is 9.4.1-P1, so we can simply check the security alerts for the quoted version range and see whether we have some handy poisoning possibilities. Second, this server is an open recursive server, which means it will accept and act on any request for name resolution, not just requests for names for which it is authoritative. We could test this using a dig command like the following:

dig @ns2.example.com some.obscure.domain

Why are open resolvers a serious problem? There are three reasons. First, we can load up the server for a simple Denial-of-Service (DoS) attack by sending it requests for external name resolution. It will be so busy following the referral chains that it will not have time to answer requests for the domain for which it is authoritative, effectively taking the domain off the air for at least a proportion of the traffic. Second, it can be used in Distributed Denial-of-Service attacks. In this type of attack, requests are sent for the same name to many open name servers (there are perhaps as many as one million open name servers on the Internet), each of which then sends a query to the DoS target. No single source trips any threshold monitoring, so it is difficult to identify all the sources. The net effect is that the target DNS is bombarded with traffic and cannot respond. Third, if I send a query to an open name server, I know what it is going to do; it's going to send a query to the target domain's name server. So, without even sniffing its traffic, I can start sending spoofed responses, and if I get lucky, I can poison the open server's cache (there are many documented weaknesses that I can exploit to increase my chances significantly).
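If you have several name servers to audit, this check is easy to script. The following is a minimal sketch in shell; the server names are simply this article's examples, and some.obscure.domain again stands in for an obscure but real name that is unlikely to be cached. A server that comes back with status NOERROR and the ra flag set for a name it is not authoritative for is open; a closed server typically answers REFUSED:

for ns in ns1.example.net ns2.example.com; do
    echo "== $ns =="
    # +noall +comments limits the output to the header summary,
    # which contains the status: and flags: lines we care about
    dig @$ns some.obscure.domain +noall +comments | grep -E 'status:|flags:'
done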

The function of a caching server is to save the response until the Resource Record's TTL (Time to Live) expires and then re-read the record. If the TTL for the requested RR is long (30 minutes or more), I have a poisoning opportunity only every 30 minutes or more, but if the TTL is short, say, five seconds or even zero seconds, my odds of getting poisoned responses into the cache shoot up dramatically. And, of course, my poisoned response will not have a TTL of five seconds; it will be more like five weeks, so once it's there, it stays there for a long time.

Now the real place to do this cache poisoning is not at the authoritative name server. Instead, I would go looking for an open recursive name server at an ISP and try to poison the cache using the same technique, so that all the ISP's clients for www.example.com will come to my pharming site.

All name servers should be closed to external traffic to stop this behavior. If you are using BIND, there are three options:

1) If the name server is authoritative only (best practice recommends that you never mix caching and authoritative functions in the same DNS), add the following line to the /etc/named.conf file in the options clause:

options {

...

// BIND's default is recursion yes;

recursion no;

...

};

2) If your server does provide both authoritative and recursive services, limit who can use them by using the allow-recursion statement in an options clause:

options {

...

allow-recursion {192.168.2/24};

...

};

This statement limits recursive requests to IP addresses in the 192.168.2.0/24 block. It is worth pointing out that even if this statement is present, a recursive request from outside the defined IP range will return the correct result if it already exists in the cache (it previously was requested by a valid internal user). BIND 9's view clause also can be used to provide further control and separation in a mixed authoritative and caching configuration; a minimal sketch appears after this list of options.

3) Finally, if the server only provides caching services, use the allow-query statement instead:

options {

...

allow-query {192.168.2/24};

};
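Here is the view-based sketch mentioned in option 2. It assumes the same internal block, 192.168.2/24: internal clients get recursion and the cache, while everyone else receives answers only for the zones the server is authoritative for. The zone file paths are illustrative, and note that once you define one view, every zone must be declared inside a view:

view "internal" {
    match-clients { 192.168.2/24; };
    recursion yes;   // internal clients may use the cache
    zone "example.com" {
        type master;
        file "master/example.com";
    };
};

view "external" {
    match-clients { any; };
    recursion no;    // the rest of the world gets authoritative data only
    zone "example.com" {
        type master;
        file "master/example.com";
    };
};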

Now, let's continue with our checks by requesting the IP address of our target from one of its authoritative servers:

dig @ns1.example.net www.example.com

And, we get this in response:


; <<>> DiG 9.4.1-P1 <<>> @ns1.example.net www.example.com

...

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49319

;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 0


;; QUESTION SECTION:

;www.example.com. IN A


;; ANSWER SECTION:

www.example.com. 5 IN A 10.10.0.5

www.example.com. 5 IN A 10.10.0.6

;; Query time: 61 msec

;; SERVER: 207.253.126.250#53(207.253.126.250)

...

There are a couple of things to note in this response. First, the aa flag is set, which is what we would expect. If this flag were not set, we would be looking at what is called a lame server (a server defined in the parent as authoritative for a domain but that does not return the aa flag for queries about that domain). The master (primary) and slave (secondary) name servers for a domain must return the aa flag, and there is no externally visible difference between master and slave responses. This means you can use two or more slave servers to provide authoritative service and keep your master completely hidden (a configuration sketch appears a little later). The second point to note is that the ra flag is set, indicating that this server also offers a recursion service. Let's test it:

dig @ns1.example.net some.obscure.domain

And bingo, we get a response; this server is also open. The reason for using some.obscure.domain is to make sure the data is not already cached, in which case, depending on its configuration, the name server could return the desired result and still be closed, as noted previously. Using an obscure name minimizes the possibility of a false positive. The corollary is also true: if we fire a request for a popular domain name, such as google.com, at an apparently closed DNS and get a valid result, we know this server is providing recursive services for some set of clients (unless, of course, it is the authoritative server for google.com!). This knowledge gives us some, admittedly very modest, poisoning possibilities by looking at the TTL of the response.
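One gentle way to make that TTL check explicit is dig's +norecurse option, which asks the server to answer only from data it already holds (the ISP server name here is hypothetical):

dig @ns.some-isp.example google.com +norecurse

If an answer comes back and its TTL (the second field of the answer line) counts down between repeated queries, you are looking at another client's cached data, exactly the kind of cache the poisoning techniques described earlier would target.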

In passing, we also should note that the site sensibly has provided two IP addresses, albeit both on the same IP address block, which means browsers automatically will fail over (in 2–3 minutes) if the first server fails. The site uses a five-second TTL, which, apart from being of great assistance to potential cache poisoners, is entirely useless, as Microsoft's browser will attempt to refresh its browser-cached IP addresses only every 30 minutes (one minute for Firefox).
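Before moving on, here is the hidden-master arrangement promised earlier, as a minimal named.conf sketch. It assumes two public slaves at 10.10.0.3 and 10.10.0.4 and a hidden master at 10.10.0.2; all three addresses are illustrative, and only the slaves appear in NS RRs and in the parent delegation:

// on the hidden master, which is listed in no NS RR anywhere
zone "example.com" {
    type master;
    file "master/example.com";
    allow-transfer { 10.10.0.3; 10.10.0.4; }; // the public slaves only
};

// on each public slave; only these servers are visible to the world
zone "example.com" {
    type slave;
    masters { 10.10.0.2; }; // the hidden master
    file "slave/example.com";
};

Because the slaves appear in the zone's NS RRset, the master's NOTIFY messages reach them automatically whenever the zone changes.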

So, ns1.example.net is using old, buggy software and is open. Can it get worse? Well, yes it can, and indeed, in this case, it does get worse.

So far, we have been emulating what a browser does and simply looking for A RRs; dig can be used to find any type of RR. In this case, the absence of an AUTHORITY SECTION is a tad suspicious, so let's experiment and try this command:

dig @ns1.example.net www.example.com ns

And, we get this response:


; <<>> DiG 9.4.1-P1 <<>> @ns1.example.net www.example.com ns

...

;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 49319

;; flags: qr aa rd ra; QUERY: 1, ANSWER: 2, AUTHORITY: 0, ADDITIONAL: 2

;; QUESTION SECTION:

;www.example.com. IN NS



;; ANSWER SECTION:

www.example.com. 3000 IN NS ns3.example.com.

www.example.com. 3000 IN NS ns4.example.com.


;; ADDITIONAL SECTION:

ns3.example.com. 3000 IN A 10.10.0.8

ns4.example.com. 3000 IN A 10.10.0.9


;; Query time: 61 msec

;; SERVER: 207.253.126.250#53(207.253.126.250)

...

This result means that the user is trying to delegate www.example.com to an alternate set of DNS servers, ns3 and ns4.example.com, but the delegation is invalid, so the defined DNS servers are not visible. The zone file probably has this construct:

$ORIGIN example.com.

...

; these A RRs should not be present in the example.com

; zone file but should be present in a www.example.com zone file

www 5 IN A 10.10.0.5

www 5 IN A 10.10.0.6

; valid delegation for www.example.com

www 3000 IN NS ns3.example.com.

www 3000 IN NS ns4.example.com.

...

; required glue RRs for the delegation

ns3.example.com. 3000 IN A 10.10.0.8

ns4.example.com. 3000 IN A 10.10.0.9

BIND 9 (used by ns2.example.com) correctly will interpret this construct as a delegation and generate a referral to ns3 and ns4.example.com. BIND 4 (ns1.example.net) will not, and thus, approximately 50% of the traffic will never even see the delegated servers, which, if we check them using the techniques above, turn out to be solidly configured and running the latest versions of BIND (as are the name servers for online.example.com).
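For reference, a corrected layout keeps only the delegation and glue in the parent zone and moves the A RRs into a child zone loaded by ns3 and ns4. The following is a sketch; the $TTL and SOA values are illustrative:

; example.com zone file: delegation and glue only, no A RRs for www
$ORIGIN example.com.
...
www 3000 IN NS ns3.example.com.
www 3000 IN NS ns4.example.com.
ns3.example.com. 3000 IN A 10.10.0.8
ns4.example.com. 3000 IN A 10.10.0.9

; www.example.com zone file, loaded by ns3 and ns4
$TTL 3000
$ORIGIN www.example.com.
@ IN SOA ns3.example.com. hostmaster.example.com. (
        2008020401 ; serial
        3600       ; refresh
        900        ; retry
        604800     ; expire
        3000 )     ; negative-caching TTL
@ IN NS ns3.example.com.
@ IN NS ns4.example.com.
@ 5 IN A 10.10.0.5
@ 5 IN A 10.10.0.6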

In summary, this user configured and maintained his or her internal name servers with great care and in a thoroughly professional manner but had entirely overlooked the route by which users arrived at the site. To put it another way, the castle was impregnable, the moat wide and deep, the walls thick, the defenses manned, but the front door wide open.

This problem may look pretty far-fetched, but it was real, took less than ten minutes to find and—in case you were wondering—is now fixed!

When performing this kind of analysis, you will develop your own methods and variations, but here are some things to look for that seem to make organizations especially vulnerable:

  • Multiple domain names, for instance, example.com, secure-example.com and online-example.com, tend to make managing and monitoring more complex for the operator and, hence, are more likely to have DNS configuration errors.

  • Backroom domains—many organizations elect to use unique domain names, for instance, support-example.com, to perform infrastructure functions, such as running their internal DNS systems, that are not visible to end users. For some strange reason, many of these organizations think end-user invisibility also applies to the DNS system.

  • Many DNS servers—the more DNS servers, the more likely it is that at least one of them is running either badly configured or unpatched software.

  • BIND 8 and open is a very common ISP configuration. BIND 8 is pretty buggy, represents approximately 20% of all DNS servers and is now officially deprecated. Whoopee for the bad guys.

  • Always follow the transitive trust routes. The more there are, the more likely you are to find a problem.

  • Outsourced DNS—there are highly professional DNS organizations to which many large users subcontract provision of DNS service, and their DNS configurations are invariably in very good shape. Many organizations, however, use the outsourced DNS to delegate to internal DNS systems, and these users can exhibit the exact opposite characteristics of the example case: the internal name servers are a disaster. Further, in a surprising number of cases, even when the service is outsourced, one internal name server (or that of a local service provider) remains on the primary authoritative list, and almost invariably this additional name server has a problem.

The techniques used here are not aggressive; for example, they do not test for AXFR (zone transfer) vulnerability, because this is not a friendly action and is likely to generate nasty responses, quite rightly, from DNS administrators. Treading lightly is the best technique.

We used a very small subset of dig's capability here. Read the man pages for more information. If you do find something suspicious or wrong, double-check, then either fix it immediately or, if it affects a third party, act responsibly and inform the relevant organization (though it is sometimes extremely difficult to get through to the right person). Theoretically, the SOA RR of the domain in question should contain the valid e-mail address of the right person in the organization.
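That contact is held in the second field (the RNAME) of the SOA RR, with the first dot standing in for the @ sign. A quick way to read it:

dig example.com soa +short

The reply might look like this (the values are illustrative):

ns1.example.net. hostmaster.example.com. 2008020401 3600 900 604800 3000

which would make the contact address hostmaster@example.com.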

I encourage you to experiment and modify the techniques for diagnosing and auditing your DNS systems—it will pay dividends time and time again—it's best that you get there before the bad guys. And, it can provide endless hours of fun as you sleuth around.

Ron Aitchison is the author of Pro DNS and BIND and loves nothing better than using dig to uncover bizarre DNS configurations. One day, real soon now, he is going to get a real life.
