Use Massdns

The only tool for mass enumeration across multiple domains.


MassDNS can reliably resolve 100K subdomains in seconds, can leverage AltDNS's capabilities, and gives the user full power over the results. Use it if you are continuously bruteforcing a ton of domains.


The first step in offensive security is recon. Getting as close as possible to the full scope of your target is the goal of the recon phase. This post focuses on how to discover subdomains effectively across multiple targets using MassDNS. Additionally, I have researched this space for some time and did not find a good comparison of tools with regard to their ability to be run over many targets.

The tools

There are a lot of scripts and programs that tackle subdomain enumeration. I will talk primarily about the following tools (chosen based on how I perceive their popularity):

  1. Passive sources
  2. Subbrute
  3. Sublist3r
  4. Enumall
  5. Brutesubs
  6. DNS-Parallel-Prober

Passive sources

I wanted to tackle passive sources manually, as I already had a framework for automation and it was difficult to integrate pre-built tools into it. Passive sources are okay, but they are never as good as bruteforcing. The reason is simple: if it came from a passive source, it was already found elsewhere and indexed. If you are bruteforcing, however, there is a chance some passive channels haven't picked up that subdomain yet.

In my opinion, passive sources are always beneficial to pull from, but should not be the main source where you get your subdomains. This leads us to other tools.


Subbrute

A time-tested tool that many know how to utilize. In my opinion, its best feature is the built-in recursion, which checks subdomains of subdomains. When I started enumerating subdomains for every open-scope bug bounty target, this was the tool I chose first.

When you have a lot of domains to scan, time and reliability are the most important qualities a tool can have. When I attempted to integrate Subbrute into my process, I found a few things:

  1. Long run times
  2. Script wouldn’t stop
  3. Recursion prolonged run times

With a list of around 100K subdomain names, it took my $20 Digital Ocean instance 15+ minutes to finish a scan for a single domain. Because a scan takes so long, your automation neglects other domains where new subdomains might have just come online. I somewhat mitigated this by running my passive modules while subbrute ran: while one domain was being bruteforced, I would pull passive DNS information for another.

Finding the vulnerable server before anyone else is very important in bug bounty hunting, and the time it took subbrute to finish was too high for me (and my machine). Additionally, when running across all my targets, subbrute would hang occasionally. This made detecting the end of a run a nightmare, and it eventually became too much work to keep up with. I started looking for other options.


Sublist3r

Sublist3r focuses more on passive sources to gather information. These passive sources often offer an API to make searching easier. However, rate limiting is applied to them, making automation across a lot of domains troublesome. Many times a source would block my instance's IP address due to the volume of requests (understandably).

Note that Sublist3r can run subbrute for you, but I wouldn't advise it for the reasons above. Additionally, Sublist3r runs its passive checks first and then runs subbrute sequentially, increasing your run time per domain.

Therefore, I created a script to run Sublist3r for a domain, which then ran subbrute separately. This way, once one of the processes finished, it could start on another domain, which improved the effectiveness of the automation. This approach was on the right track and was very similar to how I handled subbrute and passive sources manually. The major downside was the time to complete scans.
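The decoupling described above can be sketched roughly like this. This is a hypothetical wrapper, not my actual script; the tool paths and flags are assumptions, and the point is only that each scan runs as its own background process so a slow domain never blocks the queue:

```shell
# Hypothetical scheduler: launch passive (Sublist3r) and brute-force
# (subbrute) scans as independent background processes, so a slow
# brute-force on one domain never blocks passive collection on another.
scan_domain() {
  domain="$1"
  python sublist3r.py -d "$domain" -o "passive_${domain}.txt" &
  python subbrute.py "$domain" > "brute_${domain}.txt" &
}

# Driver (one target per line in domains.txt):
#   while read -r d; do scan_domain "$d"; done < domains.txt; wait
```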


Enumall

Enumall relies on Recon-NG for passive information collection and bruteforcing. It is a handy little script that utilizes multiple other tools to accomplish the task in a smart manner. By using Recon-NG to discover hosts, it automatically stores your enumerated subdomains in Recon-NG's built-in tables.

However, running it efficiently on multiple domains wasn't possible for me. Recon-NG runs each test in sequence, which severely impacts performance.

Additionally, I ran into memory problems (on my $20 DO box) due to the tables it creates in the workspace. I had to delete each domain's workspace after it finished running and then create a new workspace for the next domain. A large domain would cause my instance to run out of memory.

For these reasons, I couldn't use Enumall.


Brutesubs

Another tool that wraps some of the other tools mentioned. Personally, I haven't played with it very much, but it is gaining traction as an easy way to enumerate subdomains.


DNS-Parallel-Prober

In its infancy at this point, but it might become a competitor to MassDNS. I haven't used it, but it might be worth looking into if MassDNS causes you too much trouble.


MassDNS

At this point I was hopeless. In terms of effectiveness, I believed passive sources plus subbrute would be the best approach. However, I did not want to deal with the upkeep of creating a fault-tolerant program. It was at this point that I came across MassDNS, the savior.

Pros (You might as well call me a shill)

Seriously, run MassDNS. If I had come across this tool sooner, I would have saved a ton of time monkeypatching other subdomain bruteforcing applications together.

First, the reliability and speed are unmatched: 100K subdomains bruteforced in under 10 seconds. Previously, it would have taken 5-10 minutes if I was lucky and there were only a few subdomains. I have been running it continuously for 2-3 months and have not encountered any reliability problems with the application itself.
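For reference, a basic run looks something like the commented command below; the flags come from the massdns README, so verify them against your installed version. The simulated results file is only there to illustrate the simple output format, with made-up domains and addresses:

```shell
# A basic run (flags per the massdns README; verify against your version):
#   ./massdns -r resolvers.txt -t A -o S -w results.txt candidates.txt
#
# With -o S (simple output), each answer is one line: "name. TYPE value".
# Simulated output for illustration:
cat > results.txt <<'EOF'
www.example.com. A 93.184.216.34
mail.example.com. CNAME ghs.example.net.
EOF

# Count the unique names that resolved:
cut -d' ' -f1 results.txt | sed 's/\.$//' | sort -u | wc -l
```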

Previously, I had gathered subdomains by running subbrute and using my scripts to parse passive sources. I didn't expect to find many more subdomains after that, but running MassDNS with a large wordlist gave me more subdomains than I could investigate individually. (hint: some people are using EyeWitness, I wonder why?)

Additionally, it seems to me that AltDNS was made to be used with this tool (even though AltDNS contains a way to resolve domains itself). AltDNS creates a wordlist, which you feed into MassDNS to resolve for you. This is great because when you have a domain with a ton of subdomains and a large prefix list, the permutation list is gigantic. So far I haven't found a faster DNS resolver than MassDNS.
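The pipeline is roughly: feed known subdomains plus a word list to AltDNS for permutations, then let MassDNS resolve the result. The commented invocations below are assumptions based on the altdns and massdns READMEs; the loop underneath only illustrates why the candidate list explodes:

```shell
# The pipeline itself (commands assumed from the altdns/massdns READMEs):
#   python altdns.py -i known.txt -o permutations.txt -w words.txt
#   ./massdns -r resolvers.txt -t A -o S -w results.txt permutations.txt
#
# Why the list gets gigantic: every known subdomain is combined with
# every word. A tiny illustration:
printf 'dev\nstaging\n'                      > words.txt
printf 'api.example.com\nmail.example.com\n' > known.txt
while read -r sub; do
  while read -r w; do
    echo "${w}.${sub}"        # e.g. dev.api.example.com
  done < words.txt
done < known.txt > permutations.txt
wc -l < permutations.txt      # 2 subdomains x 2 words = 4 candidates
```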

Finally, there is the power of parsing the output. MassDNS is definitely verbose if you allow it to be; you will not miss any crucial output for most records, no matter what the response is. This point is expanded on in the drawbacks below.

As an aside, I think most big (data) bounty hunters are using MassDNS but obviously this is not something I can say for certain.


Cons

There is a major downside to MassDNS that I have not discussed: it is a very simple tool with complex output.

A lot of the other tools discussed provide an easier-to-use interface and easier-to-understand output. However, just watch Frans Rosen's talk at AppSec EU, where he addresses problems that other tools have and MassDNS does not. MassDNS does not withhold information the way other tools do. For instance, if a subdomain is not found, many tools will not show it (since it is NXDOMAIN). However, Frans shows cases where a subdomain has a CNAME but no A record. Running the host command returns NXDOMAIN because it can't find an address for the CNAME target. Yet if someone registered that CNAME target, there would be an A record. Some tools hide this, so takeovers were missed. MassDNS, however, does not hide information (unless you pass flags telling it to).
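As a sketch of how you could act on this, the snippet below flags CNAME answers whose target never shows up with an A record in the same simple-format output. The records are fabricated for illustration:

```shell
# Fabricated massdns -o S style output:
cat > answers.txt <<'EOF'
shop.example.com. CNAME abandoned.example-cdn.net.
www.example.com. CNAME edge.example-cdn.net.
edge.example-cdn.net. A 203.0.113.10
EOF

# Flag CNAMEs whose target has no A record: takeover candidates.
awk '$2 == "A"     { resolved[$1] = 1 }
     $2 == "CNAME" { target[$1] = $3 }
     END { for (s in target) if (!(target[s] in resolved)) print s, "->", target[s] }' answers.txt
```

Here only shop.example.com is flagged, because its CNAME target never resolves, while www.example.com's target does.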

The next downside is the resolvers.

In order to speed up enumeration, MassDNS spreads its queries across many resolvers. This way no single DNS server slows down the process, and the enumeration is distributed efficiently (subbrute does the same). However, there are sometimes BAD RESOLVERS.

Bad resolvers return records that are old and expired (or just wrong). This severely hampers your enumeration if you are getting berated by false positives for subdomains that do not exist.

A way that I resolved this was to parse out the 'found' subdomains and then use Google's DNS (8.8.8.8) to resolve each one. If one did not resolve through Google, I could remove the original DNS server that returned that record from the resolver list. This way, I removed the majority of bad resolvers, leaving me with only good results.
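A sketch of that clean-up pass is below. The file formats and helper names are my own invention, not part of MassDNS; the real work is mapping each found record back to the resolver that returned it:

```shell
# Re-check every "found" subdomain against a trusted resolver (8.8.8.8);
# if Google can't confirm it, distrust the resolver that reported it.
trusted_lookup() {
  dig +short @8.8.8.8 "$1" A
}

# findings.txt: "<subdomain> <resolver-that-answered>" per line (hypothetical format)
prune_bad_resolvers() {
  while read -r sub resolver; do
    if [ -z "$(trusted_lookup "$sub")" ]; then
      echo "$resolver"      # returned a record Google can't confirm
    fi
  done < findings.txt | sort -u > bad_resolvers.txt
  # Drop the offenders from the resolver list for future runs:
  grep -vxFf bad_resolvers.txt resolvers.txt > resolvers.clean.txt
}
```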

And then there is CPU strain. My $20 DO box definitely hurts every time I run this. However, the results are so good that I am okay with it (and I am considering a beefier box just to support it).

Finally, MassDNS requires the user to parse its output. For automated systems, this means a script must be written to pull meaningful information from the output. Basic scripting/programming knowledge is almost required to get a good level of automation and coverage with it.
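As a trivial example of such a script, this pulls unique host/IP pairs out of the simple output format for feeding into other tooling (an EyeWitness target list, for instance). The records below are made up:

```shell
# Fabricated massdns -o S output:
cat > results.txt <<'EOF'
www.example.com. A 93.184.216.34
www.example.com. A 93.184.216.35
mail.example.com. CNAME ghs.example.net.
EOF

# Keep only A records, strip the trailing dot, emit "host ip" pairs:
awk '$2 == "A" { sub(/\.$/, "", $1); print $1, $3 }' results.txt | sort -u
```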

Final thoughts

Overall, to get use out of the information MassDNS provides, you have to write a script to parse and interact with its output. In my opinion, this is the biggest barrier to everyone using MassDNS. That said, if you know enough programming to parse the output and relay it into your automation, you will have a great subdomain enumeration process. Try it out and compare it to your previous enumeration methods in terms of results, reliability, and speed.

Written on June 2, 2017