How to determine out-of-scope bug bounty assets

Written by ga_dahmani

Applications and networks are rarely hardened as well as they should be, so why not incentivize bug discovery by third parties? Organizations that start their own bug bounty program can pay security researchers to report the bugs they discover, rather than having those bugs posted online or found by attackers first.

Those interested in getting started can turn to Corporate Cybersecurity: Identifying Risks and the Bug Bounty Program by author and security researcher John Jackson. One thing to consider when creating a program is its scope. Not every bug researchers find will be covered by the bug bounty policy, but paying for these out-of-scope findings, Jackson said, can still be important.

“If you have remote code execution on a server that contains 100,000 PII [personally identifiable information] records, then ‘out of scope’ doesn’t really mean anything anymore,” Jackson said. “Attackers don’t care about out of scope. If a hacker touches that data and leaks it, the company is paying a lot of money per record.”

In the following excerpt from Chapter 4, Jackson explains how to determine what out of scope really means, and when it should not be treated as absolute.

Check out a Q&A with Jackson to learn more about bug bounty programs and how they differ from vulnerability disclosure programs.

4.9 When is an asset really out of scope?

In general, the goal of declaring assets out of scope is never to dissuade a security researcher from reporting, but rather to put baseline restrictions in place that avoid legal or financial impact. With that in mind, program managers should ideally be fair when assessing in-scope and out-of-scope vulnerabilities.

Imagine a security researcher mines a list of subdomains for the company and finds many assets, some of which are explicitly declared out of scope. Researchers are curious, whether by nature or by habit; they want to know how the components work. In this specific scenario, assume the asset is clearly out of scope (defined in the out-of-scope section of the enterprise bug bounty program): should it be tested?
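
To make the scope question concrete, here is a minimal sketch, not taken from the book, of how this kind of scope check might be automated; the hostnames and wildcard patterns are illustrative assumptions drawn from the example.

    # scope_check.py: minimal sketch of matching discovered subdomains
    # against a program's out-of-scope wildcard patterns (illustrative only).
    from fnmatch import fnmatch

    # Out-of-scope patterns as they might appear in a bug bounty policy (assumed).
    OUT_OF_SCOPE = ["*.prod.example.com", "legacy.example.com"]

    def is_out_of_scope(hostname: str) -> bool:
        """Return True if the hostname matches any out-of-scope pattern."""
        return any(fnmatch(hostname, pattern) for pattern in OUT_OF_SCOPE)

    if __name__ == "__main__":
        for host in ["admin.prod.example.com", "sv1.prod.example.com", "app.example.com"]:
            print(host, "-> out of scope" if is_out_of_scope(host) else "-> in scope")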

If the answer that came to mind was no, that can be simultaneously the correct and the incorrect answer. To demonstrate the scenarios bug bounty administrators may encounter, consider an example of a P1 (critical) vulnerability and a P4 (low) vulnerability.

(P1) — Critical: Admin account takeover

In this scenario, while testing various subdomains, the researcher notices something strange: one of the subdomains found is “admin.prod.example.com”. The researcher reviews the scope and sees that *.prod.example.com is listed as out of scope. Curiosity takes over, however, and to the researcher’s surprise, they find an exposed client-side Laravel Debugbar. At this point, they have to make a decision: go out of scope or not?
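
As a rough illustration of how such a finding might be spotted passively, the following sketch checks a page for common client-side Debugbar markers; the target URL and marker strings are assumptions about how an exposed Laravel Debugbar often appears, not a definitive fingerprint.

    # debugbar_check.py: hedged sketch of a passive check for an exposed
    # Laravel Debugbar; the markers below are common indicators, not guarantees.
    import requests

    MARKERS = ("phpdebugbar", "/_debugbar/")  # strings Debugbar-enabled pages often include

    def debugbar_exposed(url: str) -> bool:
        """Fetch the page and look for Debugbar markers in the HTML."""
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException:
            return False
        body = resp.text.lower()
        return any(marker in body for marker in MARKERS)

    if __name__ == "__main__":
        target = "https://admin.prod.example.com/"  # hypothetical asset from the example
        print("Debugbar markers found" if debugbar_exposed(target) else "No Debugbar markers found")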

Now, in the case of the Laravel Debugbar, imagine that the researcher intercepts the admin credentials and can log in to the web application that resides on the admin endpoint. The web application portal has internal employee email addresses, sensitive workflow information, customer addresses and other contact information, etc. A moral dilemma ensues, and the researcher now has to decide whether it is even worth reporting a vulnerability that is identified as “out of scope.”

Under most circumstances, the researcher will report the vulnerability, especially if it involves account takeover or PII (personally identifiable information).

Now review the following scenario and weigh it against example P1.

(P4) — Low: Apache server-status page leaking server information

Similarly, imagine the same security researcher stumbles upon a subdomain called “sv1.prod.example.com”. As the previous example indicated, *.prod.example.com is out of scope. Upon loading the web page, the researcher finds the web server’s server-status page and can now view specific information such as the server version, the URL paths being requested, CPU usage, and some internal IP addresses.

While this may seem like a serious vulnerability to an amateur researcher, the severity depends entirely on the level of information that can be accessed. For example, if this server-status page logs all endpoints for the root domain, the researcher might find an endpoint that reveals sensitive information after navigating to it. However, if no sensitive information is disclosed, either directly or through the logged endpoints, the vulnerability will be treated as low.
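
For context, assessing a finding like this often amounts to requesting the mod_status endpoint and seeing what it reveals. The sketch below, with an assumed host and simple private-IP matching, shows one hedged way to gauge what the page discloses.

    # server_status_check.py: sketch of checking whether an Apache mod_status
    # page is publicly reachable and whether it shows private-range addresses.
    import re
    import requests

    # RFC 1918 private address ranges (simplified pattern).
    PRIVATE_IP = re.compile(
        r"\b(?:10\.\d{1,3}\.\d{1,3}\.\d{1,3}"
        r"|192\.168\.\d{1,3}\.\d{1,3}"
        r"|172\.(?:1[6-9]|2\d|3[01])\.\d{1,3}\.\d{1,3})\b"
    )

    def check_server_status(base_url: str) -> None:
        url = base_url.rstrip("/") + "/server-status"
        try:
            resp = requests.get(url, timeout=10)
        except requests.RequestException as exc:
            print(f"Request failed: {exc}")
            return
        if resp.status_code != 200 or "Apache Server Status" not in resp.text:
            print("server-status does not appear to be exposed")
            return
        internal = sorted(set(PRIVATE_IP.findall(resp.text)))
        print(f"Exposed server-status page at {url}")
        print("Private-range IPs on the page:", ", ".join(internal) or "none")

    if __name__ == "__main__":
        check_server_status("https://sv1.prod.example.com")  # hypothetical host from the example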

The researcher will once again have to make a choice: report or not. In many cases, the finding may be reported simply out of a desire to do the right thing. The researcher knows that whether they get paid is in the hands of the program manager, and the bug bounty administrator will have to make the right decision about paying the hacker. Alternatively, on bug bounty platforms, program administrators can resolve the report without a monetary award, which can still benefit the researcher’s credibility.

4.10 The house wins, or not?

For many bug bounty programs operating today, the default decision is not to pay the security researcher if the finding is out of scope. Common justifications fall within the school of thought of “We don’t want to encourage our researchers to hack out-of-scope assets just to get a bounty” or “If we pay a researcher for one out-of-scope finding, we’ll have to pay for every out-of-scope finding.”

A major problem with an adverse attitude toward out-of-scope research is that it can lead to key findings being overlooked. When determining whether paying for a reported vulnerability is the right move, ask the following questions:

  1. What is the impact level of this vulnerability? Could it lead to a leak of extremely sensitive information?
  2. If sensitive information is not leaked, could the vulnerability seriously damage the company’s reputation?
  3. Do we value the researcher and want to reward them?

One size does not fit all in terms of security researcher pay. Remember, when managing a program, the rules are set by the program administrator. While transparency should always be exercised with leadership, care is the key to developing a strong and loyal researcher base and building a brand reputation in the cybersecurity community. Let’s take the earlier examples of critical and low vulnerabilities and run an exercise to answer the questions above.

P1 — Admin account takeover

  1. What is the impact level of this vulnerability? Could it lead to a leak of extremely sensitive information? Yes, the researcher has found multiple pieces of PII and information that would be valuable if sold or obtained by a threat actor.
  2. If sensitive information is not leaked, could the vulnerability seriously damage the company’s reputation? Even if the researcher were unable to leak sensitive information, this admin panel contains functionality that could take down key parts of the business, resulting in monetary impact and, in turn, reputational damage.
  3. Do we value the researcher and want to reward them? This researcher has never reported to us before, but it would be good to reward them for their efforts.

In this administrator takeover example, it’s easy to see that a lot of damage could be done. Company information could be sold, or there could be a direct impact on business continuity, resulting in loss of revenue. That said, the right thing to do would be to pay the security researcher. In a circumstance like the one described, the company stands to lose far more from a breach than it would by paying the researcher adequately. Programs should consider setting a moral example and paying for the finding.

P4 — Apache server-status page leaking server information

  1. What is the impact level of this vulnerability? Could it lead to a leak of extremely sensitive information? No, there is no leak of sensitive information. Multiple internal IP addresses are displayed, but there is no meaningful impact.
  2. If sensitive information is not leaked, could the vulnerability seriously damage the company’s reputation? No, there is no damage to reputation.
  3. Do we value the researcher and want to reward them? This researcher has reported to us many times before. However, we do not see any severe impact here.

When a lower-severity vulnerability is reported on an out-of-scope asset, it is not technically immoral to refuse to reward the hacker for the finding. The expectations set early in the process are fair game. However, consider paying researchers a small bonus or sending them a T-shirt. A little gratitude, especially for someone who went out of their way to help, goes a long way.

4.11 Fair judgment on rewards

It’s easy to stray from the moral path as a program manager. Sometimes it can seem tempting to claim plausible deniability about a problem. With a managed program, triagers know the environment only as well as an outside perspective allows. Bug bounty programs are only as effective as the engineers and managers who run them, and they can only be as honest as the parties responsible for validating vulnerabilities.

There are many blind exploitation paths that are easy to dismiss or question from an attacker’s perspective, and proper communication between the researcher and the engineer is needed to confirm that some vulnerabilities actually exist.

If a researcher stumbled across a GitHub repository for an organization and found SQL credentials, that would be an extremely valuable find, especially if they could log in with them. If the login is restricted to the internal network, there may be no way to demonstrate control of the server from the outside; sometimes internal verification is required, and that verification may happen out of sight of both the triage team and the researcher.
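
One way this limitation shows up in practice: before claiming impact, a researcher might simply test whether the database host in the leaked credentials is reachable from the outside at all. The sketch below uses a hypothetical host and port, and any actual login attempt should only happen with explicit authorization from the program.

    # reachability_check.py: sketch of testing whether a database host found in
    # leaked credentials is reachable from outside the internal network.
    # The host and port are hypothetical stand-ins for this example.
    import socket

    def port_reachable(host: str, port: int, timeout: float = 5.0) -> bool:
        """Return True if a TCP connection to host:port succeeds from this vantage point."""
        try:
            with socket.create_connection((host, port), timeout=timeout):
                return True
        except OSError:
            return False

    if __name__ == "__main__":
        db_host, db_port = "db.internal.example.com", 3306  # hypothetical MySQL endpoint
        if port_reachable(db_host, db_port):
            print("Database port is reachable externally; impact can be demonstrated directly")
        else:
            print("Not reachable from outside; internal verification may be needed")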

About the Author
John Jackson is a senior offensive security consultant and the founder of Sakura Samurai (桜の侍), a hacking group dedicated to legal hacking. He is best known for his many contributions to enterprise and government security and to CVE research. Jackson has contributed to the threat and vulnerability space, publishing various pieces of vulnerability research and aiding in remediation for the greater good. He continues to work on various projects and collaborates with other researchers to identify major vulnerabilities.
