Inside the Department of Defense bug bounty program
Katie Moussouris is the founder of Luta Security and a prominent expert on bug bounty programs. In a bounty, organizations pay independent researchers -- hackers -- to find the vulnerabilities in their systems. In exchange the researchers earn a prize: kudos, a T-shirt, or cash, on average a few hundred dollars per bug. Outside the walled garden of a bounty, identifying a software flaw can earn researchers a civil suit or felony charge; in May, the FBI raided the home of a dental technician who had tried to alert Patterson Dental that their software left patient records exposed. Still, bounty prices are often criticized for valuing bugs -- and the labor of the researchers who spot and disclose them -- too cheaply.
As Chief Policy Officer for HackerOne, Moussouris helped organize Hack the Pentagon, the first federal bug bounty. More than 1,400 hackers took part, some as young as 14; the effort cost the Defense Department roughly $150,000, half of which went to paying bounties. “If we had gone through the normal process of hiring an outside firm,” Secretary Ashton Carter said this June, “it would have cost us more than $1 million.”
The interview has been edited for length and clarity.
I was hoping to get a sense of the origin story behind Hack the Pentagon. Did the Defense Department come to HackerOne with this idea, or did HackerOne reach out to them?
None of the above, actually. I was still employed at Microsoft when I was first invited to brief the Pentagon about bug bounty programs.
And when you went over there and described what exactly a program like this is, were they enthusiastic about having one or were there hurdles to jump, red tape to get through?
Considering that I left Microsoft about two years ago and the program was only launched this year… I think it was interesting to them that a complex organization with a lot of moving parts like Microsoft could model a way to step into bounties gradually. It took a while, though, for all the pieces to fall into place.
I saw a DoD press release about the competition that kicked off by describing HackerOne as a “reputable” bug-bounty-as-a-service firm, which sounds like the way you phrase it if you anticipate getting flak. Was trust a significant problem?
Well, it’s more about convincing the folks in any conservative organization that the hackers are going to participate in good faith. If you’ve got public-facing web properties, people are probing your sites. To get reports from hackers who want to turn these things over, that’s where you need vulnerability coordination programs and then bug bounty programs.
The trust element was about talking people through that logic. They’d say, “If we invite all these hackers to hack us, what’s to say they’re not going to find a bunch of bugs and not tell us?” And we’d explain, “Criminals are finding a bunch of bugs and not telling you. You’re providing a legal mechanism for the first time for hackers to come forward and tell you.”
What kind of person does come forward for a competition like this? Is it somebody who’s been waiting for that lawful opportunity to screw around in Pentagon systems, or is it someone with a public service kind of mindset?
Most hackers, just like most humans, want to do the right thing. It’s just that the anti-computer-hacking laws in the United States that have been in place for over 30 years make it quite clear that if you have not been given authorization to even look for a vulnerability on a site, you’re committing a felony. The people who come forward are people who would have loved to test those systems and then provide that information, but it was a felony for them to do so before getting permission from the United States government. And that’s what makes this a big, big historical shift.
Obviously it’s an improvement over the status quo if there are more venues where people can do this lawfully, but do you worry that -- for a corporation that doesn’t want people poking around -- it strengthens their hand if they can say, “If we wanted you looking, we would have set up a bug bounty, and therefore we’re going to come down on you with the full force of the Computer Fraud and Abuse Act?”
Some organizations might want to continue to live in that world, where they can threaten away their security problems, but organizations that are sincere about taking security seriously would probably look at a hacker trying to report the issue as a sign of good faith. This person didn’t need to come forward and risk prosecution, they could have just sat on the vulnerability, or they could have given or sold it to somebody else who wants to use it criminally.
It is certainly possible that organizations that haven’t evolved in their thinking about security might use the lack of a bug bounty program as a way to try to intimidate researchers. But more and more customers are beginning to expect a certain level of security -- they’re expecting that their data will be protected if the company says it will protect it -- and I don’t think consumers would look too kindly on an organization that said, “A hacker tried to tell us about this thing that caused the loss of all your data, but instead of accepting the report, we threatened legal action.”
You were dealing with some conflicting political cultures here. Did you experience any backlash from folks asking, “Why exactly are you starting with the defense sector? Why can’t the first federal bug bounty be Hack the Department of Health and Human Services?” Especially in light of some of the big-ticket national security conflicts between the government and the tech community, like Apple/FBI?
I think it actually makes it much more powerful for this to start with the Department of Defense. They control one of the most powerful military organizations in the world. For them to say, “You know what, we’ve done our best trying to defend our websites, but we’d like to see what happens if we invite people who are not on our payroll to take a look” -- I think that admission, that even the Department of Defense needs help in cybersecurity, was a powerful statement to make. It opens the door for other agencies in the U.S. government to try this, and also for other governments to try this.
But I imagine that idea -- “If even the Pentagon is willing to do this, it definitely checks out” -- doesn’t sit well with some parts of the hacker community. The guy behind the Hacking Team breach recently gave an interview in which he said, “Nowadays, I prefer the cybercriminals to the white hats. The white hats write as if the fact that the State is wasting more money on cybersecurity is a good thing.”
You’re not necessarily going to agree with 100 percent of what any company or government does, but it’s about being willing to lend your expertise to solve the so far unsolvable problem of keeping systems safe.
As for wasting money, certainly the products and services that have grown up around cybersecurity over the last 20 years have been impressive; it’s a multi-billion-dollar industry. But the rise of bug bounties is an area in which you can show where the emperor’s clothes are and where they’re not, in terms of all these very pricey security services and products. It’s also about spreading the wealth in security, paying people around the world for the real bugs they find. It’s a broadening and a globalization of the source of technical security talent. In a lot of ways, it’s anti-snake-oil.
I’m sure you saw that Bugcrowd recently released a report on the state of the bug bounty in 2016. One of the things they found was that the overwhelming bulk of their payments were going to a very small number of very talented bug hunters. And the flipside of that was that 43 percent of their submissions came from India, more than the next nine nations combined, and those bugs tended to be fairly low priority. So what do you make of the critique that the actual wealth is being captured by a relatively small number of people, and that the rest of the labor being put into this system has more to do with outsourcing than crowdsourcing?
I think we’re seeing a couple of things going on. Some folks are using a bug bounty program as a way to outsource some security testing, and they might have been able to find some of that low-hanging fruit on their own if they were willing to pay somebody to run those scans for them as part of a traditional penetration test, or if they hired a couple of competent security engineers. But a lot of small organizations -- or even medium-sized organizations -- don’t have the budget to have somebody in-house, full-time because the competition for competent security folks is so stiff. It makes sense for them to do a bit of outsourcing in that case.
And then there are other organizations, big ones, that have very well established security programs and they’re looking for anything that those programs missed. It’s a very important sort of last check: you’ve done all of your due diligence, you’ve invested in security, and you’ve tried to eliminate as many issues as you can, but you’re using a bug bounty program to catch the inevitable mistake.
You’re also using it as a talent sourcing and recruiting mechanism. For the people who come forward with a run-of-the-mill bug that you just happened to miss, that’s great, you pay them the bug bounty. But for the people who come forward and end up uncovering serious, maybe even design-level flaws, you might want to get those folks under a longer contract or even hire them full-time.
That seems to resonate with the controversies that plenty of industries are facing over contract work, unpaid work, internships, where folks are promised, “We’re not necessarily paying fair market value for your labor, but in our defense, we couldn’t afford to pay you for it, and it could lead to a job.” When it comes to bug bounties, how much credit do you extend that kind of criticism?
There are always folks who are going to make a very, very erroneous comparison between bug bounty or “white market” prices for bugs and the offense market. People think that those prices are based on buying the same thing, and they’re not. What’s included in the higher price that’s paid by the offense market is exclusivity and secrecy. Whether it’s a nation-state buying a zero-day or the FBI buying something, what they’re buying is the extended use of that vulnerability or technique, for as long as possible.
You brought up Hacking Team earlier, and that pricing model played out in some of their e-mails. There was a zero-day seller saying, “For one of them it’s X, if you buy a few of them I’ll give you a bulk discount, if you want an exclusive sale, it’s triple the price.” So when I hear people say “Your bug bounty’s not a fair price because on the black market I could get X,” that’s making an assumption that 1) someone on the offense market wants what you have, 2) you know how to get in touch with somebody to make this deal, 3) you get paid for it, and 4) whoever’s paying you for it wants that extra premium, wants you to sell it to them exclusively rather than resell it or use it for your own purposes.
During Apple vs. FBI, people were saying, “Why doesn’t Apple just offer a million dollars for the thing? Apple can outbid the FBI.” But the defense market can’t bid the same amount as the offense market, because they will lose. The offense market can always go higher. And imagine how many Apple engineers and quality assurance testers would still be sitting in their seats the next day after they announced that Apple would pay $1 million for a bug or a jailbreak. The defense market can’t go past a certain point or you’ll lose the talent that’s creating and maintaining the technology itself.
The other point of comparison for pricing, though, wouldn’t necessarily be white market vs. gray or black market, it would be relative to other sectors of the white market -- the price for that talent paid as a salary, as an hour of work, etc. Because under the bug bounty model, you’re only getting paid for the result; you don’t get paid anything for the hours you spent not finding the bug, even though that time is just as much a part of the process of identifying whether there’s a vulnerability. Do you worry about uncompensated labor alongside each bug that ends up selling?
In the three years that it took me to convince Microsoft to do their very first bug bounties, of course. We had lots of lawyers, so we looked into labor laws, we looked into this potential risk area. What we found is that while there may be a legal risk, technically, the fact of the matter is that if you’re putting enough disclaimers on the front end of this -- saying, “Here are the rules, they’re subject to final decision by the party who’s offering the bug bounty, and if you don’t agree with these rules then please don’t participate in our program” -- then that’s enough. Certainly when we looked into it, that seemed to be enough for the companies offering bug bounties.
And when you think about it, the bounty hunters are free agents. They can choose -- especially as there are more and more bug bounty programs -- not just based on the advertised pay rate for a bug, but also on their personal experience in dealing with the particular security response team.
One counterpoint would be that -- certainly according to Bugcrowd’s data -- the vast majority of the people submitting to these programs are quite young; 75 percent are between the ages of 18 and 29, and 45 percent of them lack a college degree. Are they in a good position to judge whether they’re being compensated fairly by these companies, or if they’re happy with the deal that’s being offered in the terms of service?
I think that -- since a lot of them use it as a way to learn on the job, to learn how to run these tools, especially the ones who only come up with low-hanging fruit -- it is like a paid internship for people that young who are trying to skill up. They might not be in the best position to judge the ROI of any of the ways they spend their time at that age.
But on the point about a lot of the labor being fairly young folks -- the Microsoft bounties will pay someone as young as 14.
We were told we could go younger than that but that it would get complicated. It’s about child labor laws -- things like keeping their social security number on file and everything else you would need in order to pay them out; it’s complicated before age 14. But the rules were basically, if you’re younger than 14 -- or considered a minor in your place of residence -- then you need to get a parent or legal guardian to accept the bug bounty payment for you.
The youngest bug hunter I’d ever heard of was five years old; he found a bypass in the password field for signing onto your Xbox. His dad happened to be in computer security so he knew how to report the issue. That particular thing was out of scope for any of the paid bounties, but that kid definitely got a pile of Xbox games.
On the whole, how much of your work do you think is about making the rest of the world more comfortable with hackers, and how much is about helping hackers get along with the rest of the world?
Hackers are gonna hack, no matter what. The norms that need to shift around vulnerability disclosure aren’t necessarily with what hackers have been doing; the problem is the preparedness of organizations and governments to deal with vulnerability reports. When you get a report -- whether it comes from a hacker, a customer, a partner, or anyone else outside your organization -- how do you deal with it? A majority of organizations still don’t know how to deal with that question.