Bug bounty programs have become an increasingly common tool in cybersecurity, with startups and established companies alike inviting hackers from around the world to attempt to find security holes in their networks in exchange for rewards. Even Apple, which long dismissed the idea, began offering bug bounties this year, with rewards of as much as $200,000, amid widespread questions about iOS vulnerabilities.
Organizations offering bug bounties often say the programs allow them to test their digital safeguards against a wider range of attackers than they could possibly have on staff, often at a fraction of the cost. One bounty management provider, San Francisco-based Bugcrowd, reported earlier this year that companies using its platform paid out more than $2 million between January 2013 and March 2016.
But for some companies and government agencies, inviting arbitrary strangers from across the internet to probe their security systems is a bigger risk than they’re willing to take, says Jay Kaplan, CEO of Redwood City, California, security firm Synack.
“When you open the doors, so to speak, and say, ‘come attack us and if you attack us, we’re going to pay you some money,’ and you don’t know who those people are and you have no auditability of what they’re doing, it brings a lot of risk into the equation, especially for conservative enterprises,” says Kaplan.
To allow those organizations to still get some of the benefits of a bug bounty program while maintaining discretion and security, Synack employs a network of freelance security researchers around the world, including programmers, engineers, and academic researchers. Once they're properly vetted, they're assigned to probe particular customers' networks based on their strengths and interests, and are rewarded for the vulnerabilities they uncover.
In addition to corporate clients in the health care, energy, finance, and other sectors, Synack recently inked a contract with the Department of Defense, focused on the department's more sensitive IT assets. Contracts with Synack and San Francisco-based HackerOne are together worth $7 million. In a three-week pilot "Hack the Pentagon" program, the Defense Department reportedly received nearly 1,200 bug reports from about 1,400 hackers, paying out a total of about $150,000.
And this week Synack also announced a $2 million deal with the Internal Revenue Service to protect systems on the irs.gov domain, the first crowdsourced security bug-hunting effort by a civilian federal agency. Those government clients generally require strict background checks and limit participation to U.S.-based hackers, says Kaplan, who founded Synack in 2013 with CTO Mark Kuhr after both had worked at the National Security Agency.
In the future, Synack might be able to supply hackers with security clearances for projects with potential access to classified data, though the current Pentagon assignments don’t involve systems with any such information, Kaplan says.
Critics of bug bounty programs have also complained that bounty hunters are only picking up the easy-to-find flaws while leaving more difficult vulnerabilities undiscovered. According to security firm High-Tech Bridge, nine in ten companies with public or private bug bounty programs have at least two high- or critical-risk vulnerabilities detected in less than three days of professional auditing, issues that were missed by the crowd.
Synack's approach is intended to supplement, not replace, a company's dedicated security specialists. But its human-led, machine-supported system offers a more systematic approach to the problem than traditional bug bounty free-for-alls, Kaplan says.
Synack, and other companies that manage crowdsourced security testing, including Bugcrowd, HackerOne, and San Francisco-based Cobalt Labs, also help alleviate the need for companies to manage bug search programs internally, a potential challenge for organizations already looking for solutions to an industry-wide shortage of security talent. Many of the managed crowdsourcing vendors also maintain their own points systems or other reputation rankings for participating researchers across multiple client engagements, letting them build their own profiles in addition to receiving monetary rewards.
Unlike a typical bug bounty challenge, Synack’s security researchers don’t simply connect from their own computers to client machines. Instead, they route their attempted hacks through a connection similar to a corporate virtual private network, which automatically logs their interactions with client systems.
“All of their traffic when they’re conducting their work actually routes through us, and then we have an ability to audit that traffic on an ongoing basis,” Kaplan says. Since the program is limited to approved researchers connecting through Synack’s computers, it reduces the risk of hackers taking advantage of a bounty program and penetrating a network for malicious purposes, he says.
The logging data isn’t just used for security reasons: It’s also used to let clients know what types of hacks researchers have tried and how much time they’ve spent probing the network. In traditional bug bounty programs, without such logging, companies offering rewards often have no way of knowing whether hackers haven’t reported bugs in a system because they’ve tried to find them and failed or simply because they haven’t looked for them, he says.
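Conceptually, the gateway Kaplan describes works like an audit-logging proxy: every request a researcher sends toward a client system passes through a middle layer that checks the researcher is approved, records who did what and when, and only then forwards the traffic. The following Python sketch illustrates that pattern; all class names, fields, and the echo-style `send` stand-in are illustrative assumptions, not Synack's actual implementation.

```python
from datetime import datetime, timezone


class AuditLog:
    """Append-only record of researcher actions (illustrative)."""

    def __init__(self):
        self.entries = []

    def record(self, researcher_id, target, action):
        self.entries.append({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "researcher": researcher_id,
            "target": target,
            "action": action,
        })


class AuditingGateway:
    """Routes researcher traffic to client systems, logging each request.

    `send` stands in for the real network call; here it just echoes,
    so the sketch stays self-contained.
    """

    def __init__(self, approved_researchers):
        self.approved = set(approved_researchers)
        self.log = AuditLog()

    def request(self, researcher_id, target, action):
        # Only vetted researchers may connect at all.
        if researcher_id not in self.approved:
            raise PermissionError(f"{researcher_id} is not an approved researcher")
        # Every interaction is logged before being forwarded.
        self.log.record(researcher_id, target, action)
        return self.send(target, action)

    def send(self, target, action):
        return f"forwarded {action!r} to {target}"


gateway = AuditingGateway(approved_researchers={"alice"})
gateway.request("alice", "client.example.com", "GET /login")
gateway.request("alice", "client.example.com", "POST /login")
# The log now holds two audited interactions, which a client could
# query to see what was tried and when.
```

Because the log captures every attempted interaction, a client can distinguish "no bugs reported because nobody looked" from "no bugs reported despite sustained probing," which is the coverage reporting described above.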
When bugs are found, Synack provides assistance in rooting them out. Though it doesn’t have access to internal source code, it generally can provide enough information to let clients discover and fix the underlying problem, Kaplan says. Clients can also notify researchers when they’ve updated particular components of their systems, letting them know good areas to probe for new vulnerabilities, he says.
Synack charges clients a steady subscription rate, rather than charging based on the number of bugs found, and separately pays out rewards to its researchers. While it’s possible that one client could have an unexpected burst of bugs, Synack’s total risk is essentially spread across its clients, similar to an insurance company, Kaplan says.
And clients see benefits in paying for a continuous evaluation of their systems, rather than for finding individual flaws, he says.
“They’re paying for validation that there are no new vulnerabilities in their system, and that positive validation is just as important and valuable to pay for as constantly paying for new vulnerabilities that come out,” he says.