
Here’s when you can trust Zoom, and when you shouldn’t

Whether you can be comfortable with the controversial videoconferencing phenom depends on your work, your secrets, and how much you believe its promises.

[Photo: Chris Barbalis/Unsplash]

In the midst of a global pandemic, the nine-year-old videoconferencing service Zoom has skyrocketed into general awareness. “To zoom” is now a verb. But alongside its rising stock price and a twentyfold increase in usage between December 2019 and March 2020, with 200 million daily users now conducting meetings worldwide, a lot of other verbs have been used, with less affection, about the company’s software quality, installation methods, security, ties to China, and privacy policies and actions. At the same time, technologists have hailed the company’s ability to deliver reliable service despite the blistering increase in usage—a growth rate no Internet firm has ever had to contend with under normal circumstances—with the lion’s share of it producing no new revenue.


Tech companies have always had a way of snatching defeat from the jaws of victory, and Zoom appears to rest on a razor’s edge. At least a dozen bugs, design flaws, and other issues were exposed over just one week in late March, as Zoom faced increased scrutiny from security researchers and privacy advocates, not to mention an ever-increasing gimlet eye from regulators and elected officials.

In response, Zoom issued a flurry of updates and changes—and several apologies, something notably absent when earlier security flaws were exposed in July 2019 and January 2020, even though those were repaired too. Fast Company’s Jared Newman has a rundown of the most serious remaining issues and challenges.


The New York City Department of Education abruptly told principals on April 5 that Zoom was now off-limits to educators, because of security risks and the problems caused by “Zoombombing,” another new word for the coronavirus age. With Zoombombing, trolls and bigots join sessions via public meeting URLs or ones passed around in venues such as Discord and 4chan, and bombard the session with pornography, expletives, white supremacist or anti-Semitic images or audio, and much more.

Internet culture, not Zoom, is arguably to blame for Zoombombing. One of my kid’s teachers was Microsoft Teams-bombed on Tuesday, for instance. However, Zoom’s initial choices and slow response to abusive participation are part of what fed the Zoombombing trash fire. Its CEO, Eric Yuan, told NPR on Wednesday that the mostly business-focused company hadn’t considered harassment an issue until its explosive growth. “I never thought about this seriously,” he said.

From Yuan’s earliest engineering work on Zoom, the company has focused its efforts on making sure video works in every circumstance, especially with mobile and low-bandwidth environments, says Janine Pelosi, Zoom’s chief marketing officer. Zoom hasn’t buckled under the strain of new users, and its architecture has made it resilient to high levels of use. But Pelosi adds that the company is working rapidly to reset and shift to deal with the flood of users who have no technical resources and don’t come from a place where corporate security practices predominate. “People weren’t putting their meeting IDs in Twitter before this,” she notes.




Zoom has shown its commitment by rapidly overhauling security settings: it has made passwords mandatory on free and single-user paid accounts, and just days ago it added a Security button to its app that consolidates a number of existing options for controlling public meetings and adds new ones.

The question still arises, though: Has Zoom’s sloppy work outside of video dependability, along with its bad choices, weak disclosure, and reluctance until several days ago to acknowledge flaws in its approach, endangered its future even as it has accumulated massive numbers of new free and paid users? Will existing alternatives, such as Cisco’s Webex (with substantially the same features) or Microsoft Teams (already in wide use in business and education), start to absorb its user base?

Noted security expert Bruce Schneier wrote recently, in a blog post about Zoom security risks and encryption models, “you should either lock Zoom down as best you can, or—better yet—abandon the platform altogether” until it’s verified that Zoom has overhauled its encryption and security model.

That may be impossible for tens of millions of people whose work, government branch, or school has picked Zoom as their platform.

Can you trust Zoom the company? And can you configure the Zoom service and software to fit your comfort level? The answers require both some understanding of Zoom’s security issues and an analysis of your company’s needs.

What’s at risk

When it comes to trust, the main issue is whether you can use Zoom without fear that your chats, audio, and video from meetings will be disclosed: either while those sessions are in progress, through the data being intercepted and decrypted, or after the fact, through recordings becoming accessible to unwanted parties.


Zoom’s current encryption architecture makes those concerns valid, even if the risk is low for most users. The company claims that it uses end-to-end (E2E) encryption to protect meetings, but a report from Citizen Lab, reporting by The Intercept, and Zoom’s response on its blog make it clear that the system doesn’t meet a well-accepted definition of E2E. Zoom CMO Pelosi says, “There was a combination of semantics and us not being as clear as we could have been about how encryption was really run.”

In systems such as Apple’s iMessage, the enterprise version of Cisco Webex, and the independent, privacy-first Signal, various methods are used to ensure that encryption keys are never stored on a server nor accessible to the company providing the service. Instead, they are generated on a device—sometimes via embedded hardware, more typically within an app—and stored only there.

Such secure methods of exchanging keys among parties prevent three problems: The system operator can’t peek at users’ data, preventing both intentional examination and employee misconduct. The system can’t easily be attacked by hackers who might want to intercept user messages and video. And legitimate and illegitimate government efforts to snoop are deterred, because the design of the system prevents interception. The latter is a huge point of contention in both democracies such as the U.S. and authoritarian countries such as China.

Zoom’s system instead creates an encryption key on a Zoom server, which is then distributed securely to participants in a meeting. That’s a terrible design for E2E to begin with: Handing out keys like candy just makes things easier for those you don’t want to help, be they rogue employees or government spies. (Enterprise Zoom users have the option to use hardware and software that generates meeting keys within their network.)
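The distinction can be made concrete with a toy Diffie-Hellman-style key agreement, the general class of technique that device-generated designs rely on. This sketch is illustrative only, uses deliberately insecure parameters, and is not Zoom’s actual protocol: the point is simply that each participant derives the same key locally, so a relay server that only ever sees the public values never learns the key.

```python
# Toy Diffie-Hellman-style exchange (NOT real cryptography; parameters
# chosen for readability, not security). Illustrates why keys generated
# on devices never need to exist on the server relaying the traffic.
import secrets

P = (1 << 127) - 1  # a Mersenne prime; fine for a demo, far too weak for real use
G = 5               # public generator, known to everyone including the server

a = secrets.randbelow(P - 2) + 2  # Alice's private value; never leaves her device
b = secrets.randbelow(P - 2) + 2  # Bob's private value; never leaves his device

A = pow(G, a, P)  # public values: safe to send through any relay server
B = pow(G, b, P)

shared_alice = pow(B, a, P)  # each side computes G^(a*b) mod P locally
shared_bob = pow(A, b, P)

# Both sides hold the same secret, yet the server only ever saw A and B.
assert shared_alice == shared_bob
```

A server-generated design, by contrast, necessarily holds the meeting key itself before distributing it, which is precisely what makes it available to rogue employees or legal demands.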

But it’s particularly problematic that Citizen Lab found some sessions involving no Chinese participants had keys generated by servers located in China, which were also involved in managing some video sessions. As Citizen Lab noted in its report, “Zoom may be legally obligated to disclose these keys to authorities in China.” Zoom said immediately afterward that this use of Chinese servers was an error caused by scaling its systems to balance load globally, and that it has fixed the problem. Zoom’s Pelosi reiterated that the company had removed all servers it operates in China from its global rotation. She says that of 233 million participants in meetings on a recent day before that change was made, “only a very small subset” wound up unintentionally routed through China, and those instances were random in nature.

Pelosi also says that Zoom destroys the session key immediately after a meeting is complete. But the company’s architecture means that the encryption key remains available to various parts of the server software during the session. Zoom uses the key to let participants join mid-meeting, as well as to connect to some of its cloud and external services, such as cloud-based recording—which decrypts the session in order to save video, audio, and text chats—and dial-in calls from the regular phone system. (Cloud recordings aren’t vulnerable per se, but some Zoom users have saved their recordings of meetings onto internet-indexable storage, making often intimate or proprietary meeting video available to anyone who can figure out a search pattern. That is not a security flaw on Zoom’s part.)


Some security experts who have evaluated Zoom’s security issues aren’t overly concerned about them. Steven Bellovin, a security guru whose experience dates back decades, wrote on his blog, “apart from Zoombombing, the architectural problems with Zoom are not serious for most people. Most conversations at most universities are quite safe if carried by Zoom. A small subset might not be safe, though, and if you’re a defense contractor or a government agency you might want to think twice, but that doesn’t apply to most of us.”

Zoom has admitted its mistakes and plans to revamp its entire encryption approach, including involving outside experts and advisors. Pelosi says that the company is already moving rapidly toward a more secure encryption key algorithm that is resistant to determined attackers who could crack the current method employed. She says a more detailed road map will be available within about 45 days.

On Wednesday morning, former Facebook chief security officer Alex Stamos, now an independent consultant and a researcher at Stanford, disclosed he had agreed to begin offering advice under contract to Zoom on its redesign. He noted appreciatively, “To successfully scale a video-heavy platform to such a size, with no appreciable downtime and in the space of weeks, is literally unprecedented in the history of the Internet.”


While independent and academically connected security experts are usually skeptical about company promises when it comes to repairing fundamental flaws, Zoom’s apologetic attitude and road map for improvement seem to have swayed many people. Tod Beardsley, director of research at corporate security consultancy Rapid7, wrote earlier this week, “The engineers, marketers, and leadership at Zoom are neither dumb nor evil. You can judge Zoom on its response to security issues, more so than on the security issues themselves, within reason.” (Rapid7 uses Zoom, but Zoom isn’t a Rapid7 client.)

The Chinese connection raises more concern. Chinese authorities have little interest in allowing protected conversations among their citizens or between them and the rest of the world, and there is no transparency about how they require or coerce Chinese companies, or firms doing business within their borders, to comply with government demands. Citizen Lab reported that Zoom has at least 700 employees in China working on its products.

While Zoom has responded to much of what Citizen Lab has reported, it hasn’t explained how it performs a dance with the Chinese government that lets it maintain the integrity of software that’s developed there and used in the rest of the world. Pelosi says that Zoom doesn’t work with the Chinese government differently than it works with any other country, but she didn’t provide additional detail about its operations. Many software and hardware companies have employees and contractors in China, so this isn’t a unique issue for Zoom. But the massive global use of the product means the company should more explicitly spell out its relationships and restrictions.


Several changes made in the last few days should also mitigate the safety issue of Zoombombing and other disruptions. In addition to requiring a session password, Zoom has added friction in several ways that let hosts control sessions better and more simply while deterring trolls and harassers. All Zoom meetings now start by default with participants in a virtual waiting room, in which a host can see who wants to join. Meetings can be locked with a single click to prevent new participants from joining. Hosts can also block people from using a Zoom web app to join a session without registering a free account with Zoom. While that seems minor, some Zoombomb trolling relied on automating the use of a web app to rejoin a meeting with a new alias repeatedly.

Education hosts will welcome the option to lock meeting attendees’ names so they can’t be changed during a session. Some students were making free use of name changes in the way kids have long spelled dirty words upside down on LED calculators or written “booger” (or worse) on a blackboard.

The proof is yet to come

I would normally expect security and privacy experts to advise uniformly that people should stay away from a product that has had as many flaws as Zoom, even though the company has fixed most of them and expressed its intent to deal with the rest. Users have plenty of other options: Competing products such as Webex offer the same benefits for free users, such as up to 100 participants with video in a single meeting, and have similar pricing for paid tiers.

Yet Zoom’s ability to keep its service working through skyrocketing demand and its ease of use and access seem to have led to many people cutting the company slack for the time being. Stamos, its new adviser, noted in his blog post, “The morning [Zoom CEO] Eric [Yuan] called me (and most mornings since) there were five simultaneous Zoom sessions emerging from my home, as my three kids recited the Pledge of Allegiance in their virtual morning assembly, my wife supported her middle-school students, and I participated in a morning standup with my Stanford colleagues.”

To repeat the question at the outset: Can Zoom be trusted?

Human-rights activists, companies engaged with sensitive intellectual property, public officials discussing critical points of security and public safety, and those in legal, medical, and financial industries who have specific regulatory demands should likely avoid the platform until it implements true end-to-end encryption. But most people and organizations in those categories were unlikely to be using Zoom in the first place.


As Bellovin’s analysis puts it, “What it boils down to is this: Exploiting the lack of true end-to-end encryption in Zoom is quite difficult, since you need access to both the per-meeting encryption key and the traffic.” That means governments and very determined parties might be able to exploit this potential weakness, but not casual hackers.

With tens of millions of hours of meetings every day on the platform, only targeted users should be concerned.

Matthew Green of Johns Hopkins Information Security Institute offers a very human reaction to the current state of affairs: “Many people are doing the best they can during a very hard time. This includes Zoom’s engineers, who are dealing with an unprecedented surge of users, and somehow managing to keep their service from falling over. They deserve a lot of credit for this. It seems almost unfair to criticize the company over some hypothetical security concerns right now.”

Zoom ultimately has to fix its existing flaws, clarify its position about work in China, and get ahead of future problems before others discover them. It’s promised a lot of new plans and change within just a few weeks. Pelosi, its CMO, says, “We want to make sure we can live up to everyone that’s using us and the responsibility we have.” If the company can’t meet that bar, that’s where trust should begin to waver, and use of the service should be reevaluated.


About the author

Glenn Fleishman is a veteran technology reporter based in Seattle who covers security, privacy, and the intersection of technology with culture. Since the mid-1990s, Glenn has written for a host of publications, including the Economist, Macworld, the New York Times, and Wired.
