ISIS and other militant groups use social media to disseminate propaganda, recruit, and organize. In the wake of deadly shootings in Paris last month and California last week, both of which have been treated as terrorist attacks, U.S. lawmakers have moved to enlist technology companies to assist law enforcement in fighting terrorism.
“I will urge high-tech and law enforcement leaders to make it harder for terrorists to use technology to escape from justice,” President Obama promised while addressing the nation from the Oval Office on Sunday night.
On Monday, Rep. Mike McCaul (R-Texas) proposed a special congressional committee to focus “on security and technology challenges in the digital age,” which would include technology company executives as well as representatives from academia, law enforcement, and civil liberties groups. Sen. Dianne Feinstein (D-Calif.), meanwhile, promised to reintroduce legislation that would require social media platforms like Twitter, YouTube, and Facebook to report online terrorist activity to federal authorities. The measure had been introduced as part of another bill, but was not included in the final Senate version.
These efforts in Washington have put a spotlight on the debate over the role of technology companies in aiding law enforcement’s investigation of terrorist activity. Take encryption, which FBI and police officials complain makes it difficult to collect intelligence about terrorist activity. One proposed solution is to have companies include “back doors” in their products that would allow law enforcement to break encryption while investigating a threat. Tech industry leaders are opposed to this, calling it an infringement on their customers’ privacy and arguing in a letter to Obama that such a measure would make everyone less safe. “Encryption protects billions of people every day against countless threats—be they street criminals trying to steal our phones and laptops, computer criminals trying to defraud us, corporate spies trying to obtain our companies’ most valuable trade secrets, repressive governments trying to stifle dissent, or foreign intelligence agencies trying to compromise our and our allies’ most sensitive national security secrets,” they wrote. “Introducing intentional vulnerabilities into secure products for the government’s use will make those products less secure against other attackers.”
Blocking terrorist accounts is also more complicated than it sounds, as evidenced by one pro-Islamic State Twitter account that has resurfaced in 335 versions. Shut down one account, and another simply materializes.
Reporting terrorism efforts to authorities, as Senator Feinstein’s legislation would require, could lead to censorship, free speech advocates argue. “Social media companies shouldn’t take on the job of censoring speech on behalf of any government, and they certainly shouldn’t do so voluntarily,” the Electronic Frontier Foundation’s international director, Danny O’Brien, said in a statement to Fast Company. “These kinds of speech restrictions set online platforms on a very slippery slope. Who defines ‘terrorism’? Does Facebook, for example, intend to enforce its policies only against those that the United States government describes as terrorists, or will it also respond if Russia says someone is a terrorist? Israel? Saudi Arabia? Syria? It’s particularly worrisome that we’re not even talking here about speech that’s actually been found unlawful.”
On Sunday, Hillary Clinton anticipated this argument. “You are going to hear all the familiar complaints: ‘freedom of speech,’” she said in an hour-long speech in which she vaguely called for tech companies to block websites and content from terrorists. “We need to put the great disrupters at work at disrupting ISIS,” she said.