Last November, Twitter stripped Richard Spencer, the high-profile white nationalist, of his blue checkmark, his verified status on the social media platform. His badge’s removal, coinciding with a purge of other alt-right account verifications, came just days after Twitter CEO Jack Dorsey tweeted that Twitter’s verification system was “broken,” in response to a growing user uproar about neo-Nazis being conferred this status marker. The checkmark had originally been intended to denote an account’s authenticity, but over time it came to be erroneously interpreted as an endorsement. “We failed by not doing anything about it,” Dorsey wrote in his tweet. “Working now to fix faster.”
Inside Twitter, though, some were surprised at how swiftly Dorsey had acted. The company, traditionally known for slow, at times enigmatic, executive decision-making about which users and content to allow on its platform, quickly worked to roll out new verification guidelines and enforcement measures. Yet, according to several sources who spoke on the condition of anonymity, the hasty decision to eliminate certain alt-right verifications–and, at other times, to suspend particular bad actors from the service altogether–came directly from Dorsey himself. “Jack said we should do this,” members of the team involved with removing the statuses told coworkers at the time.
That rationale, people familiar with the situation say, has become an increasingly common refrain at Twitter as Dorsey has thrust himself into addressing trust and safety issues, a dynamic that’s both promising and fraught, as Fast Company learned during reporting for our new cover story on Twitter and its struggles to eradicate digital pollution from its platform. While some insiders are heartened that Dorsey is finally giving product safety the attention and resources it deserves, others fear his personal involvement could create a policy and enforcement minefield that calls into question the impartiality of the system, especially if he continues to step into internal decisions related to individual accounts. “Jack was making those [kinds of] calls. They were very smart about it–you’re not going to find a paper trail,” says a source aware of the dynamic. “He was very good about providing certain items to [the] safety [team] and asking for action. He would tell somebody [on the team], You should check out this account–you should do something about it.”
A spokesperson for Twitter, in response to requests for comment for this article, referred Fast Company to the company’s recent tweetstorm from Dorsey on the topic of product safety, where he broadly describes how the company is working toward building a “systemic framework to help encourage more healthy debate, conversations, and critical thinking.” The spokesperson also sent links to a series of blog posts on recent rule changes, such as a December post regarding new policies around hateful conduct.
Pressure has mounted on technology companies to cleanse their platforms of abuse and toxicity following the 2016 U.S. presidential election, as it has become more apparent how much they were leveraged and manipulated for ill. Although CEOs are expected to ramp up their self-policing efforts, they simultaneously face the complicated task of not injecting themselves into the policing process itself, lest they fall down the slippery slope of trying to arbitrate all forms of acceptable expression online.
It’s a challenge that leaders of services ranging from Medium and Tumblr to Reddit and YouTube have been grappling with for much of the last year as public scrutiny and cries for regulation have hit a fever pitch. Just last month, Facebook CEO Mark Zuckerberg told Recode he felt “fundamentally uncomfortable sitting here in California in an office making content policy decisions for people around the world . . . [The] thing is like, ‘Where’s the line on hate speech?’ I mean, who chose me to be the person that did that? I guess I have to, because of [where we are] now, but I’d rather not.”
Whether or not he’s comfortable in the role, Dorsey seems to have realized the necessity of being less ambivalent about what he’ll allow on the service he created. But some feel the dynamic could lead to controversy, if not valid questions over how much his personal beliefs could play into platform safety considerations.
The potential consequences of this setup are manifold. At its core, three sources familiar with the situation contend, is the risk that Dorsey makes requests, implicitly or explicitly, to the safety team to suspend a user or remove a verification, but doesn’t first push to develop a corresponding policy or enforcement mechanism to justify that decision–which could either water down Twitter’s overall community rules or, worse, make action impossible to scale consistently without his personal input. “The moment you’re catering to the request of a CEO–whether by making exceptions in enforcement or by taking action outside of [what] the policy [calls for]–everything goes downhill,” says the source aware of the dynamic.
To some company observers, however, Dorsey’s more direct approach is a welcome change. Not long ago, multiple sources say, Dorsey was far less inclined to weigh in on comparable content decisions. From the early days of the company, he saw the service as a vehicle for free expression and embraced First Amendment-like values, an ardent believer in Twitter being the “free-speech wing of the free-speech party,” as it became referred to internally around 2012.
Even as the company witnessed a parade of red flags on the platform–horrific cyberbullying incidents, racist and misogynistic trolling, extremist content such as ISIS beheadings–Dorsey, who served as Twitter’s chairman before returning as its chief executive nearly three years ago, remained for the most part committed to these ideals. Meanwhile, his predecessor as CEO, Dick Costolo, along with his cofounder and fellow board member, Ev Williams, were growing disillusioned with Twitter’s absolutist approach to free speech, sources say. “[By early 2015], Ev and Dick were in one camp, whereas Jack was more in the ‘let speech be free’ camp,” says a source familiar with the board’s thinking at the time. “The stance Jack took [then] was very different from the stance he’s taking now. He felt uncomfortable and worried that we might end up squelching too many voices as a result of anti-abuse efforts.”
Dorsey, this source adds, was particularly concerned about “false positives,” meaning speech on the platform wrongly deemed inappropriate or abusive and thus unjustly inhibited in some way. At one point, he wondered aloud whether the company could somehow develop a metric to measure tweets mistakenly suppressed on the service to determine how many impressions were stifled as a result, which he described, according to this source, as tantamount to an “insult to our customers” and their right to an open platform.
Costolo, on the other hand, was much more willing to be the decider-in-chief and involve himself in judgments on which users and content to allow on the platform, what sources describe as a call-it-like-I-see-it approach. For instance, he personally pushed for the permanent suspension of Chuck Johnson, the right-wing troll.
As reported in our cover story, when Dorsey took over CEO responsibilities from Costolo in the latter half of 2015, Dorsey’s approach to these matters led to internal strife. Dorsey was increasingly forced to reconcile his free-speech views with alarming patterns of behavior on Twitter, and frequently had to serve as an arbiter in disputes between the internal teams responsible for crafting policy and those expected to enforce it. A source intimately involved in these interactions describes them as “emotionally draining,” with Dorsey frequently “wishy-washy” in resolving these squabbles. “You’d try to block [a user or piece of content] and [another team] would be like, ‘No, that’s not abuse–that’s freedom of expression,'” this source recalls. “It was a total nonsense molasses machine.”
It wasn’t until July 2016, well into the U.S. presidential election, that Dorsey’s views started to change, after alt-right trolls on Twitter, incited by white-nationalist provocateur Milo Yiannopoulos, attacked Saturday Night Live star Leslie Jones. Dorsey personally weighed in on the issue, declaring an “abuse emergency” inside the company that eventually led to more safety features and Yiannopoulos’s suspension. (Earlier that year, he also contributed to an internal debate around removing Yiannopoulos’s verification status, a precursor to actions and policy moves the company would later take against Richard Spencer and other alt-right supporters.) These enhanced safety efforts picked up in the aftermath of the election, particularly as a number of incidents unfolded in the following year–from revelations around election interference to the tragic events of Charlottesville–and put more pressure on Dorsey to act.
Since the election, the company has suspended a slew of controversial figures from the platform. Sources say Dorsey was involved in a number of these decisions–it wasn’t uncommon for him to express his desire for action directly to Twitter’s trust and safety VP Del Harvey–especially when they affected prominent users. He was engaged with the process of suspending political agitator Roger Stone from the platform in October 2017, for example, due to Stone’s vitriolic tweet storm targeting several CNN anchors, a knowledgeable source says. (A spokesperson for Twitter referred Fast Company to tweets from the company’s safety account explaining Stone’s suspension.)
If anything, scores of sources I spoke to for this story were still baffled as to why it took him until 2018 to acknowledge the scope of Twitter’s problems publicly–as he did in March in a tweeted mea culpa–let alone invest more heavily in safeguarding the platform. A number of these sources even said Dorsey’s prior lack of candor on these matters, along with his years-overdue attempt to address them, was the central reason they quit their jobs at the company.
As Twitter continues to augment its approach to safety, it has faced pushback from those affected, including lawsuits from banned users (such as Chuck Johnson) who claim their free speech has been infringed upon as a result. Regardless of the merits of these lawsuits, to some observers, it’s a sign that the company needs to do a better job of institutionalizing its efforts, so it can react to platform problems in a less ad hoc way and ideally distance Dorsey from the process.
“Honestly, for a guy who is splitting his time as the CEO of two big companies [Twitter and Square, Dorsey’s payments company], the idea that he’s weighing in on individual accounts . . . Jack got far too deep in the weeds,” says a former employee familiar with these issues who departed the company last year. This source calls it unsustainable to have “a CEO who essentially has to sign off on [certain] account suspensions in other parts of the world.”