
Surveillance expert Chris Gilliard reflects on 2020’s racial justice protests, the hypocrisy of tech companies’ commitments, and where we are one year later.

‘I don’t think you can have an anti-racist tech company at scale’

[Photo: courtesy of Chris Gilliard]

BY Katharine Schwab · 7 minute read

In summer 2020, protests erupted across the U.S., sparked by the police killings of George Floyd, Breonna Taylor, Ahmaud Arbery, and other Black Americans. Within the tech industry, many leaders made public statements, financial commitments, and policy changes meant to improve equity and inclusion within their walls—and in the products they peddle.

To commemorate the first anniversary of these protests, Fast Company partnered with The Plug, a publication that covers the Black innovation economy, to examine what those commitments are, what they have achieved—and how much work still remains. (You can see the resulting data visualizations and first-person testimonials from Black employees, entrepreneurs, and customers here.)

For Chris Gilliard, a surveillance expert who is currently a Harvard Kennedy School Shorenstein Center Visiting Research Fellow, it’s clear that tech companies’ business practices don’t match the DEI platitudes they preach.

The following interview has been edited and condensed for clarity.

Fast Company: What was your reaction last summer to seeing the outpouring of “Black lives matter” statements from tech companies?

Chris Gilliard: It felt like after the first few companies, after those dominoes fell, everyone felt as if, or had been advised that, they needed to come out with some kind of statement. And it got to the point where it was sort of absurd, where there were chewing gum companies and mouthwash companies and all these places asserting, “We believe that Black lives matter.” Tech companies whose practices [and] core functions clearly indicate they do not [believe that] were coming out with these statements.

If you take YouTube, for instance, it’s mutually exclusive to host Nazi content and to affirm Black lives. You can’t have it both ways.

YouTube is a great example. Are there other companies where you think their business practices and the products that they sell are working against Black people?

The usual suspects: Amazon, Facebook, Twitter, Instagram, TikTok. We can just go down the line with these companies. Amazon is doing everything in their power to kill unionization. TikTok is well known for down-ranking certain content. They at one point even pretty explicitly said, if someone’s disabled or fat, we’re going to deprioritize their content.

There’s a lot of crossover between those kinds of practices and the marginalized populations—like specifically Black [people]—that they disproportionately affect.

In the case of Facebook, one of the main things they do is promote racist groups and racist activities. They can say what they want about how much they don’t want that material on their platform or what they’re doing to get it off. Some of their own research contradicts that, but this is the nature of a corporation.

But it can’t be both things. If you are a company, if the left hand is promoting Nazis or selling white supremacist gear, and the right hand is saying Black lives matter, those are inconsistent. It can’t be both. I think the more we reject that, the better off we’ll all be.

As part of the Black in Tech project, I’ve been looking at what these companies have done and what they’ve pledged to do. They’re throwing a lot of money at this in three main buckets: education, racial justice non-profits, and Black-owned businesses. I’m curious what your take is on the sheer amount of money that they have decided to invest. Does this let them off the hook? How much does this do? What corporations are in it for just looking good? Is it just a PR thing, versus trying to actually make real change?

I mean, money is great. But on the other hand, these are among the richest companies that have ever existed in the history of the world. So what seems like a staggering number—let’s say someone gave $50 million, $100 million—is not even couch cushion money to them, and in some ways it generates a lot of goodwill. And so it’s money well spent. But the other thing is they haven’t changed their practices. Google is busy firing ethicists, high-profile Black women. Facebook is under investigation for being a serial offender in terms of creating an anti-Black workplace.

Money…is almost immaterial to them on the scale at which they operate. I’m sure those organizations appreciate that money. And it’s not to say that it’s not possible that a lot of good is going to come with that funding. But they haven’t changed the core of what they are or how they operate. They spend all day spewing toxins into the air and creating poisonous environments. And then at night they try to undo a tiny bit of that damage. If you spend all day pumping poison into the air and the water, and then at night take some of it out, it’s a net loss. It’s still not good. At their core, a lot of these organizations’ function is in many ways anti-Black. A lot of these initiatives are entirely insufficient.

Is an anti-racist tech company simply incompatible with our current capitalist system? Because of the nature of capitalism and the fact that its whole point is to grow and exploit, can you even have corporations that are anti-racist?

I don’t think you can. More specifically, I don’t think you can have an anti-racist tech company at scale, because that scale, and growth at all costs, is the nature of what we see.

These business models are built around surveillance and tracking, whether it’s on the internet or in the physical world. How does that play into your views around these companies fundamentally being anti-Black?

A consistent and I think true maxim in surveillance studies is that surveillance harms fall disproportionately on marginalized and vulnerable populations. These companies are all surveillance companies, whether that is through cameras and facial recognition or tracking people’s behavior on the web. So to use concrete examples, the ways in which undocumented people are tracked by their use of social media, the way that technologies are used to track down and deport people, the way they’re used to track down and incarcerate people, the way they’re used to deny people the ability to rent an apartment, the way they’re used to target people in a variety of ways—that falls often on the most vulnerable.


Living in Detroit, I’ve seen a couple of high-profile cases of facial recognition falsely implicating Black men in crimes. And there may be some cases of that with white folks. I’ve not heard of them yet, but also, those are the outliers. They’re not the ones that are most likely and most common.

Part of the bigger problem I think is that our version of what it is to use technology is driven by surveillance. It’s driven by tracking people in some form or another, aggregating data and selling it off, or letting other people look at it.

The ways in which the method of surveillance is used against people is going to fall disproportionately on people who are less powerful, less wealthy—like Black and Brown folks, like immigrants, undocumented people.

To connect the two kind of threads of this conversation, what do you see as the connection between homogeneous, very white teams within these companies and business models that disproportionately harm marginalized groups?

The example I use is with Zoom. I don’t know what the Zoom design team looked like, but I do know that the CEO of Zoom came out, maybe halfway through the pandemic and said, “Oh, we never imagined Zoom bombing. We never imagined this product we designed would be used for targeted racist and misogynistic harassment.”

I don’t know what the Zoom team looks like, but if I take him at his word, based on his claim, he didn’t have like a lot of non-white people or women on his team, or he didn’t listen to them. His team probably did not look like people like me.

It’s hard for me to even imagine that someone made software where anyone can pop in and never thought, how will this be used to harm people? Because that is unfortunately the existence of so many people on the web. 

The other thing is that these companies have shown that they don’t really want people there who are going to tell them these things—whether they don’t hire someone because of fit, they only take people from certain colleges and universities that have very historically biased admissions, or they get rid of them when they do the job that they’ve been hired to do, whether that is in diversity, equity, and inclusion programs, or pointing out the flaws in their systems.

So I think they feed into each other. Many of these companies have created very hostile and toxic environments for anyone who doesn’t look like the guys who founded the company. And when they do bring those people on, they don’t support them and don’t listen to them. That’s repeated across so many of these companies.

Part of my goal with this project is to draw that line really clearly between toxic work environments and products that aren’t inclusive, that don’t consider how they could be harmful—all of these things that we’ve been talking about.

At the security camera company Verkada, people at the company were using the product in house to harass women who work there. It’s disgusting, but it’s not shocking that there are larger problems with the company and how [the technology is] being used to harass people on a larger scale, because this is the environment at the company. It’s the soil in which this thing was grown.

If it’s created in a toxic environment, the thing that is created is going to be toxic.

Experience the full Black in Tech project here.



ABOUT THE AUTHOR

Katharine Schwab is the deputy editor of Fast Company's technology section. Email her at kschwab@fastcompany.com and follow her on Twitter @kschwabable

