Walter Geer describes himself as one of the few Black creative directors in the ad industry. An executive creative director for experience design at the marketing agency VMLY&R, Geer is also a frequent and popular poster on LinkedIn, where he discusses issues faced by Black people and other people of color in the industry.
“I do this because I’ve been thrust into this role not by choice—I’ve become a voice for people of color in advertising,” he tells Fast Company. “It’s important to me because progress doesn’t happen unless we have these conversations.”
But Geer says LinkedIn hasn’t always felt welcoming to these discussions. He and other Black users of the platform have repeatedly had issues with material they post not appearing on the site or being taken down. In one case, Geer says, a video he uploaded to the site seemed to simply disappear, although LinkedIn says it was never properly uploaded because of a technical error on the company’s end.
He and other Black creatives, particularly in the ad industry, told Fast Company they see signs the platform is biased against them. They point to posts about being Black in the workforce and other racial issues that fail to publish or get removed, content about race and diversity that receives less traffic than they expected, and cryptic messages from LinkedIn customer support representatives about their experiences. They say the platform, which since last summer’s protests after the murder of George Floyd has increasingly become home to discussions of race in the workplace, seems to be censoring conversations they consider critical to improving their professional lives and those of their colleagues.
LinkedIn representatives are adamant the issues Geer and others experienced aren’t the result of any kind of systemic bias, and instead stem from a mix of content-neutral technical bugs, the popularity among other users of particular posts, and the occasional moderation error. They also point to examples of other posts by the users, including some that discuss racial issues, that didn’t have these problems. “End of the day, LinkedIn is a members-first company,” says Paul Rockwell, head of trust and safety at LinkedIn. “So making sure that we’re creating a community where members can feel supported, and they can count on a safe, constructive professional environment, is of paramount importance to us.”
But the specific issues that users experienced contribute to a larger perception problem, tied to the reality that social media sites are governed by mysterious and opaque algorithms and moderation processes developed behind closed doors. It’s not limited to LinkedIn: Facebook users also complain when their content is taken down by the network for violating its often complex rules. Black TikTok users have expressed fears that their content has been shadowbanned, a term that refers to surreptitiously limiting the reach of a post, something TikTok has generally denied. Conservatives have also expressed concerns about being shadowbanned by Twitter, although that company also says it doesn’t engage in the practice.
“What I haven’t seen platforms do on the whole is adopt systems that really meaningfully involve user communities in those processes,” says Sarah Myers West, a researcher at the AI Now Institute at New York University, who has studied content moderation systems.
Bugs and errors
Despite LinkedIn’s insistence that its systems aren’t biased, the platform has had some difficulty convincing users that’s the case.
Like Geer, Andrew Bailey, who has worked in the ad industry for eight years, says that he’s had issues posting content to LinkedIn—particularly, he says, when he’s been discussing racial or diversity, equity, and inclusion issues. At times, he says, he’d receive a message in LinkedIn’s iOS app saying “posting failed”; in one instance, he was able to capture a screenshot and share it with Fast Company.
“I started to notice my posts would fail if I was talking specifically about Black business people or Black people in advertising, especially if I used any of the DE&I hashtags,” he says.
According to LinkedIn, that was just a coincidence. A message he received from the company and then shared with Fast Company said the error was “the result of technical issues” and “not in any way related to the meaningful content” he was trying to post. Tanya Staples, head of trust product at LinkedIn, similarly tells Fast Company that both Bailey and Geer received errors before the content they tried to post ever fully made it to LinkedIn’s servers, meaning that the algorithms that analyze the content hadn’t yet kicked in. The bug that affected Bailey was fixed by August 12, and the one that affected Geer’s video post was fixed by July 16, according to the company.
“It’s obviously a huge issue for us when our members can’t post content to the platform,” Staples says. “We want our members to be engaging in conversations as much as humanly possible on the platform, so we take those issues very, very seriously to make sure our members can upload their posts and make comments.”
Bailey also had at least one post removed by LinkedIn after he published it: a repost of a message from Geer accusing LinkedIn CEO Ryan Roslansky of a lack of transparency around this issue, to which Bailey added his own message calling Roslansky a “coward” and a “prick.” He received a notice that the post violated LinkedIn policies but says he’s used strong language in other posts without similar issues.
“I post swear words all the time in other things that have nothing to do with DE&I and nothing to do with Black people or people of color,” he says. “That never gets blocked.”
In general, Rockwell says LinkedIn’s professional emphasis means that it often has stricter content standards than other popular social networks—after all, it’s easy to find people calling Facebook’s Mark Zuckerberg and Twitter’s Jack Dorsey harsher names than “coward” and “prick” on their platforms. “We think that it contributes to a healthier environment where people are free to engage in tough conversations, but they’re doing it in a constructive way, trying to learn and share life experience and see what they can do to support each other,” he says.
More than just a résumé site
Last year, The New York Times reported that even as LinkedIn has become an increasingly important place for Black professionals to connect, some users’ experiences had similarly led them to believe that the platform was silencing them. Their experiences have historical echoes: Channels for professional networking haven’t been open to everyone. Venues like golf courses and social clubs excluded people based on race, religion, and gender. Access to professionally beneficial real-life social networks can still be heavily correlated with race and class. People still have difficulty discussing race, gender, and other facets of their identity linked to discrimination with colleagues, even when those facets affect their experiences in the office.
For its part, LinkedIn has said it’s set out to be an inclusive platform. Rockwell describes the company as explicitly “anti-racist.” Like other big tech companies, the Microsoft-owned social network publicly embraced this role after the murder of George Floyd and subsequent protests last year. “In 2020, we committed to identifying and mitigating systematic inequality,” says Staples. The company created a discussion series called #ConversationsForChange addressing “critical topics that have been kept out of professional settings for too long.” It recently added a feature making it easier for users to share their pronouns and promoted Black voices on the social network through company blog posts, newsletters, and streamed video conversations.
But the company was pulled into controversy last year when employees at a town hall-style meeting about race and diversity made anonymous comments considered racially insensitive. Even CEO Ryan Roslansky later referred to the comments as “appalling.” The company said it would no longer allow anonymous comments at such meetings, and Roslansky later blogged about company efforts including training employees and offering free online courses for users on allyship and anti-racism.
“We recognize that this is a journey, and we know we have work to do ourselves, but we are deeply committed, investing in inclusive hiring and retention practices we know work, training every manager at the company to be an inclusive leader, and holding all of ourselves accountable with documented commitments,” he wrote.
Social media companies have come under fire from civil rights activists over racial issues: For years, the online civil rights group Color of Change has criticized tech leaders for failing to do enough to combat hate on their platforms and called on them to hire more diverse staffs and evaluate their products for potential discrimination.
“If LinkedIn’s vision is to create economic opportunity for every member of the global workforce, its platform should welcome conversations about what’s standing in the way of that vision,” Jade Magnus Ogunnaike, senior director of media, culture, and economic justice at Color of Change, said in an emailed statement to Fast Company. “The company must empower Black professionals to bring their full selves to the platform. That means they should not be blocking Black LinkedIn members from speaking out about racial justice and discussing the imperfect realities of their professional lives, including the discrimination they face.”
The allegations of bias, which have received coverage in the business and tech press as well as the national media, raise questions about how tech companies can adequately demonstrate their anti-racist credentials amid an ongoing discussion about AI bias, especially against Black people and other people of color, and the racial homogeneity of the software industry.
Questions of moderation
Like other sources Fast Company spoke to for this story, diversity recruitment consultant Peter Ukhurebor has turned to LinkedIn to discuss racism in the ad industry. He has also seen posts taken down when he discusses racial issues and, in particular, when he calls out ad industry leaders. According to LinkedIn, however, his posts weren’t taken down by the company, though they may have been deleted by other users on whose posts he was commenting.
Still, the company acknowledges its moderation isn’t infallible. Lisa Hurley, a writer and digital activist who does work around anti-racism education, says something she posted on the platform about the need for rest, particularly during the pandemic, began getting a lot of attention earlier this year. While the post isn’t explicitly related to race, Hurley said it was particularly relevant to people of color.
“The post struck a nerve,” she says. “It was doing exceptionally well.”
Then, she found herself unable to interact with comments on the post, and soon she and her LinkedIn connections were unable to see the post at all. She and her contacts continually tagged LinkedIn in posts to ask what the matter was, she says. By the next morning, the post was restored. The experience still left her feeling frustrated, even after an apology from the company.
“It was exhausting. It was horrifying. It was disempowering,” she says.
Rockwell says LinkedIn posts are moderated by a mix of automated processes, which scan for “known patterns of bad content” like “commercial spam,” and humans who review cases where the algorithms are unsure or content is flagged by another user as inappropriate. In cases where content is taken down, users are notified and told of their chance to appeal to a different human moderator, he says. As The New York Times reported, LinkedIn has previously acknowledged content moderation errors by reinstating users’ posts about racism after they were deleted.
Algorithmic processes are monitored to make sure they’re not producing biased results, Staples says. “We set criteria for how we want the model to perform, and then we’re continually evaluating how it performs against that criteria,” she says.
The algorithms behind the scenes
Hurley also expressed concern that some of her other posts, which she expected to reach wider audiences, had been shadowbanned or otherwise restricted. LinkedIn spokespeople say the company notifies affected users when it finds content has violated its policies, and that the formulas determining how widely content circulates depend on what people find interesting.
“We have a series of complex algorithms that basically look at a variety of factors to help decide for every post and every comment what virality and distribution it gets,” Staples says. “And it’s largely based on, in a lot of cases, the engagement of other members and who’s in somebody’s network.”
The trouble is that the complexity of LinkedIn’s algorithms, apps, and other systems may make it hard for the company to truly convince users the platform is unbiased, according to the content moderation researcher West. While she hasn’t studied LinkedIn in particular, she says she’s found that a lack of clear explanations from tech companies can lead users to develop their own theories of what’s actually happening.
“What my research showed there is that you see users of many different kinds of backgrounds developing their own folk theories about how moderation works in the absence of clear explanations from the platforms themselves,” she says.
Even small mistakes in moderation or software malfunctions can be thoroughly demoralizing for social media users.
“You feel like all control has been wrested from your grasp,” says Hurley.