The uptick in anti-trans social media content means platforms need better ways to combat it

A decision by YouTube this week to demonetize videos from conservative pundit Candace Owens underscores efforts to address intentional misgendering.

By Sarah Bregel

This week, YouTube demonetized several videos posted by conservative pundit Candace Owens. The videos in question include derogatory content, such as misgendering and other disparaging remarks aimed at trans individuals and at the trans community as a whole.

Owens intentionally misgenders individuals in her content. She also shows graphic photos of gender reassignment surgeries, rails against rights for trans people, and attacks both trans children and the parents who support their children's identities.

The Google-owned social media site said that the videos violated its monetization policies. While the platform doesn’t explicitly list policies about misgendering or deadnaming, such behavior is broadly viewed as a form of discrimination against trans or nonbinary individuals. Still, it remains a rampant way to disparage trans and nonbinary people, even though it can be deeply harmful, as it essentially denies their identity.

Owens spoke about YouTube’s decision on her podcast, revealing that she was given the option to remove the videos before the company demonetized them. Leaning further into her anti-trans rhetoric, she said the videos were “pertaining to gender in which I have accurately gendered someone.”

Google spokesperson Michael Aciman said the company blocked ads on the videos because they violated guidelines against hateful and derogatory content. The guidelines state, “Content that incites hatred against, promotes discrimination, disparages, or humiliates an individual or group of people is not suitable for advertising.” He also said the company’s policies could lead to action against any content that “may include deliberate deadnaming or misgendering of transgender individuals.”

YouTube had previously said that it did not consider misgendering hate speech or a violation of its guidelines. Its move this week comes as debates about trans rights rage on, as do debates about what constitutes hate speech more broadly.

LGBTQ advocates say that social media companies, whose policies and enforcement measures are not always consistent on such issues, have to decide how to protect already at-risk communities on their sites. Earlier this year, Twitter reversed its policy against misgendering under new owner Elon Musk, a move that was widely criticized.

Sarah Kate Ellis, president and CEO of GLAAD, spoke out about Twitter’s move in a powerful statement in April. “The practice of targeted misgendering and deadnaming has been identified by the ADL and other civil society groups as a form of hate speech,” she said. “Social media companies committed to maintaining safe environments for LGBTQ people should be working to improve hate speech policies, not deleting long-standing ones.”

In 2022, TikTok banned misgendering on its platform in an effort to make it a better environment for the trans community. The move came after a report from GLAAD said that top social sites weren’t safe places for LGBTQIA+ users. 


ABOUT THE AUTHOR

Sarah Bregel is a writer, editor, and single mom living in Baltimore, Maryland. She's contributed to NYMag, The Washington Post, Vice, In Style, Slate, Parents, and others.
