
AI image generators were trained on explicit images of children, Stanford says

The Stanford Internet Observatory found over 3,200 explicit images in the open-source training data set LAION, which was used to train popular AI tools.

BY Max Ufberg

Welcome to AI Decoded, Fast Company’s weekly LinkedIn newsletter that breaks down the most important news in the world of AI. If a friend or colleague shared this newsletter with you, you can sign up to receive it every week here.

AI image generators are trained on explicit photos of children, Stanford Internet Observatory says

A new report reveals some disturbing news from the world of AI image generation: A Stanford-based watchdog group has discovered thousands of images of child sexual abuse in a popular open-source image data set used to train AI systems.

The Stanford Internet Observatory found more than 3,200 explicit images in the AI database LAION (specifically the LAION-5B repository, so named because it contains over 5 billion image-text pairs), which was used to train a version of the popular image-maker Stable Diffusion, among other tools. As the Associated Press reports, the Stanford study runs counter to the conventional belief that AI tools create images of child sexual abuse only by merging adult pornography with photos of children. In fact, the findings suggest that systems trained on the LAION data set can produce such illegal material more readily, because abusive imagery was present in the training data itself.

“We find that having possession of a LAION‐5B data set populated even in late 2023 implies the possession of thousands of illegal images,” write study authors David Thiel and Jeffrey Hancock, “not including all of the intimate imagery published and gathered non‐consensually, the legality of which is more variable by jurisdiction.”

In response to the Stanford study, LAION announced it was temporarily removing its data sets, and Stability AI—the maker of Stable Diffusion—said it has “taken proactive steps to mitigate the risk of misuse,” namely by enforcing stricter filters on its AI tool. However, an older version of Stable Diffusion, called 1.5, is still “the most popular model for generating explicit imagery,” according to the Stanford report.

The study's authors also urged anyone who built a tool on the LAION database to delete or scrub their work, and called for greater transparency around image-training data sets. “Models based on Stable Diffusion 1.5 that have not had safety measures applied to them should be deprecated and distribution ceased where feasible,” Thiel and Hancock write.

The FTC proposes banning Rite Aid from using facial-recognition tech in its stores

The Federal Trade Commission on Tuesday proposed banning Rite Aid from using facial-recognition software in its stores for five years as part of a settlement. 

The FTC alleged in a complaint that Rite Aid had used facial-recognition software in hundreds of its stores between 2012 and 2020 to identify customers suspected of shoplifting or other criminal activity. But the technology generated a number of “false positives,” the FTC said, leading to heightened surveillance, unwarranted bans from stores, verbal harassment from store employees, and baseless calls to the police. “Rite Aid’s failures caused and were likely to cause substantial injury to consumers, and especially to Black, Asian, Latino, and women consumers,” the complaint reads.

The complaint did not specify which facial-recognition vendors Rite Aid used in its stores. However, it did say that the pharmacy giant kept a database of “at least tens of thousands of individuals” that included security camera footage of persons of interest alongside IDs and “information related to criminal or ‘dishonest’ behavior in which individuals had allegedly engaged.” Rite Aid workers would receive phone alerts “indicating that individuals who had entered Rite Aid stores were matches for entries in Rite Aid’s watchlist database.”

In addition to the five-year ban on facial-recognition technology, the proposed settlement requires Rite Aid to delete any images already collected by its facial-recognition system and to direct any third parties to do the same. The FTC also called on Rite Aid to create safeguards to prevent any additional customer harm.


Rite Aid, for its part, said in a statement that it used the facial-recognition technology only in “a limited number of stores” and added that it “fundamentally disagree[s] with the facial recognition allegations in the agency’s complaint.” Nonetheless, the drugstore chain said it welcomed the proposed settlement. “We are pleased to reach an agreement with the FTC and put this matter behind us,” it said.

How RAND helped shape Biden’s executive order on AI

The RAND Corporation, the influential think tank, had a hand in creating President Joe Biden’s executive order on AI, Politico reported late last week. That revelation, which Politico learned of through a recording of an internal RAND meeting, further cements the link between the AI sector and the people tasked with regulating it.

RAND lobbied hard to include in the executive order a set of reporting requirements for powerful AI systems, a push that aligns with the agenda of Open Philanthropy, a group that gave RAND $15 million this year alone.

Open Philanthropy is steeped in the “effective altruism” ideology, which was popularized by FTX founder Sam Bankman-Fried and advocates a more metric-heavy approach to charity. The group is funded by Facebook cofounder and Asana CEO Dustin Moskovitz and his wife, Cari Tuna. Effective altruists have long been active in the AI world, but the Politico story shows how the movement is shaping policy via RAND.

Apparently, not everyone at RAND is pleased with the think tank’s ties to Open Philanthropy. At the internal meeting, an unidentified person said the Open Philanthropy connection “seems at odds” with the organization’s mission of “rigorous and objective analysis” and asked whether the “push for the effective altruism agenda, with testimony and policy memos under RAND’s brand, is appropriate.”

RAND CEO Jason Matheny countered that it would be “irresponsible . . . not to address” concerns around AI safety, “especially when policymakers are asking us for them.”


ABOUT THE AUTHOR

Max Ufberg is a senior staff editor on Fast Company's technology section.

