Large AI training data set removed after study finds child abuse material

LAION has reportedly taken down a major AI training data set after a study found it contained images of child sexual abuse.

A widely used artificial intelligence data set that trained Stable Diffusion, Imagen and other AI image-generation models has been removed by its creator after a study found it contained thousands of instances of suspected child sexual abuse material.

LAION, short for Large-scale Artificial Intelligence Open Network, is a German nonprofit organization that makes open-source artificial intelligence models and data sets used to train several popular text-to-image models.

Screenshot of the data set. Source: LAION

A Dec. 20 report from researchers at the Stanford Internet Observatory’s Cyber Policy Center said they identified 3,226 instances of suspected CSAM — or child sexual abuse material — in the LAION-5B data set, “much of which was confirmed as CSAM by third parties,” according to Stanford Cyber Policy Center’s Big Data Architect and Chief Technologist David Thiel.

Thiel noted that while the presence of CSAM doesn’t necessarily mean it will “drastically” influence the output of models trained on the data set, it could still have some effect.

“While the amount of CSAM present does not necessarily indicate that the presence of CSAM drastically influences the output of the model above and beyond the model’s ability to combine the concepts of sexual activity and children, it likely does still exert influence," said Thiel.

“The presence of repeated identical instances of CSAM is also problematic, particularly due to its reinforcement of images of specific victims,” he added.

The LAION-5B data set was released in March 2022 and includes 5.85 billion image-text pairs, according to LAION.

In a statement, LAION said it has removed both the LAION-5B and LAION-400M data sets out of “an abundance of caution” and “to ensure they are safe before republishing them.”