by Lucas Nolan, Breitbart:
A recent report has uncovered a disturbing finding in the development of artificial intelligence image generators: explicit photos of children were used in their training datasets.
The Associated Press reports that the Stanford Internet Observatory, in collaboration with the Canadian Centre for Child Protection and other anti-abuse charities, conducted a study that found more than 3,200 images of suspected child sexual abuse in the AI database LAION, an index of online images and captions that has been instrumental in training leading AI image generators such as Stable Diffusion.
This discovery has raised alarms across various sectors, including schools and law enforcement. The explicit material has enabled AI systems to produce realistic sexual imagery of fake children and to transform social media photos of real teenagers into deepfake nudes. Previously, researchers believed AI tools produced abusive imagery by combining adult pornography with benign photos of kids; the study shows that explicit images of children were instead included directly in the training data, a far more disturbing reality.
The issue is compounded by the competitive rush in the generative AI market, which has led to hasty releases of AI tools without sufficient safety measures. Although LAION responded by temporarily removing its datasets after the report, concerns remain about the lasting impact and widespread accessibility of these tools.
Stability AI, a notable user of LAION’s dataset, has implemented stricter controls in newer versions of its Stable Diffusion models. However, older versions without these safeguards continue to circulate and are used for generating explicit content. The study emphasizes the difficulty in rectifying this problem due to the open-source nature of many AI models and the ease of their distribution.