by Lucas Nolan, Breitbart:
A recent report has uncovered a concerning trend in the development of artificial intelligence image generators, revealing the use of explicit photos of children in their training datasets.
The Associated Press reports that the Stanford Internet Observatory, in collaboration with the Canadian Centre for Child Protection and other anti-abuse charities, conducted a study that found more than 3,200 images of suspected child sexual abuse in the AI database LAION, an index of online images and captions that has been instrumental in training leading AI image-makers such as Stable Diffusion.