A new report warns that the proliferation of child sexual abuse images on the internet could become much worse if something is not done to put controls on artificial intelligence tools that generate deepfake photos.
It knows what naked people look like, and it knows what children look like. It doesn’t need naked children to fill in those gaps.
Also, these models are trained on images scraped from the clearnet. Somebody would have had to manually add CSAM to the training data, which would be easily traced back to them if they did. The likelihood of actual CSAM being included in any mainstream AI’s training material is slim to none.
Defending AI-generated child porn is a weird take, and the support you’re receiving is even more concerning.
I’m not defending it, dipshit. I’m explaining how generative AI training works.
The fact that you can’t see that is what’s really concerning.