
Major artificial-intelligence companies including OpenAI, Meta Platforms and Google agreed on Tuesday to incorporate new safety measures to protect children from exploitation and plug several holes in their current defenses.
A host of new generative-AI-powered tools has supercharged predators’ ability to create sexualized images of children and other exploitative material. The goal of the new alliance is to throttle the creation of such content before these tools proliferate and hurt more children, according to Thorn, the child-safety group that helped organize the initiative along with the nonprofit All Tech Is Human.
Thorn and several AI companies agreed to implement principles to minimize the risks of their tools. One such principle calls for AI labs to avoid data sets that might contain child sexual content and to scrub any such material from their own training data.