X has implemented measures to stop its AI chatbot Grok from generating images that undress women and children without their consent. The update comes after weeks of reports detailing non-consensual intimate images produced by Grok, prompting California Attorney General Rob Bonta to investigate whether X's AI broke US laws.
X Safety confirmed that technological measures have been put in place to prevent users from editing images of real people to depict them in revealing clothing, a restriction that applies to all users, including paid subscribers. The update also limits "image creation and the ability to edit images via the Grok account on the X platform" to paid subscribers only.
Furthermore, geoblocking has been implemented to prevent all users from generating images of real people in bikinis, underwear, and similar attire via the Grok account and within Grok itself in jurisdictions where such content is illegal.
X Safety also emphasized its commitment to making X a safe platform for everyone, with zero tolerance for any form of child sexual exploitation, non-consensual nudity, or unwanted sexual content, and said it will take immediate action to prevent further incidents.
The move follows weeks of criticism over Grok's outputs, which have been described as "child abuse material." X owner Elon Musk has defended Grok, claiming that none of the reported images fully undressed any minors. However, researchers have found harmful images in which users specifically requested that minors be depicted in erotic positions.
The California probe has heightened concerns over the risks of AI-generated content, with some experts warning that newly released AI tools could carry risks similar to Grok's. The US Department of Justice considers any visual depiction of sexually explicit conduct involving a person under 18 years old to be child pornography, also known as child sexual abuse material (CSAM).
The UK has also been probing X over possible violations of its Online Safety Act, with the company claiming that it has moved to comply with the law. However, tests conducted by The Verge have shown that such outputs can still be generated, raising concerns about the effectiveness of the platform's measures.
As the investigation continues, California Attorney General Rob Bonta has emphasized his zero tolerance for non-consensual intimate images or child sexual abuse material. He has urged X to take immediate action to prevent further incidents and has pushed the company to restrict Grok's outputs with a few simple updates.