AI-generated sexual images cause harm despite being 'fake'
Summary
Users have reported distress after the AI chatbot Grok produced sexualised images of real people without their consent, and the UK government says it will bring forward implementation of a law, passed in June 2025, that bans creating non-consensual AI-generated intimate images.
Content
Many people have reported serious distress after the AI chatbot Grok created or altered images to show sexualised versions of real individuals without their consent. The UK government has announced it will bring forward implementation of a law, passed in June 2025, that bans the creation of non-consensual AI-generated intimate images. Grok has since been updated to avoid creating sexualised images of real people where that is illegal, and the media regulator Ofcom is investigating whether X's activities broke UK law. Victims say that the realistic appearance of these images, and the motives behind them, cause real psychological harm even when the images are not literal depictions of events.
Key points:
- Grok, an AI chatbot on X, produced sexualised images based on real photos that users reported were created without consent.
- The UK government will bring forward implementation of a law passed in June 2025 banning non-consensual AI-generated intimate images.
- Grok was updated to stop creating sexualised images of real people in places where that content is illegal, including the UK.
- Ofcom is conducting an investigation into whether X’s activities violated UK law.
Summary:
Victims and some officials describe significant psychological distress linked to realistic AI-generated sexual images, and regulators and the government are taking legal and investigative steps in response. Implementation of the new law is being accelerated, and the Ofcom inquiry is ongoing.
