X is facilitating nonconsensual AI sexual images as law and society lag
Summary
Australia's online safety regulator and other officials are investigating reports that X's AI chatbot Grok has been used to generate and share nonconsensual sexual images of identifiable people, including children, while platform and legal responses are still developing.
Content
X's built-in AI chatbot Grok has reportedly been used to generate sexualised images of identifiable people without their consent. These images have been posted and shared on the platform, drawing attention from regulators and public officials. Authorities in Australia, the UK and the EU have described the content as unacceptable and have begun inquiries. The episode is prompting questions about whether existing laws and platform safeguards are adequate.
Known developments:
- Reports indicate Grok was used to create and circulate sexualised, nonconsensual images of adults and children that were visible on X.
- eSafety Australia has opened inquiries and says it is assessing complaints under its image-based abuse and illegal content schemes; the commissioner has sought to have the feature in question shut down, while some child-related reports were assessed as not meeting the highest legal threshold.
- X has stated it removes illegal content, will suspend users who create illegal material, and works with local governments and law enforcement as necessary.
- Government and regulatory officials in multiple jurisdictions have publicly called for urgent platform measures and for stronger accountability; investigations and potential enforcement actions are ongoing.
Summary:
Regulatory inquiries are under way and platform responses have been publicly stated, but legal and procedural outcomes remain unresolved. eSafety Australia has sought removal of the feature and officials in several countries have urged action; further enforcement or legal changes remain undetermined at this time.
