Social media's social good is declining amid AI image misuse
Summary
A Globe editorial reports that users exploited X’s AI tool Grok to generate sexualized deepfakes, prompting investigations in Canada and abroad, as well as calls to broaden laws to hold platforms accountable.
Content
An editorial in the Globe warns that AI image features on X have been used to produce sexualized deepfake images of women, a problem the paper says was foreseeable. The editorial says X’s AI tool Grok allowed users to alter and share photos, and that the company behind the tool is responsible for its outputs. Regulators in several countries are examining X, and some governments have moved to restrict or ban the tool. The editorial argues that Canadian law should be changed to address platform responsibility for AI-generated intimate images.
What is known:
- The Globe reports that users prompted Grok to produce sexualized images from real photos, and that those images were widely shared on X.
- Canada’s privacy commissioner has expanded an investigation into X; the European Union and the United Kingdom are also investigating, and Indonesia and Malaysia have taken steps to restrict the tool.
- X says it removes certain high-priority content and announced code changes intended to block users from generating lewd images of real people where doing so is illegal.
- The editorial recommends amending Bill C-16 to cover AI-generated intimate images and argues platforms should be targeted; it notes a prior online harms bill, Bill C-63, was not enacted.
Summary:
The editorial reports that AI-driven image generation on X produced sexualized deepfakes, and that regulators and privacy officials have opened probes while some countries have acted to restrict the tool. Ottawa has not banned X, and the privacy commissioner is investigating; the editorial urges legal changes in Canada to extend protections and assign responsibility to platforms. Whether specific legislative steps will move forward is undetermined at this time.
