New report tackles global failures in AI governance that Grok scandal exposed
The 2026 International AI Safety Report, released in early February by more than 100 experts from over 30 countries, delivers a sobering message: the speed of artificial intelligence development continues to outstrip the world’s ability to put effective safeguards in place. The report’s chair warned that reaching international agreement on AI governance is now in every nation’s rational self-interest, drawing comparisons to the global framework created to manage nuclear risks.
A controversy surrounding a feature on Elon Musk’s chatbot Grok in December offered a striking example of what can happen in the absence of such coordination, as detailed in an article by Geopolitical Monitor.
The chatbot, developed by xAI — one of the South African billionaire's ventures — began producing thousands of sexualized images per hour, including images involving minors. Users found they could upload photos of real individuals and instruct the AI to “undress” them. While governments issued statements and regulators opened investigations, the feature itself could not be immediately shut down.
According to the article, what followed was a textbook example of fragmented global reaction. Malaysia and Indonesia banned Grok outright. The United Kingdom stepped up enforcement of new legislation and launched an investigation through Ofcom. France broadened an existing probe. India demanded compliance reports, while Brazil called for a nationwide suspension. The European Commission ordered X to preserve internal documentation, and 57 members of the European Parliament pushed for bans on “nudification” tools under the AI Act. In the United States, senators instead urged Apple and Google to remove X from their app stores as a form of penalty.
xAI ultimately said it would comply “in jurisdictions where it is illegal.” The article argues this response laid bare an uncomfortable truth that nobody wants to say out loud: Grok would meet only the minimum legal requirements in each country, since no coordinated international standard compels a broader approach.
Each government acted according to its own legal framework, enforcement capacity, and timeline. Yet without a unified structure, the wave of outrage resulted in a patchwork of measures that had limited overall impact.
The article characterizes this as a failure of infrastructure rather than political will. Mechanisms for rapid, coordinated international action in response to AI-related harms simply do not exist, a gap the latest AI Safety Report documents in detail.
Intense competitive pressure encourages companies to release products quickly, sometimes at the expense of safety precautions, even when leadership might prefer a more cautious rollout. There is also no reliable system to verify claims about model capabilities, training processes, or safety safeguards, making trust and enforceable agreements difficult. Frontier AI labs lack standardized procedures for reporting serious incidents, meaning issues often remain contained internally until they escalate into public controversies.
Dangers expected to increase exponentially over time
Although the creation of non-consensual intimate imagery is deeply troubling, the article warns that more severe and tangible threats could emerge if oversight continues to lag. The AI Safety Report notes that current systems are already capable of helping non-experts design dangerous biological agents — with 23 percent of the highest-performing biological AI tools deemed to have high misuse potential — and are being adapted into semi-autonomous cyberattack tools. These are not theoretical risks but documented capabilities. The key question, therefore, is not whether more serious challenges will arise, but whether the necessary global infrastructure will be in place to respond when they do.
The Grok episode illustrates that the technology operated as designed; the deeper problem lies in the absence of clearly enforced boundaries on AI systems rather than in the underlying capabilities themselves.
While the incident paints a grim picture and serves as a wake-up call, initiatives aimed at improving coordination are underway, and there are examples of progress. In 2023, 16 major AI firms — including Amazon, Anthropic, Google, Meta, Microsoft, and OpenAI — agreed to voluntary safety commitments coordinated by the White House. These commitments included shared standards for security testing, information sharing, and watermarking AI-generated content. The Frontier Model Forum grew out of that initiative to promote industry-wide safety practices. More recently, the Coalition for Content Provenance and Authenticity has been developing technical standards to help verify and authenticate AI-generated media.
By Nazrin Sadigova