Why world leaders are letting AI firms self-regulate

03 December 2025 23:03

Former British Prime Minister Rishi Sunak, once a leading voice calling for strong guardrails on artificial intelligence, has sharply reversed course as governments worldwide pull back from early ambitions to tightly regulate the fast-moving technology.

In 2023, Sunak convened the world’s first “AI Safety Summit,” bringing together global policymakers and longtime AI doomer Elon Musk to discuss risks triggered by breakthroughs such as ChatGPT. But speaking last month at Bloomberg’s New Economy Forum, Sunak signaled a far more relaxed stance, Bloomberg writes.

“The right thing to do here is not to regulate,” he said, praising companies like OpenAI for “working really well” with London-based security researchers who test models for potential harms. 

He noted that firms were voluntarily submitting to audits, and when asked what would happen if that cooperation ended, Sunak responded, “So far we haven’t reached that point, which is positive.”

Sunak’s shift from arguing that Britain should be the “home of AI safety regulation” to advocating no legislation reflects a broader political recalibration. Governments are increasingly prioritizing the economic promise of AI and signaling that strict rules may be unnecessary without concrete evidence of widespread harm.

Yet critics warn that waiting for a clear disaster may be a risky bet as AI spreads at unprecedented speed. ChatGPT, now used by an estimated 10% of the global population, is widely seen as the fastest-growing consumer software in history. Concerns about its psychological and social effects persist. 

OpenAI faces lawsuits from families whose loved ones experienced delusional spirals or suicidal ideation after spending hours with the system. A campaign group has also collected more than 160 accounts from individuals who say the technology damaged their mental health. Meanwhile, AI continues to disrupt schoolwork, reinforce stereotypes, spark a novel kind of dependency and engage in artistic theft.

Despite these issues, many early advocates of caution have embraced the industry’s boom. Sunak has accepted advisory roles at Anthropic PBC and Microsoft Corp., pledging to donate his salary to charity but nonetheless gaining ties that could be valuable beyond politics. Musk, once vocal about AI’s existential dangers, has been quieter since launching his own AI company, xAI Corp., maker of the chatbot Grok.

This loosening of regulatory ambition extends across the United States, Europe and Asia. The US has shifted from President Joe Biden’s 2023 executive order on AI safety, later rescinded under Donald Trump, to a strategy focused on accelerating data-center construction, chip exports and efforts to block state-level AI legislation. Silicon Valley figures, including Marc Andreessen, have poured tens of millions of dollars into lobbying against future restrictions.

The UK, despite a history of swift tech rule-making, appears unlikely to impose heavy requirements on generative AI. The European Union has delayed major components of its AI Act until 2027, while its Code of Practice has been postponed.

China, often associated with tight digital controls, has adopted a similarly growth-oriented stance. Although chatbots and generative tools face rules on deepfake labeling and political content, mass-market consumer chatbots are only a slice of China’s AI market. 

The country’s biggest AI sectors are in areas such as industrial automation, logistics, e-commerce and AI infrastructure. The Chinese Communist Party, a major customer of domestic AI systems, is seen as reluctant to constrain an industry central to both economic strategy and national surveillance capabilities.

Scholars warn this approach offers “little protective value to the Chinese public” and could heighten risks ranging from AI-designed pathogens to disruptions of critical infrastructure.

Despite these concerns, the prevailing view in many capitals is that companies can police themselves.

“Look, I don’t think anyone wants to put something into the world which they think would genuinely cause significant harm,” Sunak said.

But as past technological cycles show, self-regulation can hold—until it doesn’t.

By Sabina Mammadli

Caliber.Az