US companies, Chinese experts engaged in secret diplomacy on AI safety

12 January 2024 02:39

The Financial Times has published an article reporting that OpenAI, Anthropic and Cohere held back-channel talks with Chinese state-backed groups in Geneva. Caliber.Az reprints the article.

Artificial intelligence companies OpenAI, Anthropic and Cohere have engaged in secret diplomacy with Chinese AI experts, amid shared concern about how the powerful technology may spread misinformation and threaten social cohesion. 

According to multiple people with direct knowledge, two meetings took place in Geneva in July and October last year attended by scientists and policy experts from the North American AI groups, alongside representatives of Tsinghua University and other Chinese state-backed institutions.

Attendees said the talks allowed both sides to discuss the risks from the emerging technology and encourage investments in AI safety research. They added that the ultimate goal was to find a scientific path forward to safely develop more sophisticated AI technology. 

“There is no way for us to set international standards around AI safety and alignment without agreement between this set of actors,” said one person present at the talks. “And if they agree, it makes it much easier to bring the others along.” 

The previously unreported talks are a rare sign of Sino-US co-operation amid a race for supremacy between the two major powers in cutting-edge technologies such as AI and quantum computing. Washington has blocked US exports of the high-performance chips made by the likes of Nvidia that are needed to develop sophisticated AI software.

But the topic of AI safety has become a point of common interest between developers of the technology across both countries, given the potential existential risks for humanity.

The Geneva meetings were arranged with the knowledge of the White House as well as that of UK and Chinese government officials, according to a negotiator present, who declined to be named.

“China supports efforts to discuss AI governance and develop needful frameworks, norms and standards based on broad consensus,” said the Chinese embassy in the UK.

“China stands ready to carry out communication, exchange and practical co-operation with various parties on global AI governance, and ensure that AI develops in a way that advances human civilisation.” 

The talks were convened by the Shaikh Group, a private mediation organisation that facilitates dialogue between key actors in regions of conflict, particularly in the Middle East.

“We saw an opportunity to bring together key US and Chinese actors working on AI. Our principal aim was to underscore the vulnerabilities, risks and opportunities attendant with the wide deployment of AI models that are shared across the globe,” said Salman Shaikh, the group’s chief executive.

“Recognising this fact can, in our view, become the bedrock for collaborative scientific work, ultimately leading to global standards around the safety of AI models.” 

Those involved in the talks said Chinese AI companies such as ByteDance, Tencent and Baidu did not participate; Google DeepMind was briefed on the details of the discussions but did not attend.

During the talks, AI experts from both sides debated areas for engagement in technical co-operation, as well as more concrete policy proposals that fed into discussions around the UN Security Council meeting on AI in July 2023, and the UK’s AI summit in November last year. 

The success of the meetings has led to plans for future discussions that will focus on scientific and technical proposals for how to align AI systems with the legal codes and the norms and values of each society, according to the negotiator present.

There have been growing calls for co-operation between leading powers to tackle the rise of AI.

In November, Chinese scientists working on artificial intelligence joined western academics to call for tighter controls on the technology, signing a statement that warned that advanced AI would pose an “existential risk to humanity” in the coming decades.

The group, which included Andrew Yao, one of China’s most prominent computer scientists, called for the creation of an international regulatory body, the mandatory registration and auditing of advanced AI systems, the inclusion of instant “shutdown” procedures, and for developers to spend 30 per cent of their research budget on AI safety.

OpenAI, Anthropic and Cohere declined to comment on their participation. Tsinghua University did not immediately respond to a request for comment.

This article has been amended to make clear in the subheading that Anthropic, not Inflection, was involved in the Geneva talks.

Caliber.Az