Foreign Affairs: Illusion of China’s AI prowess

    WORLD  06 June 2023 - 00:02

    Foreign Affairs magazine has published an article arguing that regulating artificial intelligence (AI) will not set America back in the technology race. Caliber.Az reprints the article.

    The artificial intelligence revolution has reached Congress. The staggering potential of powerful AI systems, such as OpenAI’s text-based ChatGPT, has alarmed legislators, who worry about how advances in this fast-moving technology might remake economic and social life. Recent months have seen a flurry of hearings and behind-the-scenes negotiations on Capitol Hill as lawmakers and regulators try to determine how best to impose limits on the technology. But some fear that any regulation of the AI industry will incur a geopolitical cost.

    In a May hearing at the US Senate, Sam Altman, the CEO of OpenAI, warned that “a peril” of AI regulation is that “you slow down American industry in such a way that China or somebody else makes faster progress.” That same month, AI entrepreneur Alexandr Wang insisted that “the United States is in a relatively precarious position, and we have to make sure we move fastest on the technology.” Indeed, the notion that Washington’s propensity for red tape could hurt it in the competition with Beijing has long occupied figures in government and in the private sector. Former Google CEO Eric Schmidt claimed in 2021 that “China is not busy stopping things because of regulation.” According to this thinking, if the United States places guardrails around AI, it could end up surrendering international AI leadership to China.

    In the abstract, these concerns make sense. It would not serve US interests if a regulatory crackdown crippled the domestic AI industry while Chinese AI companies, unshackled, could flourish. But a closer look at the development of AI in China—especially that of large language models (LLMs), the text generation systems that underlie applications such as ChatGPT—shows that such fears are overblown.

    Chinese LLMs lag behind their US counterparts and still depend in large part on American research and technology. Moreover, Chinese AI developers already face a far more stringent and limiting political, regulatory, and economic environment than do their US counterparts. Even if it were true that new regulations would slow innovation in the United States—and it very well may not be—China does not appear poised to surge ahead.

    US companies are building and deploying AI tools at an unprecedented pace, so much so that even they are actively seeking guidance from Washington. This means that policymakers considering how to regulate the technology are in a position of strength, not one of weakness. Left untended, the harms from today’s AI systems will continue to multiply while the new dangers produced by future systems will go unchecked. An inflated impression of Chinese prowess should not prevent the United States from taking meaningful and necessary action now.

    The sincerest form of flattery

    Over the past three years, Chinese labs have rapidly followed in the footsteps of US and British companies, building AI systems similar to OpenAI’s GPT-3 (the forerunner to ChatGPT), Google’s PaLM, and DeepMind’s Chinchilla. But in many cases, the hype surrounding Chinese models has masked a lack of real substance.

    Chinese AI researchers we have spoken with believe that Chinese LLMs are at least two or three years behind their state-of-the-art counterparts in the United States—perhaps even more. Worse, AI advances in China rely a great deal on reproducing and tweaking research published abroad, a dependence that could make it hard for Chinese companies to assume a leading role in the field. If the pace of innovation slackened elsewhere, China’s efforts to build LLMs—like a slower cyclist coasting in the leaders’ slipstream—would likely decelerate.

    Take, for instance, the Beijing Academy of Artificial Intelligence’s WuDao 2.0 model. After its release in the summer of 2021, Forbes thrilled at the model as an example of “bigger, stronger, faster AI,” largely because WuDao 2.0 boasted ten times more parameters—the numbers inside an AI model that determine how it processes information—than GPT-3. But this assessment was misleading in several ways.

    Merely having more parameters does not make one AI system better than another, especially if not matched by corresponding increases in data and computing power. In this case, comparing parameter counts was especially unwarranted given that WuDao 2.0 worked by combining predictions from a series of models rather than as a single language model, a design that artificially inflated the parameter count. What’s more, the way researchers posed questions to the model helped its performance in certain trials appear stronger than it actually was.
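The arithmetic behind this point can be made concrete. The sketch below uses the figures from the article (GPT-3’s roughly 175 billion parameters and a headline count ten times larger for WuDao 2.0); the split into 100 equal-sized component models and the two-experts-per-input routing are purely hypothetical, chosen only to illustrate how summing parameters across a mixture of models inflates the headline number relative to what any single input actually exercises.

```python
# Illustrative sketch with hypothetical sizes: why a headline parameter count
# can mislead when a system combines several models rather than running one.

def dense_model_params(n: int) -> int:
    """A single dense model with n parameters applies all n to every input."""
    return n

def mixture_total_params(expert_sizes: list[int]) -> int:
    """Headline count for a mixture: the sum over all component models."""
    return sum(expert_sizes)

def mixture_active_params(expert_sizes: list[int], experts_used: int) -> int:
    """Parameters actually applied per input when only a few components run."""
    return sum(sorted(expert_sizes, reverse=True)[:experts_used])

# GPT-3-scale dense model: ~175 billion parameters, all active on each input.
dense = dense_model_params(175_000_000_000)

# Hypothetical mixture: 100 component models of 17.5 billion parameters each.
experts = [17_500_000_000] * 100
total = mixture_total_params(experts)       # headline: 1.75 trillion
active = mixture_active_params(experts, 2)  # per input: 35 billion

print(f"headline count:   {total:,}")   # ten times the dense model's
print(f"active per input: {active:,}")  # a fraction of the dense model's
```

Under these assumed numbers, the mixture’s headline count is ten times the dense model’s, yet each input passes through far fewer parameters than the dense model applies—which is why comparing raw parameter counts across differently structured systems says little on its own.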

    Baidu’s “Ernie Bot” also disappointed. Touted as China’s answer to ChatGPT, Ernie Bot was clearly developed—like WuDao 2.0—under pressure to keep up with a high-profile breakthrough in the United States. The Chinese bot failed to live up to those aspirations. Baidu’s launch event included only prerecorded examples of its operation, a telltale sign that the chatbot was unlikely to perform well in live interactions. Reviews from users who have since gained access to Ernie Bot have been mediocre at best, with the chatbot stumbling on simple tasks such as basic math or translation questions.

    Chinese AI developers struggle with the pressure to keep up with their US counterparts. In August 2021, more than 100 researchers at Stanford collaborated on a major paper about the future of so-called foundation models, a category of AI systems that includes LLMs. Seven months later, the Beijing Academy of AI released a similarly lengthy literature review on a related subject, with almost as many co-authors. But within a few weeks, a researcher at Google discovered that large sections of the Chinese paper had been plagiarized from a handful of international papers—perhaps, Chinese-language media speculated, because the graduate students involved in drafting the paper faced extreme pressure and were up against very short deadlines.

    Americans should not be haunted by the spectre of an imminent Chinese surge in LLM development. Chinese AI teams are fighting—and often failing—to keep up with the blistering speed of new research and products emerging elsewhere. When it comes to LLMs, China trails years, not months, behind its international competitors.

    Headwinds and handicaps

    Forces external to the AI industry also impede the pace of innovation in China. Because of the outsize computational demands of LLMs, the international competition over semiconductors inevitably affects AI research and development. The Chinese semiconductor industry can only produce chips several generations behind the cutting edge, forcing many Chinese labs to rely on high-end chips developed by US firms. In recent research analyzing Chinese LLMs, we found 17 models that used chips produced by the California-based firm NVIDIA; by contrast, we identified only three models built with Chinese-made chips.

    Huawei’s PanGu-α, released in 2021, was one of the three exceptions. Trained using Huawei’s in-house Ascend processors, the model appears to have been developed with significantly less computational power than best practices would recommend. Although it is currently perfectly legal for Chinese research groups to access cutting-edge US chips by renting hardware from cloud providers such as Amazon or Microsoft, Beijing must be worried that the intensifying rhetoric and restrictions around semiconductors will hobble its AI companies and researchers.

    More broadly, pessimism about the overall economic and technological outlook in China may hamper domestic AI efforts. In response to a wave of regulatory scrutiny and a significant economic slowdown in the country, many Chinese startups are now opting to base their operations overseas and sell to international markets rather than primarily to the Chinese one. This shift has been driven by the increasing desire among Chinese entrepreneurs to gain easier access to foreign investment and to escape China’s stringent regulatory environment—while also skirting restrictions imposed on Chinese companies by the United States.

    HAL, meet Big Brother

    China’s thicket of restrictions on speech also poses a unique challenge to the development and deployment of LLMs. The freewheeling way in which LLMs operate—following the user’s lead to produce text on any topic, in any style—is a poor fit for China’s strict censorship rules. In a private conversation with one of us, one Chinese CEO quipped that China’s LLMs are not even allowed to count to 10, as that would include the numbers eight and nine—a reference to the state’s sensitivity about the number 89 and any discussion of the 1989 Tiananmen Square protests.

    Because the inner workings of LLMs are poorly understood—even by their creators—existing methods for putting boundaries around what they can and cannot say function more like sledgehammers than scalpels. This means that companies face a stark tradeoff between how useful the AI’s responses are and how well they avoid undesirable topics. LLM providers everywhere are still figuring out how to navigate this tradeoff, but the potentially severe ramifications of a misstep in China force companies there to choose a more conservative approach. Popular products such as the Microsoft spinout XiaoIce are prohibited from discussing politically sensitive topics such as the Tiananmen Square protests or Chinese leader Xi Jinping.

    Some users we spoke to even claim that XiaoIce has gotten less functional over time, perhaps as Microsoft has added additional guardrails. Journalists have likewise found that Baidu’s Ernie Bot gives canned answers to questions about Xi and refuses to respond on other politically charged topics. Given the wide range of censored opinions and subjects in China—from the health of the Chinese economy to the progress of the war in Ukraine to the definition of “democracy”—developers will struggle to make chatbots that do not cross redlines while still being able to answer most questions normally and effectively.

    In addition to these political constraints on speech, Chinese AI companies are also subject to the country’s unusually detailed and demanding regulatory regime for AI. One set of rules came into force in January 2023 and applies to providers of online services that use generative AI, including LLMs. A draft of further requirements, which would apply to research and development practices in addition to AI products, was released for comment in April.

    Some of the rules are straightforward, such as requiring that sensitive data must be handled according to China’s broader data governance regime. Other provisions may prove quite onerous. The January regulations, for instance, oblige providers to “dispel rumours” spread using content generated by their products, meaning that companies are on the hook if their AI tools produce information or opinions that go against the Chinese Communist Party line.

    The April draft would go further still, forcing LLM developers to verify the truth and accuracy not just of what the AI programs produce but also of the material used to train the programs in the first place. This requirement could be a serious headache in a field that relies on massive stores of data scraped from the Web. When carefully designed, regulation need not obstruct innovation. But so far, the CCP’s approach to regulating LLMs and other generative AI technology appears so heavy-handed that it could prove a real impediment to Chinese firms and researchers.

    Fear of the chimera

    Despite the difficulties it currently faces, Chinese AI development may yet turn a corner and establish a greater track record of success and innovation. Americans, however, have a history of overestimating the technological prowess of their competitors. During the Cold War, bloated estimates of Soviet capabilities led US officials to make policy on the basis of a hypothesized “bomber gap” and then “missile gap,” both of which were later proved to be fictional.

    A similarly groundless sense of anxiety should not determine the course of AI regulation in the United States. After all, where social media companies resisted regulation, AI firms have already asked for it. Five years ago, Facebook founder Mark Zuckerberg warned Congress that breaking up his social media company would only strengthen Chinese counterparts. In AI, by contrast, industry leaders are proactively calling for regulation.

    If anything, regulation is the area where the United States most risks falling behind in AI. China’s recent regulations on generative AI build on top of existing rules and a detailed data governance regime. The European Union, for its part, is well on its way to passing new rules about AI, in the form of the AI Act, which would categorize levels of risk and impose additional requirements for LLMs. The United States has not yet matched such regulatory efforts, but even here, US policymakers are in better shape than often assumed.

    The federal government has already drafted thorough frameworks for managing AI risks and harms, including the White House’s Blueprint for an AI Bill of Rights and the National Institute of Standards and Technology’s AI Risk Management Framework. These documents provide in-depth guidance on how to navigate the multifaceted risks and harms—as well as benefits—of this general-purpose technology. What is needed now is legislation that allows the enforcement of the key tenets of these frameworks, in order to protect the rights of citizens and place guardrails around the rapid advance of AI research.

    There are still plenty of issues to work through, including where new regulatory authorities should be housed, what role third-party auditors can play, what transparency requirements should look like, and how to apportion liability when things go wrong. These are thorny, urgent questions that will shape the future of the technology, and they deserve to receive serious effort and policy attention. If the chimera of Chinese AI mastery dissuades policymakers from pursuing regulation of the industry, they will only be hurting US interests and imperilling the country’s prosperity.

