OpenAI: Chinese operatives tried using ChatGPT for mass surveillance
Suspected Chinese government operatives reportedly used ChatGPT to draft proposals for large-scale surveillance tools and to promote software that scans social media accounts for “extremist speech,” according to a report published by OpenAI on October 7.
The report highlights how artificial intelligence, a highly sought-after technology, can be exploited to make state repression more efficient, CNN reported.
OpenAI described the findings as providing “a rare snapshot into the broader world of authoritarian abuses of AI.”
The United States and China are engaged in an open contest for AI supremacy, each investing billions in developing new technologies. The report, however, shows that suspected state actors are often using AI for relatively routine tasks, such as data analysis or polishing language, rather than creating groundbreaking technologies.
“There’s a push within the People’s Republic of China to get better at using artificial intelligence for large-scale things like surveillance and monitoring,” said Ben Nimmo, principal investigator at OpenAI. “It’s not last year that the Chinese Communist Party started surveilling its own population. But now they’ve heard of AI and they’re thinking, oh maybe we can use this to get a little bit better.”
One case detailed in the report involved a ChatGPT user “likely connected to a [Chinese] government entity” who asked the AI to help draft a proposal for a tool that analyzes the travel movements and police records of Uyghurs and other “high-risk” individuals.
The US State Department previously accused the Chinese government of genocide and crimes against humanity against Uyghur Muslims, charges Beijing denies.
In another instance, a Chinese-speaking user requested ChatGPT’s assistance in designing “promotional materials” for software that purportedly scans X, Facebook, and other social media platforms for political and religious content. OpenAI said both users were banned.
AI is a major area of competition between the US and China. In January, Chinese firm DeepSeek raised concern among US officials and investors by unveiling R1, a ChatGPT-like model with comparable capabilities at a fraction of the cost. That same month, President Donald Trump promoted a plan by private companies to invest up to $500 billion in AI infrastructure.
When asked about OpenAI’s findings, Liu Pengyu, a spokesperson for the Chinese Embassy in Washington, said: “We oppose groundless attacks and slanders against China.”
China is “rapidly building an AI governance system with distinct national characteristics,” Liu added. “This approach emphasizes a balance between development and security, featuring innovation, security and inclusiveness. The government has introduced major policy plans and ethical guidelines, as well as laws and regulations on algorithmic services, generative AI, and data security.”
The OpenAI report also documents how state-backed hackers and criminal actors routinely use AI in their operations. Suspected Russian, North Korean, and Chinese hackers have reportedly used ChatGPT to refine code or make phishing links appear more convincing.
“Adversaries are using AI to refine existing tradecraft, not to invent new kinds of cyberattacks,” said Michael Flossman, another OpenAI security expert.
Meanwhile, scammers thought to be operating from Myanmar have employed OpenAI’s tools for tasks ranging from managing finances to researching criminal penalties for online scams. Despite this, OpenAI notes that more people are using ChatGPT to detect scams than to perpetrate them, estimating that the AI is “being used to identify scams up to three times more often than it is being used for scams.”
CNN also asked OpenAI whether US military or intelligence agencies had used ChatGPT for hacking operations. The company did not answer directly but referred to its policy of using AI to support democracy.
US Cyber Command, which handles the military’s offensive and defensive cyber operations, has confirmed it will use AI to advance its mission. An “AI roadmap” approved by the command pledges to “accelerate adoption and scale capabilities” in artificial intelligence.
Former officials told CNN that Cyber Command is exploring how AI might support offensive operations, including exploiting software vulnerabilities in equipment used by foreign targets.