Deepfake technologies target Azerbaijan: a global threat is already near
In recent days, Azerbaijani social media has been actively discussing the rapid spread of offensive photo and video materials created with deepfake technology from images of students at Baku State University. These fabricated images and videos bear no relation to reality.
An analysis of the situation indicates that such accounts typically appear during the night. Falsified materials are posted, and by morning the profiles are either deleted or become inaccessible. Several such pages have emerged in succession, significantly complicating efforts to identify and track them.
The primary targets of the perpetrators have been students whose social media accounts were publicly accessible, enabling the misuse of their photos and videos to create fake content. In a number of cases, the posts were accompanied by insulting captions and appeared to be aimed at psychological intimidation.

The press service of the Ministry of Internal Affairs of Azerbaijan confirmed that the situation is under the scrutiny of law enforcement authorities.
The incident involving the Baku State University students is alarming, but far from unique. Deepfake technology has long ceased to be an exotic novelty and has evolved into a fully-fledged tool of manipulation and discrediting. It enables the alteration of faces, facial expressions, and voices, creating the illusion that a person is saying or doing something that, in reality, never happened.
When a TikTok account featuring a fake yet highly convincing “Tom Cruise” appeared in 2021, its creator reassured the public, claiming that such content could not simply be replicated at the press of a button.

A few years have passed—and deepfake technology has entered the arsenal of high politics.
For instance, in the early weeks of Russia’s full-scale invasion of Ukraine, a video circulated widely online in which Ukraine’s president, Volodymyr Zelenskyy, allegedly called on the Ukrainian Armed Forces to lay down their arms. The forgery was crude and was exposed the same day. Soon after, a mirror-image fake appeared, with Vladimir Putin supposedly “acknowledging” the signing of peace with Ukraine. Since then, the technology has made a qualitative leap forward.
In July 2025, The Guardian reported an incident that seriously alarmed Washington. An unknown fraudster used deepfake technology to impersonate U.S. Secretary of State Marco Rubio and managed to establish contact with at least five senior officials. The aim was to gain access to classified government reports. Only a coincidence prevented a data breach. David Axelrod, a former adviser to Barack Obama, commented on the incident succinctly: It was “only a matter of time. This is the new world in which we live and we’d better figure out how to defend against it.” So far, however, that lesson has yet to be learned.
In December 2025, the respected British publication PoliticsHome reported that Members of the UK Parliament have no effective protection against the creation of deepfakes featuring them. The conclusion followed the case of Conservative MP George Freeman, who became the target of a deepfake campaign two months earlier.
In October 2025, fabricated videos spread widely online in which Freeman allegedly announced his defection to the Reform UK party. The MP himself put it bluntly: “I was the victim of an AI deepfake yet the law was unable to protect me.” In that statement lies a damning verdict on the entire existing system designed to counter digital falsifications.

By 2026, the digital environment has learned to generate fabricated videos of virtually any content and quality, yet the international community still has little to counter anonymous smear campaigns. The public is often simply unable to distinguish real footage from manipulated material — especially when a fake is produced not by a lone enthusiast, but by a state-backed machine.
The situation surrounding the Baku State University students clearly demonstrates that Azerbaijan has not remained on the sidelines of this global threat. If such technologies are already being deployed against politicians in other countries, it would be naïve to assume they will bypass our own. The very emergence of deepfake content within Azerbaijan’s information space is shaping a new reality — and ignoring it would be dangerous.
If global experience teaches anything, however, it is that awareness remains the only truly effective weapon against deepfakes. No algorithm can reliably filter out every forgery. No law can keep pace with the speed of technological development. What remains is society’s ability to ask the right questions: Who benefits? Who is disseminating this content? Why now? Azerbaijani society has demonstrated this capacity more than once.