An experiment that should wake everyone up: Deepfakes vs. critical thinking
Just recently, Caliber.Az took an in-depth look at what deepfake technologies are and how they work. We presented a whole series of examples from around the world, showing just how far artificial intelligence has advanced in falsifying images and voices. But there is a huge gap between knowing about the danger and being able to counter it. And this gap was laid bare by an experiment conducted by Baku TV.
Here’s what happened. On Baku TV, Natella Osmanli, editor-in-chief of BakuTV/Ru, interviewed Polina Kovalevskaya, the producer of the popular app Reface, which can generate eerily realistic visual content from just a few publicly available photos. To demonstrate the technology’s capabilities, a well-known Azerbaijani journalist and Oxu.Az contributor, Kubra Maharramova, volunteered as the “subject” of the experiment. Using only a handful of ordinary social media photos, AI tools created content within minutes that looked so realistic it was nearly impossible to tell apart from genuine footage. Perfectly synchronised facial features, expressions, and movements gave viewers an uncanny sense of authenticity.

When the provocative photo and video materials, allegedly featuring Kubra Maharramova, began circulating in the media and on social networks, the impact was explosive. The journalist’s colleagues were inundated with calls and messages. Readers and viewers were stunned, expressing open disbelief. The Oxu.Az article quickly became one of the most-read pieces in recent memory, amassing nearly fifty thousand views in less than a day, alongside hundreds of social media comments. Yes, a significant number of people defended Kubra, but the many negative reactions highlighted the main point: the vast majority of users didn’t hesitate for a second to accept what they saw as genuine.
This is the key—and truly alarming—takeaway from the experiment. People who spend hours on social media every day, who consider themselves experienced and discerning consumers of information, were completely defenceless against content created in just a few minutes using a publicly available mobile app. Hardly anyone even considered the possibility that what they were seeing was a high-tech forgery. The era of digital trust we all live in played a cruel trick on the public: the habit of believing what appears on a smartphone screen proved stronger than critical thinking.
During the interview, Polina Kovalevskaya herself confirmed the scale of the problem: modern AI tools can generate content that is virtually indistinguishable from reality using just a few publicly available photos or short videos. The Reface app, launched in 2018, is built on generative adversarial networks (GANs), in which one neural network creates the fake content while a second evaluates its realism, and through this “competition” the quality of the forgery steadily improves. And there are dozens of similar tools on the market—available to anyone.
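The adversarial mechanism Kovalevskaya describes can be made concrete with a deliberately tiny sketch. Reface’s actual models are proprietary and operate on faces, so nothing below reflects its internals; this is only a minimal, assumed illustration of a GAN training loop on one-dimensional data, where a linear “generator” learns to mimic samples from a real distribution (here, a Gaussian with mean 4.0) and a logistic-regression “discriminator” tries to tell real samples from fakes:

```python
import numpy as np

rng = np.random.default_rng(0)

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# "Real" data the generator must learn to mimic: samples from N(4, 1).
def sample_real(n):
    return rng.normal(4.0, 1.0, n)

# Generator G(z) = a*z + b: initially produces N(0, 1), far from the real data.
a, b = 1.0, 0.0
# Discriminator D(x) = sigmoid(w*x + c): initially undecided (outputs 0.5).
w, c = 0.0, 0.0

lr, n = 0.05, 64
for step in range(2000):
    # --- Discriminator step: push D(real) toward 1 and D(fake) toward 0 ---
    x_real = sample_real(n)
    x_fake = a * rng.normal(0.0, 1.0, n) + b
    d_real = sigmoid(w * x_real + c)
    d_fake = sigmoid(w * x_fake + c)
    # Hand-derived binary cross-entropy gradients for logistic regression
    grad_w = np.mean((d_real - 1.0) * x_real) + np.mean(d_fake * x_fake)
    grad_c = np.mean(d_real - 1.0) + np.mean(d_fake)
    w -= lr * grad_w
    c -= lr * grad_c

    # --- Generator step: push D(fake) toward 1 (non-saturating loss) ---
    z = rng.normal(0.0, 1.0, n)
    d_fake = sigmoid(w * (a * z + b) + c)
    a += lr * np.mean((1.0 - d_fake) * w * z)
    b += lr * np.mean((1.0 - d_fake) * w)

# After the "competition", fakes should sit near the real mean of 4.0.
fake_mean = float(np.mean(a * rng.normal(0.0, 1.0, 10000) + b))
```

The two networks never see each other’s parameters; each only reacts to the other’s output, which is what drives the generator’s forgeries toward realism. The non-saturating generator loss used here is the standard practical choice, since it keeps gradients alive when the discriminator confidently rejects early fakes.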

The experiment with Kubra Maharramova occurred against the backdrop of another high-profile incident that had shaken Azerbaijani society just days earlier. A malicious actor began blackmailing students of the Law faculty at Baku State University using fake photos and videos created with the same deepfake technologies. The scale of the situation was such that the Main Department for Combating Cybercrime of the Ministry of Internal Affairs of Azerbaijan immediately got involved. The distributor of the falsified material was quickly identified and detained. Yet the very fact that a single individual, using publicly available technology, could inflict a massive psychological blow on dozens of young people and their families raises questions for society that have no easy answers.
The parallel of these two events—the controlled Baku TV experiment and a real-life crime against students—paints a striking picture, one that should finally compel Azerbaijani society to recognise the true magnitude of the problem.
But the problem goes far beyond isolated cases of blackmail, touching on an issue of strategic importance for Azerbaijan. Deepfake technologies have long been deliberately used in information warfare against the country—a topic Caliber.Az has covered repeatedly and in detail. Fabricated videos, fake “eyewitness accounts,” and forged “documents”—this entire arsenal is actively employed by those conducting a systematic campaign to discredit Azerbaijan. Pro-Armenian lobbying groups, Western neoliberal circles connected to organisations such as ANCA and the Aurora Foundation, and neo-imperial forces in the North—all are well aware of the potential of deepfakes as a tool of hybrid warfare. And as these technologies become more advanced, distinguishing fakes from reality grows increasingly difficult.
The December warning from the Azerbaijani Media Development Agency, which issued a statement about the threat of disinformation spreading via deepfakes on social networks, now sounds almost prophetic. The problem turned out to be even closer and more acute than experts had anticipated. Global statistics are unforgiving: according to specialised studies, around five hundred separate deepfake incidents were recorded in the second quarter of 2025, and by the third quarter, this number had more than quadrupled. The total global financial damage from deepfake-related fraud in 2025 reached approximately $1.1 billion—three times higher than the previous year. And this accounts only for monetary losses, not for ruined careers, shattered trust, or broken lives.
The Baku TV experiment involving Kubra Maharramova pursued an important social goal—and in that respect, it succeeded brilliantly. Fifty thousand views, hundreds of comments, and lively discussions all testify to the fact that the message reached the audience. The question now is what conclusions society will draw from this lesson.

The first and most obvious takeaway is the urgent need to fundamentally rethink our attitude toward any visual content circulating on social media. Trust, built over years, can be destroyed in seconds by artificial intelligence algorithms. A photograph or video is no longer proof of anything. The era in which “seeing meant believing” is definitively over. Each of us must develop a new habit—a habit of information hygiene, of healthy scepticism, of verifying sources and cross-checking facts before forming judgments, and especially before sharing unverified information.
The second issue concerns the legislative and institutional response. The swift arrest of the Baku State University student blackmailer by the Main Department for Combating Cybercrime demonstrated that the country’s law enforcement agencies can respond effectively to such threats. But the pace of technological development is such that a reactive approach is no longer sufficient. Preventive mechanisms are needed—educational programs and the systematic improvement of digital literacy among the population. Global experience shows that countries that recognise the scale of the problem early—ranging from the United States’ Take It Down Act to European AI regulatory initiatives—gain a significant advantage in protecting both their citizens and their information space.
The third—and perhaps most significant for Azerbaijan—is understanding that deepfake technologies are a tool in the information warfare directed against the country. Falsified content can be manufactured to incite social divisions. If today a single smartphone app can produce material that makes thousands of people believe compromising content about a well-known journalist, it is not hard to imagine what could happen when similar technologies—far more powerful and sophisticated—are deployed by intelligence agencies or professional hybrid warfare units.
The world has entered an era where truth and falsehood are nearly indistinguishable at a visual level. This is the reality of our time, as unmistakably demonstrated by the experiment with Kubra Maharramova. And in these conditions, the only shield is the critical thinking of every citizen: not believing blindly, verifying, cross-checking, asking questions. Azerbaijan, having faced large-scale information attacks over the past years, understands the cost of disinformation better than many countries. All the more important, then, that this experience be converted into a collective societal immunity to manipulation—from everyday blackmail to geopolitical provocations.