How cognitive warfare hides behind legal blind spots
The rise of sophisticated tactics and technologies to manipulate thought and emotion represents one of the most dangerous threats to human autonomy today.
Cognitive warfare seeks to gain strategic advantage by influencing attitudes and behaviour at the level of individuals, groups, or entire populations. It aims to distort reality itself, making the human mind a new battlefield.
Because it avoids the kind of physical harm that existing laws of war are built around, cognitive warfare operates in a legal void.
The Conversation illustrated this method of warfare with the following example in a recent article: Imagine waking up to reports of a deadly new flu spreading through your city. Officials urge calm, but social media floods with conflicting claims from “experts” about its lethality and origins.
Emergency rooms are overwhelmed, regular patients are turned away, and fatalities mount. Later, it’s revealed that a foreign actor seeded panic by releasing disinformation suggesting the flu was far deadlier than it really was. Despite the toll, no legal framework defines this as warfare. This is cognitive warfare, or “cog war”, in which perception is manipulated as a strategic act below traditional thresholds of war.
Cog war functions by influencing perception and behaviour, aiming to shape reality itself. It plays out not in physical terrain but in the mental spaces where beliefs and emotions are formed. Its effects can still provoke real-world consequences, such as violence triggered by falsehoods or secondary harms stemming from altered behaviour.
Battle of minds
The idea that war is fundamentally a struggle of minds goes back to Sun Tzu, the ancient Chinese strategist who emphasised deception and mental dominance. Today, the internet is the main battleground for cognitive warfare.
The digital revolution enables precision-targeted messaging based on user behaviour and data, a tactic known as “microtargeting.” Sophisticated AI systems can now produce tailored messages that support malicious narratives, manipulating attention and emotion without a video ever being recorded or a photo ever taken.
The article highlights that such disinformation is now bleeding into the physical domain. In Ukraine, ongoing cognitive warfare campaigns have included false reports that Kyiv is hiding cholera outbreaks or operating bioweapons labs with US support—claims aimed at justifying military aggression.
During the COVID-19 pandemic, disinformation cost lives when people rejected vaccines or used dangerous home remedies. These false narratives were often strategically driven.
Microtargeting is poised to evolve dramatically as brain-machine interface technologies advance. Devices such as scalp electrodes or virtual reality headsets with immersive feedback are already being developed to record and influence neural patterns.
DARPA’s Next-Generation Nonsurgical Neurotechnology (N3) program, for example, explores how to read and write signals across multiple areas of the brain. But these technologies could also be compromised, manipulated, or repurposed as tools of cognitive attack—directly linking the brain to digital threats in unprecedented ways.
Legal gap
The existing laws of war are based on physical violence—bullets, bombs, and troop movements—not psychological manipulation. But what if mental influence can produce equally devastating consequences? Can targeted disinformation that causes mass casualties be considered an “armed attack” under the UN Charter? There is currently no consensus.
Even in active war zones, cognitive attacks blur the distinction between lawful deception and prohibited treachery. For instance, imagine a vaccination campaign used covertly to collect DNA data and map tribal loyalties. This breach of medical neutrality could qualify as perfidy, a war crime, if cognitive tactics are recognised as legitimate elements of warfare.
Developing regulations
How do we respond to this new threat landscape? First, the article urges, we must broaden our understanding of “force.” The UN Charter prohibits the “threat or use of force,” but only in the physical sense. Yet when a foreign actor floods your country with false alerts about a health crisis to incite panic, the outcome may resemble a military blockade in its disruptive impact.
While cognitive threats were flagged as early as 2017 in the Tallinn Manual’s Rule 70 on cyber warfare, existing legal norms have yet to adapt to this changing reality.
Second, leaders must affirm that psychological harm is real harm. Though injuries from war are usually imagined as physical wounds, post-traumatic stress has long been acknowledged as a legitimate outcome of conflict. Mental trauma inflicted by precision-targeted disinformation deserves similar recognition.
Finally, if the traditional laws of war fall short, the article argues that the international community should turn to human rights law for guidance. Existing legal instruments already protect freedom of thought, conscience, and opinion, and prohibit propaganda for war.
These rights could form the legal shield needed to protect civilians from cognitive manipulation. States are bound to uphold them both domestically and beyond their borders.
By Nazrin Sadigova