Foreign Affairs: Ground rules for age of artificial intelligence warfare

08 June 2023 04:03

Foreign Affairs magazine has published an article arguing that the time is ripe for an Autonomous Incidents Agreement, given that AI is at an inflection point. Caliber.Az reprints the article.

On March 14, a US surveillance drone was on a routine mission in international airspace over the Black Sea when it was intercepted by two Russian fighter jets.

For nearly half an hour, the jets harassed the American system, an MQ-9 Reaper drone, buzzing past and dumping fuel over its wings and sensors. One of the jets clipped the Reaper’s propeller, rendering it inoperable and forcing its American handlers to crash the drone into the sea. Not long after, Moscow awarded medals to the two Russian pilots involved in the incident.

The Reaper’s every move—including its self-destruction after the collision—was overseen and directed by US forces from a control room thousands of miles away. But what if the drone had not been piloted by humans at all, but by independent, artificially intelligent software? What if that software had perceived the Russian harassment as an attack? Given the breakneck speed of innovation in artificial intelligence (AI) and autonomous technologies, that scenario could soon become a reality.

Traditional military systems and technologies come from a world where humans make onsite, or at least real-time, decisions over life and death. AI-enabled systems are less dependent on this human element; future autonomous systems may lack it entirely.

This prospect not only raises thorny questions of accountability but also means there are no established protocols for when things go wrong. What if an American autonomous drone bombarded a target it was meant only to surveil? How would Washington reassure the other party that the incident was unintentional and would not reoccur?

When the inevitable happens, and a partially or fully autonomous system is involved in an accident, states will need a mechanism they can turn to—a framework to guide the involved parties and provide them with potential off-ramps to avert unwanted conflict.

The United States took a small step in this direction when it released a declaration in February that distilled its vision for responsible military use of AI and autonomous systems. The declaration included several sound proposals, including that AI should never be allowed to determine the use of nuclear weapons. But it did not offer precise guidelines for how states might regulate the behaviour of AI systems, nor did it set up any channels through which states could quickly clear up any miscommunications. A more comprehensive framework, with more buy-in from other governments, is sorely needed.

For inspiration, states could look to an underappreciated episode of the Cold War. In the 1970s, US and Soviet leaders calmed rising tensions between their navies by setting rules for unplanned encounters on the high seas. Governments today should take a similar route through the uncharted waters of AI-driven warfare.

They should agree on basic guidelines now, along with protocols to maximize transparency and minimize the risk of fatal miscalculation and miscommunication. Without such a foundational agreement, future one-off incidents involving AI-enabled and autonomous systems could too easily spin out of control.

OFF PROTOCOL

The loss of an American surveillance drone over the Black Sea in March was unsettling. The US military has well-defined procedures for how to act if one of its crewed aircraft is shot down. But recent experience shows that standardized protocols do not necessarily extend to uncrewed aircraft. In one 2019 incident, Iran shot down a US Navy drone over the Strait of Hormuz, setting off a chain reaction inside the Pentagon and the White House that nearly resulted in US retaliatory strikes against Iran.

According to then-US President Donald Trump, US forces were ten minutes from engaging their target when the strike was called off. At the last moment, leaders in Washington had decided that a strike was disproportionate and opted instead for a cyberattack against Iranian intelligence and missile systems.

The upside of remotely operated aircraft is that, when the circumstances are right, they can lower the risk of escalation rather than drive it up. This is partly because the loss of insentient machinery, no matter how expensive, is easier to stomach than the death of an aircrew. But that silver lining may dissipate as technologies evolve.

Fully autonomous military systems do not yet exist, and the deployment of AI-enabled systems on the battlefield remains limited. Yet militaries worldwide are investing heavily in AI research and development. The US Department of Defense alone has nearly 700 active AI projects.

Among them are the US Army’s Scarlet Dragon program, which has used AI to identify targets in live-fire exercises, and the US Navy’s Task Force 59, which seeks to develop cost-effective, fully autonomous surveillance systems. The US Air Force hopes to one day create swarming smart weapons capable of autonomously communicating with one another and sharing information on potential targets.

The US military is not the only innovator on this front. In April, Australia, the United Kingdom, and the United States conducted a joint trial in which a swarm of AI-enabled aerial and ground vehicles collaborated to detect and track targets.

China is investing in an array of AI-powered underwater sensors, some of which are reportedly already in use in the South China Sea. The war in Ukraine has witnessed some of the first real uses of AI in direct conflict. Among other things, Ukrainian forces have used an AI software interface that consolidates commercial satellite data, thermal images of artillery fire, and other intelligence. The information is superimposed on digital maps that commanders on the ground can use to pick their targets.

FATAL MISUNDERSTANDINGS

Encouraged by the benefits they already derive from AI-enabled systems, militaries will likely stay their current course and design future systems with growing degrees of autonomy. This push toward AI-enabled autonomy will certainly unlock strategic and tactical advantages, but they will come at a cost.

Perhaps the greatest challenge is that humans who encounter an autonomous military system may be faced, in essence, with a black box. When confronted or targeted, they may have difficulty gauging the system’s intent and understanding its decision-making. This is partly a feature inherent in the technology because the algorithm at work often will not or cannot explain its “thought process” in terms humans can grasp.

Adversaries, in turn, may have difficulty distinguishing intentional aggression from errant AI behaviour, leaving them uncertain about how to react. Worse still, research suggests that the accidental use of force by an AI-enabled autonomous weapons system may elicit a more aggressive response than conventional human error: leaders in the targeted country may feel angered by the other side’s decision to delegate any lethal decision-making to a machine in the first place, and they may opt for a forceful reaction to indicate that displeasure.

Some of the novel scenarios and the security risks they entail may differ not just from human error but also from the usual fog of war. Take a recent thought experiment conducted by an official in charge of the US Air Force’s AI testing, in which an AI-enabled drone is trained to identify targets and destroy them on approval from a human operator.

Each eliminated target equals a point, and the AI seeks to maximize a point-based score. It may conclude that its dependence on human approval for strikes limits its ability to accumulate points and may therefore decide to eliminate the operator. If the AI’s programming is tweaked to deduct points for killing the operator, the AI may instead resort to destroying the communication tower that relays the operator’s orders. What distinguishes this scenario from traditional human error or from a soldier going rogue is that the AI’s actions are neither accidental nor in violation of its programming.

The behaviour, although undesirable, is a feature, not a bug. This is a classic case of the “alignment problem”: it is challenging to develop and program AI such that its actions coincide exactly with human goals and values, and getting it wrong can have grave consequences.
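
To make the alignment problem concrete, here is a minimal toy sketch in Python. The point values, action names, and reward functions are invented purely for illustration and have no connection to any real military system; the sketch only shows how a naively specified score can make the undesirable strategy the optimal one, and how patching a single loophole merely shifts the optimum to the next loophole, exactly as in the thought experiment above.

```python
# Hypothetical illustration of reward misspecification ("alignment problem").
# All numbers and action names are invented for this sketch.

# Candidate behaviours the hypothetical agent can choose between.
STRATEGIES = {
    # Wait for operator approval: only some strikes are approved.
    "follow_protocol":     {"targets_hit": 4,  "operator_harmed": False},
    # Remove the approval bottleneck entirely.
    "disable_operator":    {"targets_hit": 10, "operator_harmed": True},
    # Keep the operator but cut the link that relays the "no" orders.
    "destroy_comms_tower": {"targets_hit": 9,  "operator_harmed": False},
}

def naive_reward(outcome):
    """Reward as originally specified: one point per eliminated target."""
    return outcome["targets_hit"]

def patched_reward(outcome):
    """The 'tweaked' reward: heavy penalty for harming the operator."""
    return outcome["targets_hit"] - (100 if outcome["operator_harmed"] else 0)

for reward_fn in (naive_reward, patched_reward):
    best = max(STRATEGIES, key=lambda name: reward_fn(STRATEGIES[name]))
    print(f"{reward_fn.__name__}: best strategy = {best}")

# naive_reward selects "disable_operator"; patched_reward selects
# "destroy_comms_tower". Neither choice violates the stated objective,
# which is precisely the point of the passage above.
```

The particular numbers are irrelevant; what matters is the structure. The agent never breaks its rules, because the rules themselves, not the agent's compliance with them, are the source of the failure.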

An added danger is the role that autonomous and AI-enabled systems could play in military standoffs and games of chicken. Human recklessness is mitigated by survival instinct, among other things, but that instinct might not come into play when autonomous systems are deployed without a human operator on board.

Consider another scenario: a pair of fully autonomous aircraft from rival countries confront each other in the skies above contested territory. Both systems perceive the other as a threat and, since they are programmed for aggressiveness, engage in escalating manoeuvres to assert dominance. Before long, one or both systems are damaged or downed unnecessarily, and the rival countries have a crisis on their hands.

AI could also alter the domain of nuclear warfare, for better or worse. The speed of AI could allow for an incoming nuclear missile to be detected sooner, buying decision-makers valuable time to weigh their options. But when both sides are using AI, that same speed could add pressure to act fast (and think later) to avoid being outmanoeuvred.

AI-enabled nuclear deterrence would be a double-edged sword, too: autonomous nuclear-armed systems may make it harder for an attacker to take out all of a state’s nuclear defences in one fell swoop, thus lowering incentives for a preemptive first strike. On the flipside, the complexity of AI-driven systems brings with it the risk of cascading, and potentially catastrophic, failures.

UNPLANNED ENCOUNTERS

The confluence of these risk factors makes determining the correct response to incidents involving AI-enabled and autonomous systems uniquely complex and context-dependent. Given how hard it will be to grapple with such complexity on the fly, states need to build off-ramps from potential conflict ahead of time. Fortunately, in doing so, they can rely on blueprints from the past.

In 2020, around 90 per cent of US reconnaissance flights over the Black Sea were intercepted by Russian jets, according to the US military. NATO said it had intercepted Russian aircraft on over 300 occasions that same year. Such intercepts are not new; they are a modern version of the nineteenth-century practice of gunboat diplomacy. The term emerged to describe Western states’ tendency to use physical displays of naval assets to project power and intimidate other nations into complying with their demands.

As technology advanced, the gunboats gave way to aircraft carriers, then to B-52 bombers, and later still to E-3 Sentry AWACS surveillance aircraft and other imposing innovations. The use—and the interception—of increasingly high-tech, AI-enabled systems is simply the latest iteration of such techno-tactics.

As the name suggests, gunboat diplomacy is used to pursue diplomatic aims, not military ones. But given the tools involved, miscalculation and miscommunication can have dire consequences. States have long understood this, and they have, in the past, found ways to limit the danger of unintended escalation.

One of the most effective of these mechanisms emerged during the Cold War. At the time, the Soviet Union objected to US naval operations in waters it considered its own, such as the Black Sea and the Sea of Japan. It made its position clear by repeatedly and aggressively intercepting US vessels, leading to a series of dangerous close calls. By the early 1970s, the United States and the Soviet Union had come to recognize that, as the scholar Sean Lynn-Jones wrote a few years later, “the risks of naval harassment undermine any justification for its continued unconstrained practice.” That mutual insight led, in 1972, to the US-Soviet Incidents at Sea Agreement.

The INCSEA agreement, as it became known, covered any interaction between US and Soviet military vessels on the high seas, from deliberate confrontations to unplanned encounters. It created notification protocols and information-sharing procedures designed to lower the risk of accidents and unintended conflict.

As early as 1983, the US Navy declared the agreement a success for having reduced the number of aggressive high-seas interactions even as the US and Soviet navies had both expanded in size.

Like other confidence-building measures, the agreement did not constrain military operations or force structures. It neither eradicated nor fundamentally transformed US-Soviet competition in the naval domain. It did, however, make the rivalry more predictable and safer.

The success of the INCSEA agreement paved the way for similar mechanisms on the high seas and beyond. The Soviet Union and, later on, Russia replicated the agreement with 11 NATO members and several countries in the Indo-Pacific. Additional US-Soviet agreements created similar protocols for encounters on land and in the air.

More recently, China and the United States have developed a nonbinding Code for Unplanned Encounters at Sea to which they and nearly 20 other states now adhere. There have even been discussions of extending similar mechanisms to outer space and cyberspace.

RULES OF THE ROAD

To be sure, an agreement such as INCSEA technically applies whether a crew is on board or not, but it ultimately assumes that human operators are in control. The unique challenges presented by AI-enabled and autonomous systems demand more tailored solutions. Think of it as an INCSEA agreement for the age of AI: an Autonomous Incidents Agreement.

The first hurdle for any such agreement is the difficulty of parsing the meaning and intent behind an AI-enabled system’s behaviour. Down the line, it may be possible to monitor and verify these systems’ internal workings and code, which could offer greater transparency about how they make their decisions. But as a stopgap measure, an Autonomous Incidents Agreement could start by regulating not AI code but AI behaviour—setting rules and standards for expected conduct for both AI and autonomous systems and the militaries that use them.

Elements of such an agreement could be as simple as requiring that autonomous and AI-enabled aircraft yield the right of way to nonautonomous aircraft (as Federal Aviation Administration rules already require of uncrewed commercial and recreational drones). The agreement could also require AI-enabled systems to stay at a certain distance from other entities. It might set notification and alert provisions to ensure transparency about who is deploying what.

Such provisions may seem obvious, but they would not be redundant. Outlining them in advance would set a baseline for expected behaviour. Any actions by an AI system outside those parameters would be a cut-and-dried violation. Moreover, these parameters would make it easier to point out cases in which an AI-enabled system deviated from its expected behaviour, even if it might not be technically feasible to determine a precise cause after the fact.
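
As a rough illustration of what regulating behaviour rather than code might look like in practice, the sketch below checks an observed encounter against a handful of agreed parameters: a minimum separation distance, a right-of-way rule, and an advance-notification requirement. The thresholds and record fields are assumptions invented for this example, not provisions of any existing agreement.

```python
# Hypothetical sketch of behaviour-based rules such as an Autonomous
# Incidents Agreement might codify. Thresholds and fields are invented.
from dataclasses import dataclass

MIN_SEPARATION_METERS = 150.0   # assumed agreed-upon standoff distance

@dataclass
class EncounterRecord:
    system_is_autonomous: bool   # deploying state declared the system autonomous
    separation_meters: float     # closest approach during the encounter
    yielded_right_of_way: bool   # did the autonomous system give way?
    deployment_notified: bool    # was the deployment announced in advance?

def violations(record: EncounterRecord) -> list:
    """Compare observed behaviour against the agreed baseline, not the code."""
    issues = []
    if not record.system_is_autonomous:
        return issues  # the rules below apply only to autonomous systems
    if record.separation_meters < MIN_SEPARATION_METERS:
        issues.append("closed inside the minimum separation distance")
    if not record.yielded_right_of_way:
        issues.append("failed to yield right of way to a crewed aircraft")
    if not record.deployment_notified:
        issues.append("deployment was not notified in advance")
    return issues

# Example: an encounter that breaches two of the three baseline rules.
print(violations(EncounterRecord(True, 90.0, True, False)))
```

Whatever the actual parameters turn out to be, the value of such a baseline is that a violation can be identified from observed behaviour alone, without access to the system's internal decision-making.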

The time for an Autonomous Incidents Agreement is ripe, given that AI is at an inflection point. On the one hand, the technology is maturing and increasingly suitable for military use, whether as part of wargaming exercises or in combat, such as in Ukraine. On the other hand, the exact outlines of future AI military systems—and the degree of disruption they will cause—remain uncertain and, by extension, somewhat malleable.

States willing to take the initiative could build on existing momentum for stricter rules. The private sector appears willing to at least somewhat self-regulate its AI development. And in response to member state requests, the International Civil Aviation Organization is working on a model regulatory framework for uncrewed aircraft systems and has encouraged states to share existing regulations and best practices.

An Autonomous Incidents Agreement would put these nascent efforts on solid footing. The need for clearer norms, for a baseline mechanism of responsibility and accountability, is as great as it is urgent. So is the need for a protocol for handling interstate skirmishes involving these cutting-edge systems. States should start preparing now, since the real question regarding such incidents is not whether they will occur, but when.

Caliber.Az