Google’s policy shift on military AI sparks fears of global conflict escalation
Analysis by Bloomberg
Google's decision to abandon its longstanding principle of not using artificial intelligence (AI) for weapons development marks a "dangerous change" that could trigger the uncontrolled escalation of conflicts worldwide, according to Bloomberg columnist Parmy Olson.
In her analysis, Olson argues that the tech giant’s abandonment of one of its "most important ethical positions" comes amid the rapid evolution of AI and its increasingly pervasive nature, per Caliber.Az.
Demis Hassabis, head of Google’s AI division, explained that the decision reflects the "rapid evolution" of the technology. However, Olson strongly disagrees with this reasoning, stating that "the notion that ethical principles should 'evolve' with the market is incorrect."
She warns that this shift in policy, motivated by technological progress, could lead to catastrophic consequences in warfare. "Abandoning a code of ethics for the sake of war could lead to consequences that spiral out of control," she writes.
Olson highlights that integrating AI into military systems would automate decision-making, creating "automatons issuing responses at a speed that leaves no time for diplomacy." Without human oversight, AI systems making decisions in real time would strip away the critical human judgment needed in warfare, raising the risk that conflicts escalate into deadly confrontations.
Furthermore, Olson hopes that Google’s policy shift will pressure governments to adopt internationally binding rules governing the military use of AI. She stresses that the human element in decision-making must remain central and calls for a global ban on fully autonomous weapons capable of independently selecting targets. Olson also suggests creating an international body to enforce security standards and regulations for military AI systems.
On February 5, Google updated its AI principles, removing its previous commitment not to use its AI technologies for weapons development or human surveillance. The shift has alarmed critics who fear the consequences of deploying such technology for military purposes without sufficient ethical constraints.
By Tamilla Hasanova