Fusion has long been touted as the holy grail of energy production. Ever since fusion was first demonstrated in 1932 at the Cavendish Laboratory of Physics, University of Cambridge, where I am now working on a PhD, scientists and engineers have attempted to design and demonstrate fusion reactions that produce more energy than they consume, a concept known as breakeven. Last week, this long sought-after and ever-elusive breakeven goal was achieved; at a December 13th press conference, U.S. officials announced the result.

This article revisits Cold War-era thinking about inadvertent escalation to consider how Artificial Intelligence (AI) technology (especially AI augmentation of advanced conventional weapons) could, through various mechanisms and pathways, affect inadvertent escalation risk between nuclear-armed adversaries during a conventional crisis or conflict. Will AI-enabled capabilities increase inadvertent escalation risk? How might AI be incorporated into nuclear and conventional operations in ways that affect escalation risk? Are existing notions of inadvertent escalation still relevant in the digital age? The article unpacks the psychological and cognitive features of escalation theorising (the security dilemma, the 'fog of war', and military doctrine and strategy) to examine whether and how the characteristics of AI technology, against the backdrop of a broader political-societal dynamic of the digital information ecosystem, might increase inadvertent escalation risk. It speaks to the broader scholarship in International Relations – notably 'bargaining theories of war' – which argues that the impact of technology on the causes of war occurs through its political effects, rather than through tactical or operational battlefield alterations. In this way, it addresses a gap in the literature on the strategic and theoretical implications of the AI-nuclear dilemma.
Will the use of artificial intelligence (AI) in strategic decision-making be stabilizing or destabilizing? What are the risks and trade-offs of pre-delegating military force (or automating escalation) to machines? How might non-nuclear states (and non-state actors) leverage AI to put pressure on nuclear states? This article analyzes the impact on strategic stability of the use of AI in the strategic decision-making process, in particular the risks and trade-offs of pre-delegating military force (or automating escalation) to machines. It argues that AI-enabled decision support tools, by substituting for human critical thinking, empathy, creativity, and intuition in the strategic decision-making process, will be fundamentally destabilizing. This risk will be compounded if defense planners come to view AI's 'support' function as a panacea for the cognitive fallibilities of human analysis and decision-making. The article also considers the nefarious use of AI-enhanced fake news, deepfakes, bots, and other forms of social media manipulation by non-state actors and state proxy actors, which might cause states to exaggerate a threat from ambiguous or manipulated information, increasing instability.

How might nuclear deterrence be affected by the proliferation of artificial intelligence (AI) and autonomous systems? How might the introduction of intelligent machines affect human-to-human (and human-to-machine) deterrence? Are existing theories of deterrence still applicable in the age of AI and autonomy? The article builds on the rich body of work on nuclear deterrence theory and practice and highlights some of the variegated and contradictory – especially human cognitive-psychological – effects of AI and autonomy on nuclear deterrence. It argues that existing theories of deterrence are not applicable in the age of AI and autonomy, and that introducing intelligent machines into the nuclear enterprise will affect nuclear deterrence in unexpected ways, with fundamentally destabilising outcomes. The article speaks to a growing consensus calling for conceptual innovation and novel approaches to nuclear deterrence, building on nascent post-classical deterrence theorising that considers the implications of introducing non-human agents into human strategic interactions.