J. Aitkin

 

It is perhaps one of the biggest clichés to come out of Hollywood: the misapplication or malevolence of AI reducing humanity and civilisation to a radioactive crisp. But is it realistic? The immediate impulse is to say no, but in a reality where the US Department of Defense recently invested $1 billion in order to “compete, deter, and, if necessary, fight and win the wars of the future”, can we be so sure? AI is taking up an increasingly large space in our cultural imagination, and while it may seem like science fiction, it may become a very real concern for military strategists and humanity at large in the near future.

 

Before I demonstrate the issues with the increased use of AI in a military setting, and particularly in strategy, let me first sketch the geopolitics of the scenario. Investment in this area seems inevitable and roughly equal amongst the major powers; however, the resources necessary for the creation and proliferation of AI are not so evenly distributed. While there are numerous types of AI serving multiple purposes, their creation depends on a few key inputs, amongst them semiconductors, AI accelerators, and the appropriate software.

 

Currently the last of these is relatively open. Much AI software is open source, such as Google’s TensorFlow, but most of the development takes place within the United States, with Silicon Valley the obvious hub for innovation. Other powers such as China could also advance in this field relatively easily but are at present struggling with the difficulty of catching up to the talent the US possesses. However, this may reverse in the near future: in April 2018 China’s Ministry of Education released an ‘AI Innovation Action Plan’ intended to galvanise the field (Allen 2019).

 

But the key chokepoint in the AI industry lies not in the software, which is typically replicated overseas relatively quickly, but in the hardware: more specifically, the semiconductors crucial to the creation of ‘AI accelerators’. The advantage here lies with the US and its allies Japan and the Netherlands, which produce the bulk of the product, and while China is making efforts to produce its own hardware, the nature of semiconductor manufacturing makes this especially difficult (Imbrie 2019).

 

The developing geopolitics of this situation are fascinating, and Andrew Imbrie lays out several possible scenarios in an article for the IISS. These detail ‘Shenzhen’ or ‘Silicon Valley’ futures in which one nation or the other makes advances in software, hardware, or both, with a variety of impacts on the global balance of power (Imbrie 2019). But I would argue that regardless of what the world looks like when AI becomes commonplace in a strategic setting, even in a particularly underdeveloped and basic form, the result will likely be disaster.

 

Plotting the path of technological development is so difficult it might as well be a fool’s errand, but for the sake of prediction I will divide the currently forecast applications of AI into two categories: reconnaissance and decision-making. The former is theoretically incredibly destabilising, designed on the premise that an artificial intelligence could scan satellite imagery of an area and detect even the most complexly disguised nuclear weapon silos, submarines, and vehicles, thus increasing the likely success of a ‘disarming strike’ that would nullify enemy weaponry and render nuclear war ‘winnable’. This is referred to as ‘first-strike capability’, and if confidently achieved it would dramatically change the face of nuclear strategy. It is a fear held by many intellectuals in the field; however, those whose specialities lie closer to the software needed to create such an environment are less worried. They cite a lack of training data or the likelihood of an unacceptable number of false positives that would make the technology practically useless; this is often accompanied by belief in an AI plateau, the claim that interest in AI development occurs in waves and that current levels of research and funding are unsustainable and bound for stagnation (Geist and Lohn 2018).
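
To see why even a highly accurate detector can be practically useless, consider a rough, purely illustrative calculation. The numbers below are hypothetical and not drawn from any of the cited sources; the point is only the base-rate arithmetic.

```python
# Illustrative back-of-the-envelope calculation (hypothetical numbers):
# even a very accurate detector drowns real targets in false alarms
# when genuine launch sites are vanishingly rare in the imagery.

tiles_scanned = 10_000_000   # satellite image tiles examined (assumed)
true_sites = 100             # tiles that actually contain a disguised site (assumed)
true_positive_rate = 0.95    # detector finds 95% of real sites (assumed)
false_positive_rate = 0.01   # detector wrongly flags 1% of empty tiles (assumed)

hits = true_sites * true_positive_rate
false_alarms = (tiles_scanned - true_sites) * false_positive_rate

print(f"Real sites flagged:    {hits:,.0f}")          # roughly 95
print(f"False alarms:          {false_alarms:,.0f}")  # roughly 100,000
print(f"Chance a flag is real: {hits / (hits + false_alarms):.4%}")
```

Under these invented assumptions, fewer than one in a thousand alerts would correspond to a real target, which is the sense in which sceptics call the technology practically useless for planning a disarming strike.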

 

While I have my doubts concerning the stagnation hypothesis, I do agree that reconnaissance technology imbued with AI shouldn’t be our biggest concern, partly because of the evasive technology I believe would accompany such a development, but also due to the limitations listed above. Additionally, the nuclear postures of major powers such as China and Russia place a distinct emphasis on the survivability of their arsenals, which they pursue by various means, slowing the advance towards theoretical ‘first-strike capability’. However, these limitations do little to blunt the danger of the other military application of artificial intelligence. In fact, according to the very same experts previously referenced, it has the potential to usher in one of the most dangerous periods in history.

 

Decision-making AI could take multitudinous forms, but its main function is self-evident: collating and assessing the current strategic situation and suggesting which action to take. Superficially this appears stabilising: a machine that can rationalise its way to the most logical outcome while being quicker than any human. However, the problems become clear when we consider how it compares with reconnaissance AI, which faltered on a lack of training data that would likely leave it unable to fulfil its aim of identifying enemy missiles. A decision-making AI suffers from the same issue.

 

There aren’t many nuclear engagements to analyse from which the machine could theorise an ideal outcome. So how would it learn, or even operate usefully? The first option is simulated training data, used to school the AI in the tenets of nuclear strategy. This, once again, sounds reasonable in the abstract, but when employed by one nation-state against another, would it not simply result in technology geared towards engineering a winnable nuclear war, regardless of the lives sacrificed? And if not this, then how are we meant to program the ethics of machines whose main task is to make decisions concerning nuclear weaponry (Payne 2018)?

 

A potential solution to this dilemma would be to remove automation from the highest level of decision-making and instead have an algorithm act as an intelligence gatherer and threat-level indicator, not so much advising strategies of engagement as estimating the likelihood that any engagement will occur at all. This runs into the same issue of data, only this time from an outside source. In simulations of advisory AI there is often a looming threat of data-poisoning attacks: cyber-terrorists or other governments supplying false data to artificially raise or lower the threat level (Fitzpatrick 2019). The risk of data poisoning is present across all of these applications, but it is here that it becomes the most pressing issue.
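
Data poisoning is easier to picture with a concrete, if entirely artificial, example. The sketch below uses synthetic data, a toy scikit-learn classifier, and an assumed attack, none of which is drawn from the cited sources; it simply shows how quietly relabelling a fraction of ‘hostile’ training examples as benign lowers the threat level the model reports for the same intelligence picture.

```python
# Toy illustration of a data-poisoning attack on an advisory "threat level"
# classifier. Everything here is synthetic: invented indicators, a toy model,
# and an assumed attack, not a description of any real system.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic "intelligence indicators": higher values loosely mean more hostile activity.
X = rng.normal(size=(5000, 5))
y = (X.sum(axis=1) + rng.normal(scale=2.0, size=5000) > 0).astype(int)  # 1 = elevated threat

clean_model = LogisticRegression().fit(X, y)

# Poisoning: an adversary quietly relabels 30% of the genuinely hostile
# training examples as benign, nudging the model to under-report threats.
y_poisoned = y.copy()
hostile_idx = np.flatnonzero(y == 1)
poison_idx = rng.choice(hostile_idx, size=int(0.3 * len(hostile_idx)), replace=False)
y_poisoned[poison_idx] = 0

poisoned_model = LogisticRegression().fit(X, y_poisoned)

# The same moderately worrying pattern of indicators now looks calmer.
case = np.full((1, 5), 0.3)  # mildly elevated across all indicators
print("Clean model threat estimate:   ", round(clean_model.predict_proba(case)[0, 1], 2))
print("Poisoned model threat estimate:", round(poisoned_model.predict_proba(case)[0, 1], 2))
```

The particular numbers are beside the point; what matters is that the advisory output can be steered by whoever controls the data the system learns from, without ever touching the model itself.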

 

Now, while these are all large issues that pose existential threats, some optimistic readers may see them as surmountable, to be overcome through improvements in security, design, or simple caution. This is true to an extent; however, the real danger lies in the pressure on competing nations to deploy these developments in their early and untested stages, regardless of the problems they may pose. A military using AI in these ways will always be quicker to respond and more nimble than one without, and not employing such developments as soon as possible will put a state at a distinct disadvantage. Without adequate restraint, we may find ourselves in a race to one-up each other that ends up opening the door to these possibilities. When faced with an AI able to optimise your destruction, you can either securitise, bolstering your defences and arsenals, or ‘jump the ladder’ of escalation by launching a pre-emptive strike (Payne 2018).

 

These threats may seem like issues of the distant future, or perhaps too dystopian to even be conceivable. But the militarisation of AI is occurring, and the ethics of the issue should be contemplated sooner rather than later. The Red Cross and the OECD are already campaigning for greater regulation, but as an arms race looms it remains to be seen whether we are already too late.

 

 

Author Biography

 

J. Aitkin is currently an undergraduate Politics and International Relations student at the University of York and a member of the PSA.

Image credit: Shutterstock

 

 

Resources

 

Allen, Gregory C. 2019. “Understanding China’s AI Strategy: Clues to Chinese Strategic Thinking on Artificial Intelligence and National Security.” Center for a New American Security, 6 February 2019.

https://www.cnas.org/publications/reports/understanding-chinas-ai-strategy

 

Fitzpatrick, Mark. 2019. “Artificial Intelligence and Nuclear Command and Control.” Survival 61 (3): 81–92.

https://www.iiss.org/blogs/survival-blog/2019/04/artificial-intelligence-nuclear-strategic-stability

 

Geist, Edward, and Andrew J. Lohn. 2018. “How Might Artificial Intelligence Affect the Risk of Nuclear War?” RAND Corporation. https://www.rand.org/pubs/perspectives/PE296.html.

 

Imbrie, Andrew. 2019. “Mapping the Terrain: AI Governance and the Future of Power.” IISS, 17 December 2019. https://www.iiss.org/blogs/survival-blog/2019/12/mapping-the-terrain-ai-governance.

 

Payne, Kenneth. 2018. “Artificial Intelligence: A Revolution in Strategic Affairs?” Survival 60 (5): 7–32.

https://www.iiss.org/publications/survival/2018/survival-global-politics-and-strategy-octobernovember-2018/605-02-payne