The Challenges Ahead for Artificial Intelligence in the Military

By Amy Ertan

[Image: AI Guidance in the Battlefield]

Innovation is key to military advantage. Technologies that make forces faster, quieter, better informed or more precise offer tactical, operational and strategic advantages over an adversary. Artificial Intelligence (AI) technology offers benefits across the military and is currently being developed and deployed around the world. As with all new technologies, these emerging innovations carry security implications that, in the case of the military, may directly influence conflict and human life.

This short piece covers three aspects of AI and the military: first, it explores the categories of current military AI applications; second, it offers an overview of the potential implications of using AI in these spaces; and finally, it proposes some recommendations for implementing AI as a responsible tool that mitigates risk. One of the most important recommendations is for military institutions to engage with a wide range of stakeholders: AI is a dual-use tool, with applications spanning military use and civilian life, and it is therefore essential that militaries engage constructively with relevant interdisciplinary research on AI.

AI In Conflict

‘Narrow AI’ – artificial intelligence focused only on specific tasks, such as image recognition – is the form of AI technology currently available, and it features in a range of military applications (discussed below). In February 2019, the US Department of Defense Artificial Intelligence Strategy envisioned AI as a tool “poised to change the character of the future battlefield”. Michael Horowitz identifies three classes of application in a military environment:

  1. Where machines act without human supervision, for example autonomous unmanned aerial systems (UAVs, also known as ‘drones’), which may include systems with lethal ‘weaponised’ capabilities
  2. Where AI enables the processing and interpretation of large volumes of data. This shows particular promise in automating intelligence monitoring, from ‘pattern-of-life’ surveillance tasks to AI-driven image recognition.
  3. Where AI aids or conducts the command and control of war, for example through the use of decision-assistance ‘expert system’ platforms

AI is already demonstrating an ability to outperform human agents at certain tasks, whether in precision, speed, or the complex calculations required for functions such as enhanced missile targeting and swarm drone control. Significant research is already underway across military-corporate industries: the United States has pledged almost $2.3bn to this effort through DARPA and the Department of Defense’s Joint Artificial Intelligence Center (JAIC). Part of the reasoning behind this investment is the desire to minimise the physical human presence on the battlefield and thereby save lives. Moreover, AI can automate manual and mundane tasks that military personnel currently undertake, such as reconnaissance and surveillance of the adversary.

In cyberspace (the online world of interconnected information technology, including the internet), AI technology promises new cyber defensive and offensive capabilities. These include automating reconnaissance, enabling adaptive cyber-attacks, and improving cyber-defence, to name a few. Military and corporate actors are exploring AI for these purposes and its potential applications. A clear example is DARPA and Lockheed Martin’s joint project BLADE (‘Behavioral Learning for Adaptive Electronic Warfare’), which uses AI to create a smart communications jammer that could prove an invaluable tool on the battlefield.

The Implications of AI In the Military

Alongside the opportunities AI may enable, these systems could lead to unexpected and undesirable consequences. These can be divided loosely into ‘intentional’ and ‘unintentional’ harms.

Intentional Harms

Once deployed, AI may be used against its creators and their allies. The 2018 joint report on the malicious use of AI predicts threats to digital, physical and political security across three key aspects:

  1. An expansion of existing threats: Once AI can automate or improve on human processes, launching an attack will require relatively less expertise, time and intelligence from the adversary. 
  2. An introduction of new threats: AI systems will create new weapons for the adversary. The systems themselves could be tools, or vectors, for attack.
  3. A change to the typical character of threats: AI will alter the nature of the threat landscape. Increased precision and information processing amplify the impact when these systems are misused, enabling the exploitation of new flaws and exacerbating challenges relating to cyber attribution.

We need to consider the consequences of AI-supported systems being sold, borrowed, hijacked, or repurposed by other actors. The repurposing of military AI has already proved a relevant threat in the form of drone technology. Jack MacDonald highlights how non-state actors repurposed UAV (Unmanned Aerial Vehicle) technology in Ukraine to gain enhanced imaging surveillance, while in Iraq and Syria groups associated with Islamic State repurposed commercial drones to attack state military aircraft. These examples provide both evidence and a warning that the AI technologies giving militaries access to advanced intelligence can also be turned against them by adversaries, creating new vulnerabilities. In cyberspace, a precise AI-driven weapon could end up being sold to an adversary who would then also benefit from the anonymity that cyber-operations provide. This trend is already being observed with the rise of ‘malware as a service’, a threat that concerns the UK’s National Cyber Security Centre.

When it comes to intentional harm, human creativity should not be dismissed. Actors who cannot develop, purchase, or deploy AI-enabled technology will explore how to ‘game’ other states’ AI systems. These adversaries may undermine the integrity of AI systems so that they misclassify information, for example, or even modify their expected behaviour, which can have fatal consequences on the battlefield. This raises the need for deployed technology to be reviewed regularly, checking the functionality of the system as well as collecting feedback from the variety of human agents who use, or are affected by, the system in question. A review system would ideally include some form of formal ‘audit’ of the algorithms, building on emerging corporate audit services. Currently, no best-practice audit framework exists for a military context, limiting opportunities to identify and challenge an algorithm that may be vulnerable to attack (or faulty, or acting in a way that causes undesirable consequences).
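To make the integrity threat concrete, the sketch below shows, in deliberately simplified form, how a small and deliberate perturbation to an input can flip a classifier’s decision. It is a toy illustration only: the scikit-learn model, the synthetic ‘benign’/‘hostile’ data and the attack step are assumptions invented for the example, and bear no relation to any real military system.

```python
# Toy 'evasion attack' on a linear classifier (illustrative only).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical training data: two clusters standing in for
# 'benign' (class 0) and 'hostile' (class 1) sensor readings.
X = np.vstack([rng.normal(-2.0, 1.0, size=(200, 2)),
               rng.normal(+2.0, 1.0, size=(200, 2))])
y = np.array([0] * 200 + [1] * 200)

model = LogisticRegression().fit(X, y)

# A 'hostile' input that the model classifies correctly.
x = np.array([[1.5, 1.0]])
print("original prediction:", model.predict(x)[0])        # expected: 1 ('hostile')

# The attacker nudges the input just past the decision boundary,
# moving against the model's weight vector.
w, b = model.coef_[0], model.intercept_[0]
margin = (x @ w + b) / np.linalg.norm(w)                   # signed distance to boundary
x_adv = x - (margin + 0.01) * w / np.linalg.norm(w)

print("perturbed prediction:", model.predict(x_adv)[0])    # flips to 0 ('benign')
print("perturbation size:", float(np.linalg.norm(x_adv - x)))
```

Even in this toy setting, the flipped decision would be difficult to spot without exactly the kind of regular functional review and audit described above.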

Unintentional Harms

Many books have been written solely on the issues of legality, ethics, trust and the long-term ‘boomerang’ effect on society. Military innovation is often discussed in a vacuum. AI, however, is fundamentally a dual-use tool, and its military use could lead to unintended consequences. It therefore requires careful consideration.

The first consideration is the impact AI use could have on human agents and the human-machine relationship. Shannon French (2019, forthcoming) has highlighted the risks of overreliance, automation bias and personnel de-skilling in the US military. If an AI system’s predictions come to be taken as more than just guidance, and human supervision over activities on the battlefield wanes, overreliance could lead to fatal accidents.

A second consideration is that complex AI systems can unintentionally obscure the rationale behind decisions. AI systems are becoming increasingly complex, and the introduction of deep learning raises questions around explainability, the justification for a decision, and unpredictability. How will responsibility and accountability be reconciled when AI systems make military decisions? How should human agents react when an AI system makes a decision that the authority in the room disagrees with? And when AI begins to operate autonomously on the battlefield, could the complexity of the system lead to unpredictable actions and potentially to fatalities? It is increasingly conceivable that the future involves AI systems pitted against AI systems, speeding up conflict potentially beyond human comprehension. Military institutions face the difficult choice of determining the ideal level of machine autonomy for future conflict. Boiled down, that choice is between greater human control over the battlefield and the speed advantage that comes from automated AI decision-making.

A final consideration is the relationship AI would have with wider strategic direction. AI competition is global and fierce, with corporate and government aspirations in AI pursued just as aggressively as military interests. In discourses on national security, academics have referred to an ‘AI arms race’. While the US remains the only state to have released a defence-specific AI strategy, it is not entirely clear how the UK plans to innovate in this area, or how other state actors interpret US investment in this space. How states invest in military AI technologies will be shaped by the strategies of other actors. For example, opting to forgo lethal AI weaponry on the battlefield may put a state at a relative disadvantage against more aggressively minded adversaries. A security-driven reaction to this uncertainty would be for a state to develop its own AI weapons to avoid being caught off-guard, an action that could trigger an adversary to do the same. In rationalising this stockpiling on security grounds, AI technology invokes the classic security dilemma: in attempting to preserve their own security, states stockpile AI weaponry in ways that increase instability, despite that instability being undesirable to both sides. The ‘arms race’ stockpiling of military AI differs from kinetic arms races in much the same way as ‘cyber weaponry’ does, in that virtual algorithms are ‘invisible’: their development cannot be monitored or attributed as easily as, for example, nuclear weaponry. Nonetheless, the pressures on state actors to keep up with a perceived adversary’s innovation may trump calls for a cautious approach to AI implementation.
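The logic of that dilemma can be made concrete with a toy model. The sketch below encodes the situation as a simple two-player game with hypothetical payoffs (the numbers are invented purely to show the structure, not drawn from any real assessment): mutual restraint leaves both states better off, yet stockpiling is each state’s best response to uncertainty about the other, so mutual stockpiling is the only stable outcome.

```python
# Toy security-dilemma game with made-up payoffs (illustrative only).
from itertools import product

STRATEGIES = ["restrain", "stockpile"]

# payoff[(a, b)] = (payoff to state A, payoff to state B); higher is better.
payoff = {
    ("restrain",  "restrain"):  (3, 3),   # mutual restraint: stable, preferred by both
    ("restrain",  "stockpile"): (0, 4),   # A is caught off-guard
    ("stockpile", "restrain"):  (4, 0),   # B is caught off-guard
    ("stockpile", "stockpile"): (1, 1),   # mutual stockpiling: costly instability
}

def is_equilibrium(a, b):
    """Neither state can improve its payoff by unilaterally changing strategy."""
    best_a = all(payoff[(a, b)][0] >= payoff[(alt, b)][0] for alt in STRATEGIES)
    best_b = all(payoff[(a, b)][1] >= payoff[(a, alt)][1] for alt in STRATEGIES)
    return best_a and best_b

for a, b in product(STRATEGIES, STRATEGIES):
    marker = "<- stable outcome" if is_equilibrium(a, b) else ""
    print(f"A={a:9s} B={b:9s} payoffs={payoff[(a, b)]} {marker}")

# Only (stockpile, stockpile) is stable, even though (restrain, restrain)
# would leave both states better off.
```

The point is not the particular numbers but the structure: as long as being caught off-guard is the worst outcome, cautious restraint is hard to sustain unilaterally.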

Recommendations to Overcome Emerging Challenges

Responsible AI implementation should treat future-planning as a key part of any project, and resist the temptation to deploy a system immediately based on outdated requirements (or requirements that do not reflect the harms outlined above). Promoting a culture of responsibility and responsible implementation across the military is key. A best-practice model needs to be identified to guide the development of responsible AI systems that operate effectively and securely. Education and culture are important to responsibly planning, developing and monitoring an AI system. Moreover, behavioural aspects of the human-machine relationship should not be ignored, as they often are in the robotics and software development cycle.

Attention should remain focused on how to mitigate the possible harms highlighted above. Checks and balances are needed to offset potential misuse or misunderstanding of AI technology and its limitations. Engaging with the wider research communities across disciplines will be highly beneficial in helping to anticipate the undesirable uses and risks of AI systems. 

But AI research and development is an incredibly fragmented landscape. Conversations are happening in silos. Often the socio-technical consequences or longer-term impacts of system implementation are not part and parcel of the AI integration process. While non-military research institutes such as AI Now focus specifically on such impacts, and behavioural and human-machine researchers exist within military innovation departments, militaries have been slower to demonstrate a commitment to engaging with such research. In September 2019, the US Department of Defense issued a hiring call for an ethicist to guide artificial intelligence deployment; time will tell whether this role will represent a move towards genuine cross-sector, interdisciplinary cooperation.

This requirement for a diversity of perspectives represents a wider challenge for defence, government and society. A vast amount of research is happening in cyber security, law and civil society that is not adequately shared, and there must be greater military engagement with these communities. Although AI presents some significant risks, there are opportunities to leverage interdisciplinary findings on what is, at its core, a dual-use tool that can also benefit civil society.



Amy Ertan is an interdisciplinary doctoral researcher within the EPSRC-funded Centre for Doctoral Training in Cyber Security at Royal Holloway, University of London. Her research interests centre around the security implications, unintended consequences, and strategic considerations of AI implementation in the military, particularly where AI may be ‘weaponised’. Amy studied PPE at the University of Oxford, before moving to Barclays where she now works part-time with the Strategic Cyber Intelligence team. Amy currently holds a Data Protection Fellowship at the Institute for Technology and Society in Rio de Janeiro, and is a 2019 FSISAC BCD Cybersecurity Scholarship recipient. You can find her on Twitter @AmyErtan
