UK MoD – Human-Machine Teaming

This week we begin our triple-bill of Joint Concept Notes from the UK Ministry of Defence. You can see them all here. These are basically the documents which lay out how the UK military will develop in the future, and what its priorities and aims are over the next few years.

The first Note we are looking at is the one focussed upon Human-Machine Teaming, available here. This considers how machines will work alongside people in order to get the best out of both, creating the optimum operating environment for military success.

Here’s what we thought:

I found this to be a really insightful paper, outlining the MoD’s position on AI and robotics, and in particular the role of the human in relation to the machine. While there are too many topics covered to address in a short blog response, I found it interesting that the report highlights the potential for technology to shift the balance of power, allowing minor actors to increasingly punch above their weight. This ties in with the report’s other comments about use, and the need to adapt quickly to changing demands. Using the example of a 2005 chess competition, the paper shows how a team of American amateurs with weak computers beat superior players using more powerful computers, demonstrating the importance of the interface between the human and the machine (39–40). While computing power is certainly important, such power used poorly or by unskilled operators does not guarantee success, and so we should not take success against ‘weaker’ powers for granted.

I was also particularly taken by a segment in Annex A at the end of the report in which the authors address the question of autonomy. Here, the report suggests that, for the foreseeable future, no machine possesses ethical or legal autonomy (57), within the scope of the report’s own definition. The report then re-states the MoD’s position from September 2017 that ‘we do not operate, and do not plan to develop, any lethal autonomous weapons systems’ (58), which is an interesting remark, given the MoD’s own definition of autonomy as describing ‘elements with agency and independent decision-making power’ (57).

 Mike Ryder, Lancaster University 

This concept note is a great overview of the major issues related to the employment of AI-based technologies alongside humans in conflict situations. Something the note mentions which I hadn’t given much thought to is the potential re-evaluation of state power not in terms of GDP, but in terms of human capital and expertise relating to robotics and AI. Whilst in my work I usually consider AI in weapon systems, that mostly relates to tactical rather than strategic advantage; the impact of AI in a strategic sense is something I haven’t really thought about. As the note says (para.1.9), Russia and Singapore are nations that, whilst they have a modest GDP in comparison to other states, have a high level of expertise in the underlying sciences fuelling AI and robotics. This has the potential to really change the way the world works, reshaping the landscape of power that has dominated international affairs since WWII.

Something else which caught my eye was the mention of how manufacturers can limit defence capabilities (para.1.14). By creating systems using certain techniques and methods, manufacturers can leave the military locked into that system, and it might not be open to analysis or further exploitation by the military. In my research on AI in weapons, this can be problematic when the military, particularly while new systems are being tested, want to know what the underlying code does and how it works. Not knowing this can have serious impacts on military effectiveness and legal compliance.

Whilst the note is focussed upon human-machine teams, something that stood out to me in paras 2.8-2.14 is the large number of tasks that the MoD intends to automate. To me, this seems to reduce the human role significantly. Perhaps, then, the ultimate goal of human-machine teaming is not to have humans and machines working in symbiotic teams, but to have humans managing large machine teams instead.

What is quite striking about this report is how similar its vision is to papers produced by the US military in the 1990s about network-centric warfare and systems-of-systems approaches to fighting wars. On one level, it does seem like the same vision of technological superiority in warfare is just being regurgitated. On another, however, perhaps the vision is in vogue again simply because we are now close to having the technologies needed to make it a reality.

Joshua Hughes, Lancaster University 


What do you think?