UK MoD – Human-Machine Teaming

This week we begin our triple-bill of Joint Concept Notes from the UK Ministry of Defence. You can see them all here. These are the documents which lay out how the UK military will develop in the future, and what its priorities and aims are over the next few years.

The first Note we are looking at focusses upon Human-Machine Teaming, available here. It considers how machines will work alongside people in order to get the best out of both, creating the optimum operating environment for military success.

Here’s what we thought:


I found this to be a really insightful paper, outlining the MoD’s position on AI and robotics, and in particular, the role of the human in relation to the machine. While there are too many topics covered to address in a short blog response, I found it interesting that the report highlights the potential for technology to shift the balance of power, and to allow minor actors to increasingly punch above their weight. This ties in with the report’s other comments about use, and the need to adapt quickly to changing demands. Using the example of a 2005 chess competition, the paper shows how a team of American amateurs with weak computers beat superior players using more powerful machines, demonstrating the importance of the interface between the human and the machine (39–40). While computer power is certainly important, such power used poorly or by unskilled operators is no guarantee of success, and so we should not take success against ‘weaker’ powers for granted.

I was also particularly taken by a segment in Annex A at the end of the report in which the authors address the question of autonomy. Here, the report suggests that for the foreseeable future, no machine possesses ethical or legal autonomy (57), within the scope of the report’s own definition. The report then re-states the MoD’s position from September 2017 that ‘we do not operate, and do not plan to develop, any lethal autonomous weapons systems’ (58), which is an interesting remark, given the MoD’s own definition of autonomy as describing ‘elements with agency and independent decision-making power’ (57).  

 Mike Ryder, Lancaster University 


This concept note is a great overview of the major issues related to the employment of AI-based technologies alongside humans in conflict situations. Something the note mentions which I hadn’t given much thought to is the potential re-evaluation of state power not in terms of GDP, but in terms of human capital and expertise relating to robotics and AI. Whilst in my work I usually consider AI in weapon systems, that mostly relates to tactical rather than strategic advantage; considering the impact of AI in a strategic sense is something I haven’t really thought about. As the note says (para.1.9), Russia and Singapore are nations that, whilst they have a modest GDP in comparison to other states, have a high level of expertise in the underlying sciences fuelling AI and robotics. This has the potential to really change the way the world works, changing the landscape of power that has dominated the world since WWII.

Something else which caught my eye was the mention of how manufacturers can limit defence capabilities (para.1.14). By creating systems using certain techniques and methods, manufacturers can lock the military into that system, which might then not be open to analysis or further exploitation. In my research on AI in weapons, this can be problematic if the military want to know what the underlying code does and how it works, in particular when new systems are being tested. Not knowing this can have serious impacts on military effectiveness and legal compliance.

Whilst the note is focussed upon human-machine teams, something that stood out to me in paras 2.8-2.14 is the large number of tasks that the MoD intends to automate. To me, this seems to reduce the human role significantly. Perhaps, then, the ultimate goal of human-machine teaming is not to have humans and machines working in symbiotic teams, but to have humans managing large machine teams instead.

What is quite striking about this report is the similarity of its vision to papers produced by the US military in the 1990s about network-centric warfare and systems-of-systems approaches to fighting wars. On one level it does seem like the same vision of technological superiority in warfare is just being regurgitated. On another, however, perhaps the vision is in vogue again simply because we are close to having the technologies needed to make it a reality.

Joshua Hughes, Lancaster University 

What do you think?

In relation to autonomous weapon systems, how much human control is ‘meaningful’? 

This week we consider what level of human control over killer robots is meaningful. This has been a topic of great discussion at the UN as part of the deliberations about whether or not these systems should be banned. Indeed, Paul Scharre has just written an interesting blog on this very subject, see here. 


Here’s what we think: 


It’s great that this question should come up on TTAC21 as it’s something I’m particularly interested in at the moment. From my position, human control isn’t really very ‘meaningful’ and hasn’t been for a long time. If anything, drone pilots don’t so much represent a lack of control as highlight for us the lack of control, or lack of human agency, that has been present in the military for a very long time. Go back even as far as the Second World War and technology was already starting to take over many of the duties of actually ‘waging war’. Skip on a few years and you get to the nuclear bomb, wherein one single individual ‘presses the button’, though in reality the decision to use the bomb was made many years before and by a great many people. At what point is the single decision to press the red button meaningful? I argue not at all, if the weapon exists alongside the common will to use it. If not pilot A pressing the button, then the military can simply send pilot B or pilot C. And while we’re at it, we had better make sure it lands where we tell it to. Better get a machine to do the job…


Mike Ryder, Lancaster University 


This question really is an important one. Although I study international law, I think it is perhaps more important than the legal questions over AWS. The approach which Paul Scharre suggests, asking what role we would still want humans to play if we had a technologically perfect autonomous weapon system, is a great one. I think it is the question which will lead the international community towards whatever answer they come to in relation to meaningful human control.

For me, I’m coming to the conclusion that unless an instance of combat is of high intensity, and military personnel from your own side or civilians are going to die without immediate action at the speed of decision-making that only an AWS can offer, it would always be preferable to have a human overseeing lethal decisions, if not actually making them. Whilst the legal arguments can be made convincingly for both no automation and full automation of lethal decision-making, I cautiously argue that where technology has the required capabilities, lethal decision-making by an AWS could be lawful. Ethically, however, I would prefer a higher standard which would include humans in the decision-making process. But ‘ethically desirable’ is more than ‘meaningful’, and this is why I think Scharre has gotten the jump on the Campaign to Stop Killer Robots; reaching a ‘meaningful’ level of human involvement is a minimum threshold, but ‘ethically desirable’ can go as high as anybody wants. Of course, this then makes it harder to discuss and so may tie up the CCW discussions for longer – although I hope it will be worth it.

For me, ‘meaningful’ comes down to a human deciding that characteristics XYZ make an individual worthy of targeting. In an international armed conflict, that might be them wearing the uniform of an adversary. In a non-international armed conflict, it may be that they have acted in such a way as to make them an adversary (i.e. directly participating in hostilities). But that human decision can still be pre-determined and later executed by a machine. The temporal and physical distance does not alter the decision that XYZ characteristics mean that the potential target becomes a definitive target. Others will disagree with my conception of ‘meaningful’, and I hope it will generate discussion, but this is also why I favour Scharre’s method of moving forward.

Joshua Hughes, Lancaster University 

Shaw – Robot Wars: US Empire and Geopolitics in the Robotic Age

Here’s our second article under discussion this month, Robot Wars: US Empire and Geopolitics in the Robotic Age by Ian Shaw. This work follows on from his great book Predator Empire, which is not only a well-argued piece on the technology-based containment of the globe by the US, but also includes magnificent accounts of the history of targeted killing amongst other things.


Here’s what we thought of his article:

This reading group has been going for almost nine months now, and in that time it’s fair to say we’ve read a fair bit on drone warfare and autonomous weapons. From all of our reading thus far, I’m not sure that this article actually says anything specifically new about the field, or indeed offers any sort of radical insight. As is typical for a piece grounded (forgive the pun) in the Geographical and Earth Sciences, the paper is awash with ‘topographies’ and ‘spaces’, and yet all of this when drone warfare has been around for quite some time. And of course, let us not forget that battlefields are constantly shifting spaces, and this is not the first shift in the ‘landscape’ of warfare, as the invention of the tank, the aeroplane and the submarine have already gone to show. In this sense then, I’m not really sure how much this paper is adding to our understanding of drones, or drone warfare – nor indeed empire and geopolitics.

The one thing I did find interesting however, in a non-TTAC21 specific context, was this notion of robots as ‘existential actors’ (455), and autonomy then as an ‘ontological condition’. Again, though I don’t think this is anything new per se, I find it interesting that now we are starting to see a shift in the language around drones, as other disciplines are slowly getting to grips with the impact of drones on our conception of space and the relationship between the human and the machine.

Mike Ryder, Lancaster University

I thought this article was interesting, and I liked the reconceptualization of various aspects of targeted killing, modern war, and robotic conflict into abstract geopolitical ideas. However, the part I found most interesting was Shaw’s use of Deleuze’s notion of the dividual, where life is signified by digital information, rather than something truly human. As Shaw himself notes, in signature strikes by remote-controlled drones, the targets are dividuals who simply fit the criteria of a terrorist pattern of life, for example. With future autonomous weapons, killing by criteria is likely to be the same, but a lethal decision-making algorithm is likely to determine all targets based on criteria, whether something simple like an individual’s membership of enemy armed forces, or working out whether patterns of life qualify an individual as a terrorist. In this sense, not only do the targets become dividuals, as they are reduced to data points picked up by sensors, but those deploying autonomous weapons become dividuals too, as their targeting criteria, and therefore their political and military desires, become algorithmic data also. It seems that one of the effects of using robotics is not only the de-humanising of potential targets, but also the de-humanising of potential users.

Joshua Hughes, Lancaster University

UPDATE: added 11th March 2019, written earlier.

I second Mike’s criticisms—the author uses a tremendous amount of verbiage to ultimately say very little. Buried beneath all the talk of human-machine teaming ‘actualiz[ing] a set of virtual potentials and polic[ing] the ontopolitical composition of worlds’ and ‘aleatory circulations of the warscape’ are three predictions about a potential future world order. First, the author suggests that swarms of autonomous military drones will make ‘mass once again…a decisive factor on the battlefield’. Secondly, they describe the co-option of the US’ global network of military bases into a planetary robotic military presence called ‘Roboworld’, which aims ‘to eradicate the tyranny of distance by contracting the surfaces of the planet under the watchful eyes of US robots’. Finally, the employment of AWS will fundamentally change the nature of the battle space as, ‘[r]ather than being directed to targets deemed a priori dangerous by humans, robots will be (co-)producers of state security and non-state terror’, ushering in an ‘age of deterritorialized, agile, and intelligent machines’.

Josh has already mentioned the idea of people being targeted on a dividual basis, but I found the above mention of ‘deterritorialisation’, along with the phrase ‘temporary autonomous zone of slaughter’, particularly interesting, owing to the latter phrase’s anarchist pedigree. The author’s comments about the ‘ontological condition’ of robots notwithstanding, AWSes are unlikely to be considered citizens of their respective nations any time soon. As they fight one another at those nations’ behest, but without any personal stake in the outcomes, we see a form of conflict that is perhaps fundamentally not as new as it is often made out to be, but rather a modern re-incarnation of the mercenary armies of the past or, even, of some sort of gladiatorial combat.

Ben Goldsworthy, Lancaster University

What do you think?

Should robots be allowed to target people? Based on combatant status?

Here is our second question this month on autonomous weapon systems. For reasons of space, the title paraphrases it slightly. Here is the full question which went out to all network members:

If the technology within a lethal autonomous weapon system can comply with the law of armed conflict, should it be allowed to target people? Should it be able to target people based on their membership of a group, for example, membership of an enemy military or a rebel group?

Here’s what we thought:

This question poses a massive moral and ethical dilemma, and not just for autonomous weapon systems (AWS). Membership of any organisation, including, notably, the State, has always been problematic, but in a ‘traditional’ military setting, we tend to work around this by drawing a clear distinction between those in uniform and those not. Of course this construct is undermined as soon as you introduce the partisan, or the non-uniformed fighter, and as we have seen in recent years, terrorist organisations seek to avoid marking their members completely. So there is the problem of identification to start with… But then things get trickier when you come to question the terms of membership, or the consent given by any ‘member’ of an organisation to be a part of said organisation, and quite what that membership entails.

Take citizenship for example: we don’t formally ‘sign up’, but we are assumed to be a part of said organisation (i.e. the State), so would be targets of the ‘group’ known as the State in the terms set by this question. Take this argument one step further and you could have, say, ‘Members of the TTAC21 reading group’. At first glance, members of our reading group might be ‘legitimate’ targets; however, each of our ‘members’ has a different level of consent and participation within the group. Some, for example, have come along to meetings in person, or have Skyped in for an hour or two. Meanwhile others have provided comment for the blog, while others are yet to contribute anything. Are each of these members ‘members’ to the same degree? How and why can, or indeed should, we compare any one member to another? And let’s not forget the question of motivation. Some of us are members because we are actively working in the field, while some of us have different levels of interest or motivation. Does that then mean that each of us should be tarred with the same brush and classified in the same way when it comes to targeting members of our specific group?

This question is far more complex than it seems!

Mike Ryder, Lancaster University


This question really gets to the nub of why some people are concerned with autonomous weapon systems. If something is possible, should we do it? At the recent Group of Governmental Experts meeting on Lethal Autonomous Weapon Systems at the UN in November 2017, Paul Scharre put it something like this: If we could have a perfectly functioning autonomous weapon system in the future, where would we still want humans to make decisions?

It seems that most people do want human control over lethal decision-making, although some are willing to delegate this to a machine if it were to become a military necessity. However, many are dead-set against any such delegation. I think a major aspect of this is trust. Are we willing to trust our lives to machines? Many people are already doing so in prototype and beta-testing self-driving cars, and in doing so are also putting the lives of nearby pedestrians in the ‘hands’ of these self-driving cars. For many, this is unnerving. Yet, we put our lives in the hands of fellow drivers every time we go out on the road. We all know this, and we are all comfortable with this fact. Perhaps we will not be happy to delegate our transport to machines until we can trust them. I think if self-driving cars were shown to be functioning perfectly, people would begin to trust them.

With lethal autonomous systems, the stakes are much higher. A self-driving car may take the wrong turn; an autonomous weapon may take the wrong life. This is obviously a huge issue, and one that people may never become comfortable with. But here we are hypothetically considering systems which would function perfectly. I still think it will come down to whether people will trust a system to make the correct decision. And yet, there are still issues around whether a machine could ever comprehend every possible situation it could be in. An often-used example is an enemy soldier who has fallen asleep on guard duty. The law of armed conflict would allow combatants to kill this sleeping soldier simply for being a member of the enemy side. Yet, it is difficult for us to accept this when there is the possibility of capture. Capture would not be a legal requirement under the law of armed conflict, but may be a moral desire. If the programming of autonomous weapons can go beyond the law to take ethical decisions into account as well, trust in the lethal decision-making capability of machines may grow, resulting in society being comfortable with machines performing status-based targeting.

Joshua Hughes, Lancaster University


UPDATE: This entry added 04/03/2019

As Mike has said, the issue here boils down to how we would define ‘membership’, and the way it would be determined in the field. An autonomous weapon system would require some form of machine learning in order to delineate between valid and non-valid targets based on the evidence it can gather in each case. Machine learning can either be supervised, where categories are provided and the algorithm attempts to determine which one best covers a given item, or unsupervised, where the algorithm groups items based on whichever characteristics it finds best distinguish them, and the categories emerge dynamically from this process of classification. Both methods are fraught with peril when applied to social media advertising, let alone the application of lethal force.
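The distinction between the two regimes can be made concrete with a deliberately toy sketch. The data, the neutral labels ‘A’ and ‘B’, and both algorithms (a nearest-centroid classifier and a crude two-cluster k-means) are invented for illustration only; they stand in for the vastly more complex models the paragraph above describes, not for any real targeting system.

```python
def dist(a, b):
    """Euclidean distance between two 2-D points."""
    return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5

def nearest_centroid(labelled, point):
    """Supervised: categories are provided in advance, and a new item is
    assigned to the category whose known examples it most resembles."""
    centroids = {
        label: tuple(sum(p[i] for p in examples) / len(examples) for i in range(2))
        for label, examples in labelled.items()
    }
    return min(centroids, key=lambda label: dist(centroids[label], point))

def two_means(points, iters=10):
    """Unsupervised: no labels are given; the algorithm splits the data into
    two groups purely by which points sit together (a crude 2-means,
    seeded with the first and last points)."""
    c0, c1 = points[0], points[-1]
    for _ in range(iters):
        g0 = [p for p in points if dist(p, c0) <= dist(p, c1)]
        g1 = [p for p in points if dist(p, c0) > dist(p, c1)]
        # Recompute each group's centroid and repeat until stable.
        c0 = tuple(sum(p[i] for p in g0) / len(g0) for i in range(2))
        c1 = tuple(sum(p[i] for p in g1) / len(g1) for i in range(2))
    return g0, g1
```

The key contrast: in the supervised case the meaning of ‘A’ and ‘B’ was fixed by whoever labelled the training data, while in the unsupervised case the two groups that emerge have no inherent meaning at all, which is the oversight problem taken up in the paragraphs that follow.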

Take a supervised training regime, where the AWS would be provided with a list of criteria that would authorise the use of force, such as a list of proscribed organisations and their uniforms to compare against, or a dataset of enemy combatants’ faces to perform facial recognition on. The application of lethal force would be only as good as the intel, and the experience of US no-fly lists shows just how much faith one should have in that. If the model is insufficiently precise (e.g. ‘apply lethal force if target is holding a weapon’), then all of a sudden a child with a toy gun is treated as an attacking jihadi, much to the consternation of its former parents. In an effort to avoid these false positives, one may be tempted to go too far the other way, handicapping the rapid analytical and decision-making powers that are often cited as an advantage of AWSes with over-restrictive classifiers. If a potential threat emerges that does not fit into any preordained model, such as a non-uniformed combatant, it will be ignored—a false negative.

An unsupervised training regime would be just as dangerous, if not more so. As Shaw points out in his discussion of ‘dividuals’, this would represent a sea change in the legal norms governing force. Not only would decisions be made based solely on the aggregate behaviour of a target, without oversight or appreciation of wider context, but we would be offloading the moral responsibility to display the reasoning behind such actions to opaque algorithms. Unsupervised training is also prone to misclassification—consider the work of Samim Winiger—and intentional manipulation—as in the case of the Microsoft AI that was reduced to a Holocaust-denying Trump supporter within a day of being released onto Twitter. Perhaps in the future, we can all look forward to a new Prevent strategy aimed at countering the growing threat of AI radicalisation.

Ben Goldsworthy, Lancaster University

What do you think?

Leveringhaus – Autonomous weapons mini-series: Distance, weapons technology and humanity in armed conflict

This week we are considering Distance, weapons technology and humanity in armed conflict from the Autonomous Weapons mini-series over on the Humanitarian Law & Policy blog from the International Committee of the Red Cross. In it, the author discusses how distance can affect moral accountability, with particular focus on drones and autonomous weapons. Please take a look yourself, and let us know what you think in the comments below.


This blog offers interesting insight into concepts of ‘distance’ in warfare. In it, the author distinguishes between geographical distance and psychological distance, and also then brings in concepts of causal and temporal distance to show the complex inter-relations between the various categories.

One of the key questions raised in the article is: ‘how can one say that wars are fought as a contest between military powers if killing a large number of members of another State merely requires pushing a button?’ The implication here, to me at least (as I have also suggested in my comments in other blogs), is a need to reimagine or reconstruct the concept of ‘warfare’ in the public consciousness. We seem stuck currently in a position whereby memories of the two world wars linger, and the public conceive of war as being fought on designated battlefields with easily recognisable sides.

While I agree with much of what the author says, where this article falls down, I think, is in the conclusion that ‘the cosmopolitan ideal of a shared humanity is [a] good starting point for a wider ethical debate on distance, technology, and the future of armed conflict.’ While I agree with the author’s stance in principle, his argument relies on both sides in any given conflict sharing the same ethical framework. As we have seen already with suicide bombings and other acts of terrorism, this is no longer an ‘even’ battlefield – nor indeed is it a battle fought between two clearly delineated sides. While such disparities exist, I find it hard to believe any sort of balance can be struck.

Mike Ryder, Lancaster University



I found this piece, and its discussion of different types of distance both interesting and illuminating. I’ve spoken with a number of students recently about distance, and how that affects their feelings regarding their own decision-making, and the consequences of it. I found it really interesting that a large proportion of students were quite accepting of the idea that moral distance makes one feel less responsible for something that happens. But, many of the same students also wanted people held responsible for their actions regardless of that moral distance. So this gives us a strange situation where people who feel no responsibility should be held responsible. I don’t think this position is unusual. In fact, I think most people around the world would agree with this position, despite it being rather paradoxical.

It is clear that from a moral perspective, an accountability gap could be created. But, as ethics and morals are flexible and subjective, one could also argue that there is no moral accountability gap. Fortunately, law is more concrete. We do have legal rules on responsibility. We’ve seen that a number of autonomous vehicle manufacturers are going to take responsibility for their vehicles in self-driving modes. However, it is yet to be seen if autonomous weapon system manufacturers will follow this lead.

Joshua Hughes, Lancaster University

Update added 25/02/2019, written earlier

This short article explores the impact of the introduction of autonomous weapon systems on the bases of distance, be that geographical, psychological, causal or temporal distance. Contemporary drone warfare is given as an example of the way in which a new technology allows war to be conducted with an increased geographical distance, but the incidence of PTSD amongst drone pilots shows that the same is not true of the psychological distance. Leveringhaus focuses on the issues posed by the increase of causal distance in assigning blame for breaches of international humanitarian law. We are unlikely to see drones in the dock at the Hague any time soon, but who will be brought before the courts in the event of an AWS-committed war crime? The programmer of the software? This poses a challenge to the entire ethical framework of respect for individual rights, part of which is the promise ‘to hold those who violate these rights responsible for their deeds.’

Ben Goldsworthy, Lancaster University

Let us know what you think

Do previous instances of weapons regulation offer any useful concepts for governing lethal autonomous weapon systems?

Here is our first question on lethal autonomous weapon systems this month. If you have any thoughts about answers, let us know in the comments.

The question for me at least is whether or not we can draw parallels between regulation of the human and regulation of the machine. The problem here is that there are no clear and simple ways of holding a machine to account, so the questions of responsibility, and therefore regulation, become problematic. We can hold a soldier to account for misusing a gun – we cannot do the same for a machine. For one thing, machines do not know, and cannot experience, the concept of human death, so how can we hold them to the same level of accountability when they cannot even understand the framework on which modern human ethics is built?

Mike Ryder, Lancaster University 


Recently, I read Steven Pinker’s The Better Angels of our Nature, in which he considers why violence has declined over centuries. One part of it looks at weapons of mass destruction. For Pinker, the main reason chemical, biological and nuclear weapons are not used regularly is not because of international law concerns around high levels of collateral damage, but more because it would break a taboo on using them. Pinker suggests that the taboo is so powerful that using weapons of mass destruction is not even in the minds of military planners when considering war plans. Autonomous weapons have the potential to be as impactful as weapons of mass destruction, but without the horrendous collateral damage concerns. Would this create an equal taboo based on the human unease at delegating lethal decision-making? I think a taboo would be created, but the likely reduction in collateral damage would make any taboo weaker. Therefore taboo is unlikely to restrict any future use of autonomous weapons.

In terms of treaty-based regulation, having been at the meetings of experts on lethal autonomous weapon systems at the UN, I think any meaningful ban on these weapons is unlikely. However, in recent years a number of informal expert manuals have been created on air and missile warfare, naval warfare, and cyber warfare. They have generally been well received, and their recommendations followed. I could imagine a situation in the future where similar ‘road rules’ are developed for autonomous weapons, interpreting the requirements of the law of armed conflict and international human rights law for such systems. This could result in more detailed regulation, as there is less watering down of provisions by states who want to score political points rather than progress talks. We will have to wait and see if this will happen. 

Joshua Hughes, Lancaster University 


Let us know what you think

Haas and Fischer – The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order

This week we begin our discussions of autonomous weapon systems. Following on from the discussions of the Group of Governmental Experts at the UN last November, more talks are taking place in February and April this year. For those not aware, an autonomous weapon system is that which can select and engage targets without human intervention – think a drone with the brain of The Terminator.

First, we are looking at ‘The evolution of targeted killing practices: Autonomous weapons, future conflict, and the international order’ by Michael Carl Haas and Sophie-Charlotte Fischer from Contemporary Security Policy, 38:2 (2017), 281–306. Feel free to check the article out and let us know what you think in the comments below.

Here’s what we thought:


I enjoyed this article, and the ways in which it seeks to engage with the future applications of AWS in what we might describe as ‘conventional’ wars, with the use of targeted killings or ‘assassinations’ by drone likely to become more common.

From my own research perspective I am particularly interested in the authors’ approach to autonomy and autonomous thinking in machines (see 284 onwards). I agree with the authors that ‘the concept of “autonomy” remains poorly understood’ (285), but suggest that perhaps here the academic community has become too caught up in machinic autonomy. If we can’t first understand human autonomy, how can we hope to apply a human framework to our understanding of machines? This question to me, seems to be one that has been under-represented in academic thinking in this area, and is one I may well have to write a paper on!

Finally, I’d like to briefly mention the question of human vs machinic command and control. I was interested to see that the authors suggest AWS might not become ubiquitous in ‘conventional’ conflicts when we consider the advantages and disadvantages of their use for military commanders (297). To me, there is a question here of at what point machinic intelligence or machine-thinking ‘trumps’ the human. Certainly our technology as it stands still puts the human as superior in many types of thinking, yet I can’t believe that it will be long before computers start to totally outsmart humans, such that this will even remain a question. There is also then the question of ‘training cost’. In a drawn-out conflict, what will be easier and cheaper to produce: a robot fighter which comes pre-programmed with its training, or the human soldier who requires an investment of time and resources, and who may never quite take on his or her ‘programming’ to the same level as the machine? Something to think about, certainly…

Mike Ryder, Lancaster University


I quite liked this piece, as it is common to hear fellow researchers of autonomous weapons say that such systems will change warfare but then provide no discussion of how this will happen. Fortunately, this paper does just that. I particularly liked the idea that the use of autonomous systems for ‘decapitation’ strikes against senior military, political, or terrorist leaders/influencers could reduce not only overall collateral damage and the number of friendly deaths, but also the general level of destruction a conflict could cause. Indeed, I’ve heard a number of people suggest that present-day drones offer a chance at ‘perfect’ distinction, in that they are so precise that the person aimed at is almost always the person who dies, often with little collateral damage. It is usually poor intelligence analysis, resulting in the wrong person being targeted in the first place, that is responsible for the unfortunately high number of civilian deaths in the ‘drone wars’. Use of AI could rectify this; autonomous weapons could also reduce the need for substantial intelligence analysis if they were one day capable of identifying the combatant status of ordinary fighters, and of identifying specific high-level personalities through facial or iris recognition. If this becomes possible, autonomous weapons could have the strategic impact of a nuclear bomb against enemy fighters, without causing much collateral damage.

Joshua Hughes, Lancaster University

UPDATE: added 18th March 2019, written earlier

This article presents predictions on the impact of autonomous weapons on the future of conflict. Building on a ‘functional view’ of autonomy that distinguishes degrees of autonomy across different functional areas, such as ‘health management’, ‘battlefield intelligence’ and ‘the use of force’, the authors discuss the issues and incentives of applying different degrees to different functions. They also detail the US’s ongoing drone campaigns before extrapolating the trends seen therein into a future of greater weapon autonomy. First, they see an increased focus on ‘leadership targeting’, believing that ‘autonomous weapons would be a preferred means of executing counter-leadership strikes, including targeted killings.’ Secondly, they propose such tactics as a necessary response to the resurgence of ‘hybrid warfare’, with ‘[a]ttacking leadership targets in-theatre…be[ing] perceived as a viable and effective alternative to an expansion of the conflict into the heartland of an aggressive state opponent’. The authors conclude with their belief that ‘advanced Western military forces’ “command philosophies” will militate against the employment of autonomous weapons, which require surrendering human control, in some types of targeted killing scenarios.

I found the article to have a rather unexpected utopian takeaway. Where a previous author proposed that a shift to swarm warfare would make ‘mass once again…a decisive factor on the battlefield’, this paper predicts the development of a more scalpel-like approach of targeted leadership killings. The thought of generals and politicians being made immediately responsible for their military adventures, rather than however many other citizens (and auxiliaries) they can place between them and their enemies, seems like a rather egalitarian development of statecraft. It reminded me, of all things, of the scene in Fahrenheit 9/11 in which the director asks pro-war congressmen to enlist their own children in the Army and is met with refusal. It’s easier to command others to fight and die on your and your government’s behalf, but the advent of the nuclear age presented the first time in which the generals had just as much ‘skin in the game’ as everyone else, and nukes remain unused. Perhaps this future of leadership targeting by tiny drones can achieve the same result, but without taking the rest of us along for the apocalyptic ride. The risk of a small quadcopter loaded with explosives flying through one’s office window seems like it would be a strong incentive for peacemaking, a potentially welcome by-product of the reduction of the ‘tyranny of distance’ (or, rather, the obviation of insulation) that the earlier author had discussed.

Ben Goldsworthy, Lancaster University

Let us know what you think in the comments below

Autonomy in Future Military and Security Technologies: Implications for Law, Peace, and Conflict

Three members of our group, along with other colleagues, took part in an international workshop at the Universitat de Barcelona in February 2017 titled ‘Sense and Scope of Autonomy in Emerging Military and Security Technologies’. Coming out of this, a compendium of research papers has been put together in order to offer a contribution to discussions at the Group of Governmental Experts meeting on Lethal Autonomous Weapon Systems at the United Nations Office at Geneva, 13th-17th November 2017.

This compendium of articles is due to be published by the Richardson Institute at Lancaster University, UK. Due to technical reasons, the report is provisionally being hosted here in order that delegates at the GGE, and those interested in the subject of lethal autonomous weapon systems, may read the works whilst discussions in Geneva are taking place.

The compendium contains:

Formal presentation of the compendium

Milton Meza-Rivas, Faculty of Law at the University of Barcelona, Spain.

Some Insights on Artificial Intelligence Autonomy in Military Technologies

Prof. Dr Maite Lopez-Sanchez, Coordinator, Interuniversity Master in Artificial Intelligence, University of Barcelona, Spain

Software Tools for the Cognitive Development of Autonomous Robots

Dr. Pablo Jiménez Schlegl, Institute of Robotics & Industrial Informatics, Spanish National Research Council, Polytechnic University of Catalonia, Spain

What is Autonomy in Weapon Systems, and How Do We Analyse it? – An International Law Perspective

Joshua Hughes, University of Lancaster Law School and the Richardson Institute, Lancaster University, UK

Legal Personhood and Autonomous Weapons

Dr Migle Laukyte, Department of Private Law, University Carlos III of Madrid.

A Note on the Sense and Scope of ‘Autonomy’ in Emerging Military Weapon Systems and Some Remarks on the Terminator Dilemma

Maziar Homayounnejad, Dickson Poon School of Law, King’s College London, UK


The compendium is available here: Richardson Institute – Autonomy in Future Military and Security Technologies Implications for Law, Peace, and Conflict

A courtesy translation of the introduction which presents the articles is available here (in Spanish): Translation of the compendium presentation text in Spanish

Why bother with super-soldiers when we could just use machines?

Here are our final comments in our theme of super-soldiers. A number of people have wondered what the point of super-soldiers is, suggesting that either a large number of conventional soldiers or machines could create the same effects. This is interesting because it implies not only that conventional soldiers might lack the capabilities needed for the future of conflict, but also that neither conventional nor super-soldiers are likely to be good enough for it, whereas machines may be.

So, here are our thoughts on this question:

In regards to whether States should bother with super-soldiers when machines could be used instead, I will consider this in relation to machines completely replacing soldiers (ignoring whether this is feasible or not). Super-soldiers, it can be said, would provide States with the best of both worlds: they would possess capabilities that exceed regular soldiers but would still maintain the ‘human connection traditionally associated with war’ (even if it is recognised that this connection is diminished by human enhancement). Sawin acknowledges the concern that a lack of human connection could lead to ‘rogue killing machines at the centre of a battlefield’. It may well be the case that States will eventually seek killer robots as a replacement for regular soldiers, but in the meantime super-soldiers provide a midway point by possessing machine-like qualities without the perceived greater risk of killer robots. Furthermore, the utilisation of super-soldiers does not necessarily mean that machines will not be used in the future. The development of super-soldiers could be perceived as just another step in the move towards killing machines.

Liam Halewood, Liverpool University 

To adjust this question slightly, I might suggest why bother with soldiers at all, when we could just use machines? With the increasing ‘robotisation’ of the armed forces, and indeed civilian life, we have something of a crisis emerging in society today where the human is becoming more like the machine, and the machine is becoming more like the human. Where will this stop? Why do we even try to make the machine more human in the first place?

Ultimately, I think, the ‘super-soldier’ will come about whether it is funded by the military or not. As a society, we have been working for many years now to alter the human condition – to extend and improve the quality of human life through the application of science and technology. Whether we like it or not, the human soldiers of the next century will most likely be relatively ‘super’ compared to the soldiers we were sending into the trenches in the First World War.

But then why not just send in the machines in the first place? I think here the question becomes one of what we foresee the purpose of war being in the future. Are the wars of territory now long past? If so then will we be in a position again where we need humans for humanitarian or ‘hearts and minds’ purposes when a robot is so much more effective at killing? When we start to consider the future implications of war in space, again it would seem the robot would be a preferable option. But then how can one sue for peace with a robot? Can a robot ever adapt to a new environment as well as a human?

Mike Ryder, Lancaster University

When we talk about machines in warfare, we usually talk of autonomous weapon systems, or killer robots. There are generally two camps: those who think they are an affront to ethics and want them banned, and those who see their utility as weapons in future warfare. Nobody is really pushing for the total absence of human beings from lethal decision-making because they think it is a good thing. Indeed, most people who are seen as being in the ‘pro’ camp are usually arguing that there is nothing explicitly unlawful about such systems, rather than that they are a good idea.

This leads us to the main point: often in warfare there are tricky decisions to be made. No programmer or manufacturer of an autonomous weapon could ever imagine all scenarios, even if said manufacturer only employed veterans. These situations often rely upon human judgment, and sometimes humans get it wrong. But I was at a lecture recently where an ex-army officer said that these tricky situations are one of the reasons that an officer class exists: to take such decisions on behalf of their subordinates and suffer the consequences for them.

There will be some decisions that are black and white, such as ‘he is wearing an enemy uniform, he is my enemy, therefore I can target him’. This wouldn’t require a referral to a higher authority, whether the entity making the decision is human or machine. But where there are a number of civilians around, and the level of military advantage that could be gained compared to the collateral damage that could be expected is unclear, a trickier decision appears. Referral to a human here would be really useful for avoiding unfortunate incidents with large or unnecessary collateral damage. But a system that demands human attention, with the human needing to be brought up to speed and immersed in the situation straight away, may take too long: an enemy may escape before a decision is made, or a school bus may come into the expected blast radius. Thus, cognitively-enhanced humans would be a great addition here if they could comprehend complex situations quickly and make decisions faster. One of the main reasons for potentially using autonomous weapons is the increased speed at which they can operate; enhanced humans would also increase the speed of operations, without necessarily losing the human touch in complex decisions and scenarios.

Joshua Hughes, Lancaster University.

What do you think?

Autonomous Weapons and International Humanitarian Law OR Killer Robots are Here. Get Used to it – Harris

Here, we discuss ‘Autonomous Weapons and International Humanitarian Law OR Killer Robots are Here. Get Used to it’ by Shane Harris, Temple International and Comparative Law Journal, 2016, Vol.30(1), pp.77-83.

It’s available here.

Essentially, Harris argues two things:

(1) It is inevitable that human beings will build weapons systems capable of killing people on their own, without any human involvement or direction; and (2) It is conceivable that human beings could teach machines to recognize and distinguish when the use of lethal force complies with international humanitarian law.

We all dig into it in our own individual ways, and we have a few different views on this subject. So, hopefully we will start a lively debate.

If you have any comments, please leave them at the bottom.

Without further ado, here’s Mike:

This article is very much a summation of ‘where we are at’ when it comes to autonomous weapon systems, and the author places killer robots as an inevitability (77), and one that we should perhaps embrace as machines are far more reliable than humans at snap decision making (83).

However, I fundamentally disagree with the notion that robots could (and indeed should) be taught international law, and so kill only when it is legal to do so. The issue here is one of interpretation: the article seems to fail to take into account the fact that most modern-day enemies do not mark themselves distinctly as combatants, as their unknowability is the primary advantage they are able to exercise against a vastly superior military threat. The distinction here is never so clear-cut.

There is also, in my mind, the issue of reciprocity and the expectations associated with combat. Here, war seems to be defined in strictly Western terms, where there is a law of war as such, agreed upon by both sides. But again, terrorists don’t adhere to this structure. With no attributability, there is no stopping a terrorist dressed as a civilian carrying out an atrocity, and no way a robot could interpret that ‘civilian’ as a terrorist within the structures of a strict legal framework. While I do not dispute that robots can theoretically be made more ‘reliable’ than humans, the question for me is what exactly does ‘reliable’ mean, and should the law ever be seen as a computer program?

Mike Ryder, Lancaster University


I will start off by saying I always like a good controversial article that goes against established conventions. As is obvious from the name alone, that is what this article tries to do. However, I do not think it succeeds, and it does not live up to its potential.

My problem with the article is not in what he claims, but how he supports it. I think, completely outside a moral or legal judgement, I agree with his two hypotheses he sets out in the start: 1) It is inevitable that human beings will build weapons systems capable of killing people on their own, without any human involvement or direction; and (2) It is conceivable that human beings could teach machines to recognize and distinguish when the use of lethal force complies with international humanitarian law.

However, he fails to actually provide arguments for these theses. The article is very short (7 pages, with a large part of these made up of sources), and 4 of these pages are descriptions of historical programmes. While there are many lessons to be learnt from historical military innovation, the most important lesson from history is that you cannot generalize from the past to predict the future with certainty. This is not support for a strong statement that it is “inevitable” that such systems will be developed. His argument that it will be conceivable to develop systems that could comply with IHL is supported by mentioning two military R&D programmes and the claim that many technologists would answer “let us try.” Again, that is not support for his argument, and it does not provide any insight into the state of technology. Additionally, the small number of sources, and the quality of some of them, do not help. It is a shame he could not provide solid backing for his statements, because I actually agree with them – this is also what I have been working on myself. However, this article does not provide sufficient proof. And I have not even started on the shoddy argumentation and generalizations in the last section, or the US-centrism.

He is not the only one generalizing about the development of autonomous weapon systems without taking nuances into account; this is seen all too often in the debate, unfortunately. However, the entire point of departure of his article is these two hypotheses, so in this case a solid argument is actually needed. He also ignores the established literature on what is needed for defence innovation. I would recommend the article ‘The Diffusion of Drone Warfare? Industrial, Organizational and Infrastructural Constraints’ by Gilli and Gilli (2016) as a rebuttal of his arguments, one with more solid material to support its point of view.

Maaike Verbruggen, Stockholm International Peace Research Institute (SIPRI)


Harris’ article makes two arguments: (1) humans will build autonomous weapon systems (AWS); and (2) it is conceivable that AWS could comply with the law of armed conflict. I completely agree with him.

Firstly, I think AWS will be built, whether they are as independent as a Terminator that decides everything apart from mission parameters or, as Harris suggests, an advanced drone that can initiate individual attacks when authorised. The fact is that fewer people want to join militaries, and we are on the verge of what could be very unpredictable and very dangerous times. Add to that a public far more resistant to seeing troops die on foreign soil, and any country that feels a need to use force extraterritorially doesn’t have many options if it is going to maintain its place in the world. AWS could be the answer to a lot of problems, if the ethical issues of using them in the first place do not outweigh their usefulness.

Second, I think the idea that legal rules cannot be converted into algorithms that a machine could understand is ridiculous; Arkin has already shown this is possible in his book Governing Lethal Autonomous Robots. The issue really goes beyond the rules. It is, frankly, easy to programme a system with ‘IF civilian, THEN do not shoot’, for example. The difficulty is recognising what a civilian is. An international armed conflict, where the enemy wears an identifying uniform, is clearly less problematic: an AWS that recognises the enemy uniform could fire. A non-international armed conflict between state and non-state actors is trickier – how do you identify a militant or terrorist when they dress like civilians? There are suggestions in the literature of nanotechnology sensors identifying metallic footprints, but this doesn’t help an AWS in an area where civilians carry guns for status or personal protection. It seems the only real identifying feature of enemies hiding amongst civilians is hostile intent. A robot detecting emotion is clearly difficult – but this is being worked on. Perhaps waiting for hostile action would be better: if an AWS detects somebody firing at friendly forces, that person has self-identified as an enemy and a legitimate target, and an AWS firing at them would cause no legal issues regarding distinction.
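To make the point concrete, the ‘easy’ and ‘hard’ halves of the problem can be separated in a deliberately simplified, purely illustrative sketch (every name and field below is my own hypothetical invention, and the classification step is precisely the part nobody yet knows how to build reliably):

```python
from enum import Enum, auto

class Status(Enum):
    COMBATANT = auto()
    UNKNOWN = auto()

def classify(person: dict) -> Status:
    """The genuinely hard problem: these checks are placeholders.
    Uniform recognition only works where the enemy marks itself;
    otherwise we fall back on observed hostile action."""
    if person.get("wears_enemy_uniform"):
        return Status.COMBATANT
    if person.get("firing_at_friendly_forces"):
        # Hostile action: the person has self-identified as an enemy.
        return Status.COMBATANT
    # Anyone not positively identified is presumed protected.
    return Status.UNKNOWN

def may_engage(person: dict) -> bool:
    # The 'easy' rule: IF civilian (or unknown), THEN do not shoot.
    return classify(person) == Status.COMBATANT
```

The trivial final rule illustrates why the legal-encoding objection misses the mark: the entire difficulty lives inside `classify`, not in the IF/THEN.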

Regarding proportionality, Schmitt and Thurnher suggest that this could be turned into an algorithm by re-purposing collateral damage estimation technologies to give a single value, which could be weighed against a military advantage value calculated by commanders assigning values to enemy objects and installations. In terms of precautions in attack, most of these obligations would, I think, fall on commanders, but perhaps the choice of munitions could be delegated to an AWS – for example, if a target is chosen in a street, an AWS could select a smaller munition to avoid including civilians in the possible blast radius.
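As a rough illustration of that weighing (a hedged sketch only – the function names, the single-value inputs and the blast-radius fields are my own assumptions for illustration, not anyone’s actual model):

```python
def proportionality_check(expected_collateral: float,
                          military_advantage: float) -> bool:
    """An attack passes the proportionality test only if the anticipated
    military advantage outweighs the expected collateral damage, both
    expressed here as single pre-computed values."""
    return military_advantage > expected_collateral

def select_munition(munitions: list, blast_radius_limit: float):
    """Precautions in attack: pick the smallest available munition whose
    blast radius stays within a limit set to keep civilians out of the
    affected area; return None if no munition qualifies."""
    suitable = [m for m in munitions if m["blast_radius"] <= blast_radius_limit]
    return min(suitable, key=lambda m: m["blast_radius"]) if suitable else None
```

Reducing both sides of the proportionality equation to single numbers is, of course, exactly the contested step: the code is trivial once commanders have assigned the values.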

So, it is certainly not inconceivable that AWS could comply with the law of armed conflict. In fact, I think they probably could. But massive increases in technology are likely to be required before this is possible.

Joshua Hughes, Lancaster University

As a complete novice to the debates on Autonomous Weapons Systems I enjoyed this article. However, I also completely agree with the criticisms that other group members have made about the article e.g. that some of the arguments are poorly supported. Nonetheless, as a short article that intends to provoke discussion I believe the article is successful and provides a good starting point for people like myself that are not so familiar with the topic.

Liam Halewood, Liverpool John Moores University

As always, if you’re interested in joining just check the Contact tab.