In this section we’re exploring the ethics of unmanned systems. I try to be rigorous about calling them unmanned systems instead of ‘drones’ for a couple of reasons:
- It’s more accurate: When people think of drones they immediately think of remotely piloted planes, which represent only a fraction of the ways unmanned systems are employed by the military and others. The moral implications are not constrained to Predators; and,
- It facilitates moral reflection: The term ‘drones’ is used rather pejoratively in some circles, in a way that undermines ethical engagement by closing, rather than opening, the discussion. #METC is about discussing these issues rigorously and professionally, and the terms we use can either hinder or help us in that pursuit.
That said, Twitter is all about keystroke economy, so I’m sure many of you will continue to use ‘drones’ when what you mean is unmanned systems. I simply hope that from now on, whenever you see or type ‘drone’, your brain flashes ‘unmanned system’. (And if any of you would like to have a chat about the gendered use of ‘unmanned’, please let me know. I’m happy to do so as soon as you pay off my mortgage. Until then, keep it to yourself.)
We’ll be covering a few different topics over the next two nights:
- Unmanned systems are simply a particular type of platform. While there are particular elements of unmanned systems that require specific moral and legal guidance, most of our current moral and legal restraints govern their employment adequately. Remember, we’re talking about the systems themselves in this block; we discussed the targeting question in our previous two sessions. The issue is what is morally unique about having a pilot outside the cockpit, a person away from the IED, etc.; most of the questions concerning lethality and the other sorts of things unmanned systems can do have already been figured out, because we’ve already been doing them in combat.
- Even ‘unmanned systems’ is an inaccurate term; ‘remotely piloted’ is most precise for aviation assets. Who cares? We do, because while many moral questions have already been addressed with existing manned platforms, there are moral implications to having combatants physically removed from the battlespace. These questions explode in Technicolor when we shift the conversation from ‘unmanned’ to autonomous. Our reading from Ronald Arkin explores this issue.
- While it is important to focus on the limitations and moral challenges of unmanned systems, we will also explore the ways in which unmanned systems are morally preferable to their manned counterparts. (All the pilots scream a hearty “HELL NO!”, but there are strong moral arguments in their favor, as BJ Strawser lays out in his reading for this section.)
- We talked about accountability when discussing targeted killings, but unmanned systems raise a similar issue: our moral commitments are socially created. The employment of unmanned, potentially autonomous systems raises important questions about how the United States (or any democracy) undergoes the process of public will formation with respect to moral questions. To phrase it as a question: how do we as a society think responsibly about the limits we place on ourselves in war?
We’ll run a poll tonight on how we want to run class tomorrow, given the State of the Union. I think these addresses are pretty important to watch, especially as they contribute to topic area #4, but we’ll figure it out.