Which Ethics?
A funny thing happened on the way to the blog. The question of which school of normative ethics best suits personified systems, Artificial General Intelligences, or robots became a whole lot less clear to me than it was when I first started writing my next installment. As a result, this posting is a good deal later than I intended, and it is now the first in a four-part series in which I will re-evaluate the analysis I did a few months ago when this project started.
Determining how the behavior of personified systems can best be made trustworthy depends upon a number of factors. First of all, we need to decide what approach to take: which style of normative ethics best suits personified systems. Beyond that, though, there is the question of how the behavior of systems will evolve over time as they become more intelligent, more autonomous, and more person-like. An approach well suited to mere personified systems may not work nearly as well for AGIs that are capable of autonomous moral agency, and vice versa. Ideally, a single approach would govern both the behavior of autonomous systems and people's expectations of them across that entire evolution. It is, therefore, worth evaluating approaches to normative ethics both in terms of what is suitable today and in terms of how well they adapt over time.
The Three Norms
In this post, I will give a brief overview of the schools of normative ethics referred to in my "A Matter of Semantics: Norms & Agents" posting, along with a few pointers to related outside reading and viewing for those who would like to research along with me. It will be followed by individual posts on each of the three schools, looking at the work being done in that area and discussing its suitability and its short- and long-term prospects.
As described in my "Semantics" posting, normative ethics is generally broken into three broad schools:
- Deontology
- Consequentialism
- Virtue Ethics
Deontology: Driven by rules
Deontology seeks to guide our actions through the application of a set of moral rules or principles. One of the more highly regarded deontological systems is Kant’s “Categorical Imperative”, the principle of universality: “Act only in accordance with that maxim through which you can at the same time will that it become a universal law.” Perhaps even more widely applied are the systems of divine commandments of the various world religions, such as the Ten Commandments, Christ’s two great commandments, and so on.
There are, of course, many other deontological systems. They have in common the focus on establishing a rule or set of rules for judging acts to be moral or immoral.
On the surface, deontological systems would seem to be well suited to controlling the behavior of computer systems, which themselves consist of bodies of computer code. It might be argued, in fact, that such programs are nothing but detailed specifications of rules controlling the behavior of the system. So, if they are already just implementations of systems of rules, why not add rules of ethics?
The problem that emerges, though, is that what computers excel at is combining very simple rules defined in very precise terms, while actual deontological systems depend upon understanding far more complex and nuanced terms. When Kant tells us to act only in ways that we could will everyone else to act, the statement is simple, but the implications are profound. How do we wish others to act? How would an autonomous system know what that is?
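To make that gap concrete, here is a minimal, purely hypothetical sketch of what a naive deontological filter might look like in code. The rule and predicate names are my own invention, not drawn from any of the systems discussed below. Notice that the rule itself is trivial to write; all of the difficulty hides inside the moral predicate it depends on.

```python
# A naive, hypothetical deontological filter. The rule structure is the easy
# part; everything difficult hides inside the undefined moral predicate.

def is_unjust_killing(action, context):
    """Placeholder: distinguishing murder from self-defense, lawful war,
    or judicial execution requires exactly the nuanced judgement that
    simple rule systems lack."""
    raise NotImplementedError("This, not the rule itself, is the hard problem.")

def permitted(action, context):
    """Apply the rule 'do no murder' to a candidate action."""
    return not is_unjust_killing(action, context)
```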
Similarly, the Ten Commandments are often said to tell us "Thou shalt not kill", and yet we have war, self-defense, and executions. Looking more closely at the commandment, we find that it is more accurately translated as "Thou shalt do no murder", where "murder" is defined as unjust killing, while self-defense, lawfully declared war, and executions handed down as sentences by lawful courts are considered "just". How would an autonomous system draw that line?
One system of deontological rules that is often cited with regard to artificial systems is Isaac Asimov's "Three Laws of Robotics", but what this overlooks is that Asimov intentionally designed his rules to be incomplete and contradictory, because their purpose was to set up the conflicts that made good stories. He would never have advocated using them in the real world, for all the reasons shown in his stories and many more that he didn't get around to writing.
There are a number of advocates of a deontological approach to causing autonomous systems to behave ethically. Selmer Bringsjord of the Rensselaer Polytechnic Institute and his associates, for instance, have created a language and a system for expressing the knowledge, goals, and behavior of autonomous systems that they call the "Deontic Cognitive Event Calculus" (DCEC).
The husband and wife team of computer specialist Michael Anderson and philosopher Susan Anderson have been working on methodologies based on the deontological “prima facie duty” theories of W.D. Ross. Recently, they have been exploring the application of Machine Learning to derive such duties.
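As a rough illustration of the general idea, and emphatically not the Andersons' actual method, one could imagine representing each candidate action by how strongly it satisfies or violates a handful of prima facie duties, and then adjusting the relative weights of those duties until the system reproduces expert judgements on recorded cases. The duty names, weights, and update rule below are invented for illustration only.

```python
# Hypothetical sketch: scoring actions against prima facie duties and
# nudging the duty weights to match an ethicist's judgement on past cases.
# All names and numbers are invented for illustration.

DUTIES = ["non-maleficence", "beneficence", "autonomy"]

def score(profile, weights):
    """profile: how much the action satisfies (+) or violates (-) each duty."""
    return sum(weights[d] * profile[d] for d in DUTIES)

def choose(actions, weights):
    """Pick the action with the best weighted balance of duties."""
    return max(actions, key=lambda name: score(actions[name], weights))

def update(weights, chosen_profile, correct_profile, lr=0.1):
    """Shift weight toward duties the expert-preferred action satisfied better."""
    for d in DUTIES:
        weights[d] += lr * (correct_profile[d] - chosen_profile[d])
    return weights
```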
Useful links on deontology, both specific to autonomous systems and in general:
- Selmer Bringsjord and his associates' "Deontic Cognitive Event Calculus" system.
- The Andersons' paper, "Machine Ethics: Creating an Ethical Intelligent Agent".
- The Stanford Encyclopedia of Philosophy has a good article on Deontology, and of course, many others.
- The University of Tennessee at Martin's Internet Encyclopedia of Philosophy has an article on W.D. Ross that includes a discussion of his ethical system and prima facie duties.
Consequentialism: Means to an end
The second approach is to consider not rules of behavior for the actor, but the consequences of their actions for others and for themselves. Consequentialist systems determine the ethics of an action by the results that it brings about. Examples of consequentialist ethical systems are utilitarianism, which calls for maximizing human wellbeing, and hedonism, which calls for maximizing pleasure.
Prof. Alan Winfield of the University of the West of England, Bristol, and his students and colleagues have done a good deal of work on what he calls the "Consequence Engine", an approach aimed, first of all, at making robots safe to be around, but which he hopes will also enable them to act ethically. His design uses a secondary system capable of simulating the robot and its environs, which is used to test out various alternative actions, ruling out those choices that would result in harm to humans. It then turns the remaining set of actions over to the robot's main control system to pick the preferred action. In essence, the Consequence Engine's job is to ensure that the famous dictum to "first, do no harm" is followed.
The problem with this approach, as Prof. Winfield himself points out in many of his talks, is that simulating the robot, its environment, and its actions is hard. Since there are many kinds of harm to humans that must be prevented, there are many aspects of the world that must be simulated for the engine's predictions to be fully effective. Still, by creating a framework that can be implemented with greater and greater detail and fidelity to the real world, this approach provides an incremental mechanism that can be tuned and improved over time, rendering a consequentialist ethics for robots, AIs, or AGIs at least plausible.
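The overall loop is easy to sketch, even though the simulator it depends on is the hard part. The following is a minimal outline of the idea as I understand it, not Prof. Winfield's actual implementation; the function names and the division into simulate, causes_harm, and prefer are my own simplification.

```python
# Minimal sketch of a consequence-engine style action filter.
# The simulator and harm test are stand-ins; in practice they are the hard part.

def predict_outcome(action, world_state, simulate):
    """Run the candidate action forward in an internal model of the world."""
    return simulate(world_state, action)

def safe_actions(candidates, world_state, simulate, causes_harm):
    """Rule out any action whose simulated outcome harms a human."""
    return [a for a in candidates
            if not causes_harm(predict_outcome(a, world_state, simulate))]

def act(candidates, world_state, simulate, causes_harm, prefer):
    """Hand the remaining, safe actions to the robot's main controller."""
    remaining = safe_actions(candidates, world_state, simulate, causes_harm)
    return prefer(remaining) if remaining else None
```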
Useful links:
- Prof. Winfield's blog posts and videos are worth reading and viewing.
- The BBC has a short page discussing Consequentialism.
- The BBC article, in turn, refers to a far lengthier article at UTM's Internet Encyclopedia of Philosophy.
Virtue Ethics: Virtual character
The third approach to ethics is to consider the character of the actor rather than the effects of the action or the duties and rules that determine it. Virtue-based ethics goes back to Aristotle. With its emphasis on character and judgement, virtue ethics is often thought of as the least likely fit for autonomous systems. Still, I, like several before me, see it as a plausible model both for the low-intelligence personified systems of the near term and for eventual highly intelligent AGIs. I will explore this in the last installment of this four-part "Which Ethics?" series. Until then, here are a few references as food for thought.
Anthony Beavers, writing in Patrick Lin et al.'s book "Robot Ethics: The Ethical and Social Implications of Robotics", covers the role of virtue ethics in his chapter "Moral Machines and the Threat of Ethical Nihilism". Beavers cites, among others, an early and influential paper, James Gips' "Towards the Ethical Robot", in which Gips covers all three schools of normative ethics. Of virtue ethics, Gips writes:
“The virtue-based approach to ethics, especially that of Aristotle, seems to resonate well with the modern connectionist approach to AI. Both seem to emphasize the immediate, the perceptual, the non-symbolic. Both emphasize development by training rather than by the teaching of abstract theory.”
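Gips' point about "development by training" can be made concrete with a toy example: rather than encoding an abstract rule, a system could be trained on exemplars of behavior that people have judged virtuous or not, and generalize from them. The sketch below is purely illustrative; the features, data, and use of scikit-learn are my own assumptions, standing in for whatever learning mechanism an actual system might use.

```python
# Toy illustration of learning a disposition from labeled exemplars
# rather than encoding an abstract rule. Features and labels are invented.

from sklearn.linear_model import LogisticRegression

# Each exemplar: (harm_risk, honesty, helpfulness), label 1 = judged virtuous
exemplars = [
    ((0.9, 0.2, 0.1), 0),
    ((0.1, 0.9, 0.8), 1),
    ((0.2, 0.8, 0.9), 1),
    ((0.8, 0.3, 0.2), 0),
]
X = [features for features, _ in exemplars]
y = [label for _, label in exemplars]

model = LogisticRegression().fit(X, y)
print(model.predict([(0.15, 0.85, 0.7)]))  # classify a new situation
```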
Philosopher Daniel Hicks has written an article, "Virtue Ethics for Robots", in which he criticizes both deontological and utilitarian (consequentialist) principle-based systems for failing to deal fully with moral dilemmas and with appropriate ethical responses to them, and for lacking what he calls the "tactical creativity" needed to handle specific situations. Hicks is not, specifically, an expert in AI and machine ethics, and he refers repeatedly to the notion that robots "follow their programming". It is not clear that he understands, as Gips did, the extent to which Machine Learning systems create their own programming.
Shannon Vallor of Santa Clara University has written a paper, "The Future of Military Virtue: Autonomous Systems and the Moral Deskilling of the Military", in which she considers the role of autonomous systems in combat, and what she calls the "moral deskilling" that can result. Her conclusion is that we should, perhaps, restrict the deployment of automated methods of warfare to appropriate contexts, and work to increase the ethical skills of the human combatants.
The ethics of autonomous military systems is a particularly tough area. On the one hand, if you are going to trust robots to use lethal force, there are good reasons to insist, as the military does, upon explicit and provable ethical rules, rules such as Bringsjord's DCEC. The price of error is very high. On the other hand, virtue, at least as much as deontological rules, has always driven the world's militaries, and it should be considered.
Useful links:
- Anthony Beavers' article "Moral Machines and the Threat of Ethical Nihilism".
- James Gips' 1991 paper, "Towards the Ethical Robot".
- Daniel Hicks' article, "Virtue Ethics for Robots".
- Shannon Vallor's paper "The Future of Military Virtue".
- On a lighter note, there is a YouTube video in which two "Robots Discuss Virtue Theory".