Which Ethics? — Deontology

December 10, 2015 | Jim Burrows

Deontology is an approach to Normative Ethics based upon the premise that what a person ought to do is determined by some set of rules or principles. It includes codes of ethics like the Ten Commandments and principles such as Kant's Categorical Imperative (“act only in accordance with that maxim through which you can at the same time will that it become a universal law”; see the Stanford Encyclopedia of Philosophy for more on the Categorical Imperative, Kant's Moral Philosophy, and Deontology in general). Deontological ethics have been associated with automated systems since at least the advent of Isaac Asimov's "Three Laws of Robotics".

It is, perhaps, worth noting that Asimov's Three Laws are not themselves a particularly good system in the real world, as Asimov himself pointed out on occasion. He formulated them not to control actual robots, but as a source of conflict to drive the plots of his stories. Each story in which they featured centered on some oversight, self-contradiction, or conflict among the Three Laws. It was believable that people would some day attempt to control robots using a mechanism such as the Three Laws, and that the set they chose, while it might appear adequate, would be sufficiently flawed to offer all the conflict he needed to write a large number of stories.

Bringsjord, et al.

One of the strongest contemporary advocates of a deontological approach to "roboethics" and to regulating the behavior of AIs, robots, and other automated systems is Selmer Bringsjord. Bringsjord comes from the older "logicist" school of AI. He describes this perspective (and calls for it to become an independent discipline, separate from all other AI efforts) in a paper entitled "The Logicist's Manifesto", in which he characterizes Logicist AIs (LAIs) by three defining attributes.

Based on this top-down, logic-driven understanding of what personhood is, and of how artificial persons can be created, Bringsjord and his colleagues have created a language and system called "Deontic Cognitive Event Calculus" (DCEC). It supports propositions of the sort he mentions ("X knows/believes/intends/is obliged to", and so forth) and allows them to be combined, so that a statement that would be expressed in English as "If you come across a wounded soldier, you are obliged to add getting him to a MedEvac unit to your system of goals and continue to act upon those goals" can be expressed algorithmically.
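
To give a concrete feel for that kind of statement, here is a minimal sketch in Python. It is my own toy illustration, not actual DCEC syntax or the RAIR Lab's implementation: modal operators such as "knows" and "is obliged to" are represented as plain data objects, and a single hypothetical rule adds the MedEvac obligation to an agent's goals.

```python
# Toy sketch (NOT actual DCEC syntax): modal operators as data objects,
# combined by a simple rule of the "wounded soldier" sort quoted above.
from dataclasses import dataclass, field

@dataclass(frozen=True)
class Proposition:
    text: str                       # e.g. "a wounded soldier is present"

@dataclass(frozen=True)
class Knows:
    agent: str
    prop: Proposition

@dataclass(frozen=True)
class Obliged:
    agent: str
    goal: str                       # e.g. "get the soldier to a MedEvac unit"

@dataclass
class Agent:
    name: str
    beliefs: set = field(default_factory=set)
    goals: list = field(default_factory=list)

    def perceive(self, prop: Proposition):
        # Perceiving a fact makes it known to the agent.
        self.beliefs.add(Knows(self.name, prop))

    def apply_obligation_rule(self):
        # Hypothetical rule: "If you come across a wounded soldier, you are
        # obliged to add getting him to a MedEvac unit to your goals."
        trigger = Proposition("a wounded soldier is present")
        if Knows(self.name, trigger) in self.beliefs:
            obligation = Obliged(self.name, "get the soldier to a MedEvac unit")
            if obligation.goal not in self.goals:
                self.goals.append(obligation.goal)

robot = Agent("R1")
robot.perceive(Proposition("a wounded soldier is present"))
robot.apply_obligation_rule()
print(robot.goals)   # ['get the soldier to a MedEvac unit']
```

In the real DCEC these operators belong to a formal, proof-theoretic calculus rather than ad hoc objects; the sketch only shows the general shape of the combination.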

Using this system, they claim to have made a number of major advances toward creating persons, and even persons with ethical reasoning and motivations. Here are three videos illustrating some of their claimed accomplishments: "Self awareness", "solving moral dilemmas", and "Akrasia in robots".

Please note that I have referred to them as "claimed accomplishments" because there are other researchers and theorists who might explain what they show in quite different terms. With the caveat that I don't share Bringsjord's Logicist perspective, allow me to explain what each of these is.

In the first video, we see the experiment that was announced as demonstrating the rudiments of "self-awareness" in a robot. The demonstration is loosely based on the "Three Wise Men" puzzle that has been around for a long time and is described in the "Logicist's Manifesto". Three robots have been told that two of them have been "given a dumb pill", which will render them unable to speak, and that the third was "given a placebo". They are then asked, "Which pill did you receive?" One of them says, "I don't know," hears himself, and then corrects himself: "Now I know." This, they argue, demonstrates awareness of self.

In the second video, a collection of goals and methodologies (and some other unspecified instructions) results in a demonstration of how priorities drive decision making and new strategies emerge when the priorities are equal.
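
As a rough illustration of that general idea, and nothing more, the following sketch shows priority-driven action selection with a fallback strategy that kicks in when the top priorities tie. The goals, priorities, and actions are entirely hypothetical and are not taken from the PAGI World demonstration.

```python
# Minimal sketch of priority-driven action selection. A clear winner is
# acted on directly; a tie triggers a different strategy (here: a plan
# that tries to serve every tied goal). All names and numbers are made up.

def choose_action(goals):
    """goals: list of (name, priority, action) tuples; higher priority wins."""
    top = max(priority for _, priority, _ in goals)
    leaders = [g for g in goals if g[1] == top]
    if len(leaders) == 1:
        return leaders[0][2]                  # clear winner: take its action
    # Tie: fall back to a strategy addressing every tied goal.
    return "plan satisfying: " + " and ".join(name for name, _, _ in leaders)

print(choose_action([("rescue A", 3, "go to A"), ("rescue B", 1, "go to B")]))
print(choose_action([("rescue A", 3, "go to A"), ("rescue B", 3, "go to B")]))
```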

The third video is said to demonstrate "akrasia", that is, acting against one's own better judgment. Two robots, tagged with large red or blue dots that identify their factions, enact a play in which one of them is a prisoner and the other a guard. They then "hurt" or "refrain from hurting" each other by choosing whether or not to deliberately collide. One really must know the details of the scenario and setup in order to see how self-interest, self-awareness, and akrasia are manifested in this little play.

Regardless of whether these videos actually demonstrate the philosophical points that they are trying to make, the DCEC language and system is proving to be a convenient and concise way to model some types of behavior and control systems. Given that, deontology should not be written off as an approach to the normative ethics of personified systems. Its applicability to full AIs may, however, be more controversial, at least in the case of the work of Bringsjord and his team.

Like his intentionally provocative "Logicist's Manifesto", many of his other writings are, if not controversial, at least somewhat provocative. In their semi-formal paper "Piagetian Roboethics via Category Theory: Moving Beyond Mere Formal Operations to Engineer Robots Whose Decisions are Guaranteed to be Ethically Correct", for instance, Bringsjord et al. tie their work to Jean Piaget's fourth stage of logical development, but leave out all reference to the developmental nature of Piaget's work other than to say that a provably correct ethical robot would have to be based on a yet-to-be-developed fifth or higher stage. This is in keeping with the Logicists' top-down approach.

At the heart of their notion of a stage of logical reasoning more mature than Piaget's fourth (adult) stage is category theory, which they say will allow the robot to derive its own codes of conduct. This is needed, they argue, because fourth-stage reasoning is inadequate. They give the following example:

"Imagine a code of conduct that recommends some action which, in the broader context, is positively immoral. For example, if human Jones carries a device which, if not eliminated, will (by his plan) see to the incineration of a metropolis, and a robot (e.g., an unmanned, autonomous UAV) is bound by a code of conduct not to destroy Jones because he happens to be a civilian, or be in a church, or at a cemetery ... the robot has just one shot to save the day, and this is it, it would be immoral not to eliminate Jones."

This example is in keeping with uses for automated systems that Bringsjord sees as vital, as outlined in an opinion piece that he wrote—"Only a Technology Triad Can Tame Terror"—for the Troy Record, and which is referred to in a presentation that he gave for the Minds & Machines program at RPI in 2007. In that piece he concluded that the only protection that we can have against terrorism and mass shootings such as the one at Virginia Tech is to build a triad of technologies:

"Our engineers must be given the resources to produce the perfected marriage of a trio: pervasive, all-seeing sensors; automated reasoners; and autonomous, lethal robots. In short, we need small machines that can see and hear in every corner; machines smart enough to understand and reason over the raw data that these sensing machines perceive; and machines able to instantly and infallibly fire autonomously on the strength of what the reasoning implies."

Given that he believes this sort of technology is necessary, it is easy to see why Bringsjord and company insist upon a system of logically provable ethics. The effort to create fully Logicist AIs that are controlled by an explicit system such as the one that implements their DCEC language has been the main focus of their work for several years. Perhaps the best introduction to their work is their article "Toward a General Logicist Methodology for Engineering Ethically Correct Robots", published in the July/August 2006 issue of the IEEE Intelligent Systems journal, which can be downloaded from the thumbnail to the right.

The home page for the Rensselaer Artificial Intelligence and Reasoning (RAIR) Laboratory contains details on many of their projects, including the DCEC system and Psychometric Artificial General Intelligence World (PAGI World, pronounced “pay-guy”), a simulation environment used for AI and AGI testing (and seen in the second video above).

One criticism of this approach is that Logicist AI is incapable of creating the artificial persons its practitioners are seeking; that is, that LAI is not a valid path to a true Artificial General Intelligence (AGI). One example of this stance is an answer that Monica Anderson recently posted on Quora to the question "What are the main differences between Artificial Intelligence and Machine Learning?" She wrote, in part:

“Machine Learning is the only kind of AI there is.

“AI is changing. We are now recognizing that most things called "AI" in the past are nothing more than advanced programming tricks. As long as the programmer is the one supplying all the intelligence to the system by programming it in as a World Model, the system is not really an Artificial Intelligence. It's "just a program".

“Don't model the World; Model the Mind.

“When you Model the Mind you can create systems capable of Learning everything about the world. It is a much smaller task, since the world is very large and changes behind your back, which means World Models will become obsolete the moment they are made. The only hope to create intelligent systems is to have the system itself create and maintain its own World Models. Continuously, in response to sensory input.

“Following this line of reasoning, Machine Learning is NOT a subset of AI. It really is the ONLY kind of AI there is.”

She closes her reply with the caveat,

“I really shouldn't confuse things but strictly speaking, Deep Learning is not AI either. We are currently using Supervised Deep Learning, which is another (but less critical) programmer's cheat since the "supervision" is a kind of World Model. Real AI requires Unsupervised Deep Learning. Many people including myself are working on this; it is possibly thousands of times more difficult than Supervised Learning. But this is where we have to go.

“Deep Learning isn't AI but it's the only thing we have that's on the path to True AI.”

This is, in ways, similar to an argument that I made a few months ago in an essay titled "AI, a “Common Sense” Approach". By "common sense", I was referring not to the current plain-language meaning of the phrase, but to Aristotle's notion of the "common sense": the human internal faculty that integrates the perceptions of the five senses (sight, hearing, touch, smell, and taste) into a coherent view of the world. This is not accomplished through binary or formal logic, but rather through a system of pattern-matching and learning mechanisms of the sort being explored in Machine Learning research.

Anderson and Anderson

Another team that takes a deontological approach, but one more consistent with Machine Learning, is the husband-and-wife team of computer researcher Michael Anderson and philosopher Susan Leigh Anderson. (Susan's early work appears under her maiden name.)

A good introduction to their work can be found in the Scientific American article "Robot Be Good", which can be downloaded from the thumbnail to the right. Another early roadmap article, "Machine Ethics: Creating an Ethical Intelligent Agent", appears in the Winter 2007 issue of AI Magazine, and a workshop paper entitled "Prima Facie Duty Approach to Machine Ethics and Its Application to Elder Care" was presented at the 2011 AAAI Conference on AI.

Whereas Bringsjord and company have advocated a deontological system of "Divine Command" logic (see the "Introducing Divine-Command Robot Ethics" paper on their site or their chapter in the book, "Robot Ethics"), the Andersons have adopted a "prima facie duty" system.

Their early work was based upon a consequentialist system of ethics, specifically Jeremy Bentham's "Hedonistic Act Utilitarianism". While that work was at least partially successful, it convinced them that a more complex system was needed, one that relied upon and could be explained in terms of deontological principles, and so they turned to W. D. Ross's notion of prima facie duties. Ross's theory is fundamentally deontological (see his entry in the Internet Encyclopedia of Philosophy), although it may utilize some consequentialism-based principles.

Prima facie duty theory holds that there is no single unifying ethical principle such as Bentham's utilitarian principle or Kant's categorical imperative. Rather, it relies on a number of duties that it holds to be real and self-evident (using "prima facie" to mean "obviously true on their face, when first seen"). In Ross's initial version there were seven such duties, which he later reformulated as just four.


Ross acknowledged that there are multiple duties, forcing us at times to determine their importance in each specific case and to prioritize them accordingly. To be useful for autonomous systems, a deciding principle and a consistent methodology for choosing a course of action needed to be developed. The Andersons, utilizing John Rawls' "reflective equilibrium" approach, designed, built, and tested a number of experimental systems, among them the GenEth principle generator and the EthEl system described below.

The GenEth generator can be used to derive the guiding principles from a set of test cases and evaluations of them by trained ethicists. These principles can then be fed into systems such as EthEl and loaded on a Nao robot or other system.

The descriptions of events used in GenEth, EthEl, and the like are tuples representing specific attributes and their impact on both "acceptable" and "unacceptable" judgements. As such, these systems are either lab curiosities or simply development tools for creating the sort of system that Monica Anderson (no relation to Michael and Susan) called "just a program" in the Quora answer cited above. However, the Andersons hope to go further. They are already using Machine Learning to create the principles externally to the robot that will use them. Their next step will be to incorporate that learning into the robot itself. [Update: I have greyed out the previous statement, as it appears that I read too much into a passage they wrote. Unsupervised Machine Learning is not a goal of theirs.]
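
To give a feel for what such a tuple-based representation might look like, here is a rough Python sketch of my own. It is not the actual GenEth encoding; the feature names, weights, and eldercare actions are all hypothetical. Cases are tuples of ethically relevant features, and a weighted "principle" of the general sort one might induce from ethicist-labelled examples ranks two candidate actions.

```python
# Rough sketch (NOT the actual GenEth representation): cases as tuples of
# ethically relevant features, ranked by a weighted principle of the kind
# that might be induced from ethicist-labelled examples. All values are
# hypothetical.

FEATURES = ("harm_prevented", "autonomy_respected", "duty_of_care")

# A "principle" here is just a weight per feature; in a GenEth-like system
# something of this general shape would be derived from cases that trained
# ethicists have judged acceptable or unacceptable.
principle = {"harm_prevented": 3, "autonomy_respected": 2, "duty_of_care": 1}

def score(action_features):
    return sum(principle[f] * v for f, v in zip(FEATURES, action_features))

# Candidate actions in a hypothetical eldercare scenario, each described by
# the degree (-2..2) to which it satisfies or violates each feature.
notify_overseer = ( 2, -1,  2)   # overrides the patient's wish, prevents harm
respect_refusal = (-1,  2,  0)   # honors autonomy, risks harm

best = max([("notify overseer", notify_overseer),
            ("respect refusal", respect_refusal)],
           key=lambda a: score(a[1]))
print(best[0])
```

The point, as described above, is that the principle itself is derived from cases evaluated by trained ethicists, which is part of what keeps the resulting behavior explainable.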

The Andersons' work has, as can be seen above, all been in the area of medical ethics, and as such it is important to them to be able to create systems that not only behave properly but can also explain why they did a certain thing and what principles they were following. This is clearly easier in a Logicist AI environment than in more biologically inspired Machine Learning paradigms, especially the sort of unsupervised deep learning that many expect to be the path toward full AGI. The Andersons' approach is to phase in Machine Learning. It will be interesting to see how far they can go down that path while maintaining the kind of transparency that their current system provides.

Looking Forward

In the end, the Andersons' work has become what you might call a "hybrid/hybrid" approach. In terms of the computer science and technology involved, the programming of the robots themselves takes a largely traditional Logicist AI approach, but [Update: supervised] Machine Learning is certainly key to their methodology. Moreover, from an ethical perspective, they have chosen an approach that is firmly based in the duties of deontology, but which incorporates consequentialist aspects as well. In the next blog posting, "Which Ethics? — Consequentialism", I will look at a more purely Consequentialist approach. After that we will consider virtue ethics, and I will ask if there isn't an overall hybrid approach that allows the unification of many of these divergent systems.