Which Ethics? — Virtue Ethics

January 4, 2016 | Jim Burrows

Having looked at deontological and consequentialist ethics, we come to the third major class of normative ethical systems, and perhaps the oldest: Virtue Ethics. Whereas deontology and consequentialism provide rules for behavior based on principles and consequences, respectively, Virtue Ethics focuses on the character of the actor. It asks, and attempts to answer, "What sort of a person ought I to be, and how would such a person act?"

Shortly after I started this effort, I tentatively concluded that Virtue Ethics was a more promising approach to making Personified Systems—systems that we deal with more as we do with other people than as we do with tools—trustable and worthy of trust. I then spent many weeks looking into which virtues matter, and how we might operationalize and implement them. In the last few weeks, I have been revisiting the suitability of the three classes of normative ethics. (See "Which Ethics? — Deontology" and "Which Ethics? — Consequentialism".) In doing so, I found a nearly 25-year-old article by James Gips of Boston College entitled "Towards the Ethical Robot" that gives an excellent overview of the whole subject.

After the inevitable Asimov Three Laws citation, Gips starts rather precisely with the issue that I have regarding "personified systems", as our systems start behaving more like people: "[W]e want our robots to behave more like equals, more like ethical people". We have thousands of years of experience interacting with other people, and a detailed set of expectations as to how they should act; systems that act like people will inevitably be judged in the context of those expectations. This is what led me to ask what those expectations are, and how non-intelligent systems that merely act and interact in human-like ways can be made to live up to them.

I'd love to say that Gips agrees with much of what I have been doing and writing for the last several months, but given that he wrote it all a quarter of a century ago, I'm pretty much forced to say that it is I who agree with him. Having given an overview of Deontological and Consequentialist ethics, Gips writes:

“On what type of ethical theory can automated ethical reasoning be based?

“At first glance, consequentialist theories might seem the most "scientific", the most amenable to implementation in a robot. Maybe so, but there is a tremendous problem of measurement. How can one predict "pleasure", "happiness", or "well-being" in individuals in a way that is additive, or even comparable?

“Deontological theories seem to offer more hope. The categorical imperative might be tough to implement in a reasoning system. But I think one could see using a moral system like the one proposed by Gert as the basis for an automated ethical reasoning system. A difficult problem is in the resolution of conflicting obligations. Gert's impartial rational person advocating that violating the rule in these circumstances be publicly allowed seems reasonable but tough to implement.

“The virtue-based approach to ethics, especially that of Aristotle, seems to resonate well with the modern connectionist approach to AI. Both seem to emphasize the immediate, the perceptual, the non-symbolic. Both emphasize development by training rather than by the teaching of abstract theory. Paul Churchland writes interestingly about moral knowledge and its development from a neurocomputational, connectionist point of view in "Moral Facts and Moral Knowledge", the final chapter of [Churchland 1989].”

Since the systems of his day were not yet up to voice and face recognition, natural human-machine dialog, driving cars, and so forth, Gips did not extend his thinking to merely "personified" systems. I will add, though, that virtue ethics has value even in the sub-AI world of programmer-driven systems: software architects and implementers can ask at each design juncture, "What would a trustworthy system do?" or "What would be the candid (or discreet, etc.) thing for the system to do?"

Gips gives a good summary of various systems of virtues:

“Plato and other Greeks thought there are four cardinal virtues: wisdom, courage, temperance, and justice. They thought that from these primary virtues all other virtues can be derived. If one is wise and courageous and temperate and just then right actions will follow.

“Aquinas thought the seven cardinal virtues are faith, hope, love, prudence, fortitude, temperance, and justice. The first three are "theological" virtues, the final four "human" virtues.

“For Schopenhauer there are two cardinal virtues: benevolence and justice.”

To this I would add what pop culture often takes to be the exemplar of virtue, the Eagle Scout, whose character the twelve points of the Scout Law sum up as follows:

“A scout is trustworthy, loyal, helpful, friendly, courteous, kind, obedient, cheerful, thrifty, brave, clean, and reverent.”

While they do not write specifically about "virtues", the adherents of the "Moral Foundations" school of thought started by social psychologist Jonathan Haidt explain human morality in terms of the evolution of five or six underlying psychological values or judgments (from "MoralFoundations.org"): Care/harm, Fairness/cheating, Loyalty/betrayal, Authority/subversion, and Sanctity/degradation.

And, tentatively, a sixth: Liberty/oppression.

It is fairly easy to see how each of these ties to the virtues of the Eagle Scout, for instance.
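To make that tie concrete, here is one hypothetical mapping, sketched in Python. The groupings are my own reading, not Haidt's or the Scouts':

```python
# A purely illustrative mapping (my groupings, not Haidt's or the BSA's)
# from each Moral Foundation to the points of the Scout Law it most
# resembles.
FOUNDATION_TO_SCOUT_VIRTUES = {
    "care/harm":            ["kind", "helpful", "friendly"],
    "fairness/cheating":    ["trustworthy", "courteous"],
    "loyalty/betrayal":     ["loyal"],
    "authority/subversion": ["obedient"],
    "sanctity/degradation": ["clean", "reverent"],
    "liberty/oppression":   ["brave"],  # the tentative sixth foundation
}
```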

One thing worth noting is that Aristotle conceived of each virtue as a middle ground between two vices, one of excess and the other of deficiency. Thus for him courage is the mean between recklessness and cowardice, generosity the mean between wasteful excess and stinginess, and so forth. This idea of virtue as a Golden Mean is reflected in many ethical systems, such as Taoism, Buddhism's Middle Way, and Confucius's Doctrine of the Mean. Many others, though, including the Moral Foundations theorists, see the world in more black-and-white, good-and-bad dichotomies; such dualistic ethics has dominated the Abrahamic religions and other Middle Eastern traditions such as Zoroastrianism.
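To see how the Golden Mean might be operationalized, consider scoring a behavioral trait by its distance from the midpoint between the two vices, rather than against a single good/bad threshold. The Python sketch below is purely my own illustration of that idea; the trait scale and scoring function are assumptions, not anything Gips or the theorists above propose:

```python
from dataclasses import dataclass

@dataclass
class MeanVirtue:
    """A virtue modeled, per Aristotle, as the mean between two vices."""
    name: str
    deficiency: str  # the vice of deficiency, e.g. "cowardice"
    excess: str      # the vice of excess, e.g. "recklessness"
    low: float       # trait level at which the deficiency is total
    high: float      # trait level at which the excess is total

    def score(self, trait_level: float) -> float:
        """1.0 at the mean, falling linearly to 0.0 at either vice."""
        mean = (self.low + self.high) / 2
        half_range = (self.high - self.low) / 2
        deviation = abs(trait_level - mean) / half_range
        return max(0.0, 1.0 - deviation)

courage = MeanVirtue("courage", "cowardice", "recklessness", low=0.0, high=1.0)
print(courage.score(0.5))   # 1.0 -- at the mean: perfect courage
print(courage.score(0.75))  # 0.5 -- drifting toward recklessness
```

A dichotomous, good/bad scheme would instead score the trait monotonically, which is precisely the difference between the two families of systems described above.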

Gips' observation that Virtue Ethics "seems to resonate well with the modern connectionist approach to AI" seems particularly pertinent today given the recent explosive growth in Machine Learning technologies. This leads us to what may be the major shortcoming of a Virtue approach to machine ethics: accountability, that is, the system's ability to explain why it took the actions it did.

The rules governing a Deontological system that uses a language such as Bringsjord's DCEC can readily be mapped to English, as can the specific consequences that caused Winfield's Consequence engine to reject a specific alternative. The reasoning of sophisticated Deep Learning systems, however, is opaque. The very attributes that Gips cites as matching the connectionist or ML approach, that it "emphasize[s] the immediate, the perceptual, the non-symbolic [and] development by training rather than by the teaching of abstract theory", run a substantial risk of making it hard for the system to explain its actions.

Still, it seems to me that Virtue might be amenable to, for instance, the ML and principle-generating methodology of the Andersons' GenEth process, as discussed in "Which Ethics? — Deontology". GenEth describes events in terms of features that are present or absent to varying degrees, measured as numbers ranging, for instance, from -2 to +2. Its intent is for professional ethicists to train the system by supplying a number of cases along with their judgment as to the proper course of action. An analogous approach that lets the system recognize the applicability of the various virtues to situations would seem to make sense.
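As a minimal sketch of what that analogous approach might look like, here is a toy recognizer in Python. The feature names, the per-virtue average profile, and the scoring threshold are all my own assumptions for illustration, not details of the Andersons' actual GenEth system:

```python
from collections import defaultdict

# Each case is graded, GenEth-style, on ethically relevant features with
# integer degrees from -2 (strongly absent) to +2 (strongly present).
# The feature names below are invented for illustration.

class VirtueRecognizer:
    """Learns a per-virtue feature profile from ethicist-labeled cases."""

    def __init__(self):
        self._sums = defaultdict(lambda: defaultdict(float))
        self._counts = defaultdict(int)

    def train(self, case: dict, virtues: list):
        """An ethicist labels a training case with the virtues it engages."""
        for virtue in virtues:
            self._counts[virtue] += 1
            for feature, degree in case.items():
                self._sums[virtue][feature] += degree

    def applicable(self, case: dict, threshold: float = 1.0) -> list:
        """Return the virtues whose mean profile correlates with the case."""
        hits = []
        for virtue, sums in self._sums.items():
            n = self._counts[virtue]
            score = sum(case.get(f, 0) * (s / n) for f, s in sums.items())
            if score >= threshold:
                hits.append(virtue)
        return hits

recognizer = VirtueRecognizer()
recognizer.train({"harm_risked": 2, "promise_kept": 0}, ["courage"])
recognizer.train({"harm_risked": -1, "promise_kept": 2}, ["trustworthiness"])
print(recognizer.applicable({"harm_risked": 2, "promise_kept": 1}))
# ['courage'] -- the case fits the learned profile for courage
```

A real system would of course need far richer features and a real learning algorithm, but the shape of the workflow, ethicists grading cases and the system generalizing from them, is the same.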

If deontological rules, prima facie duties, and consequences are to be laid out explicitly, then it would appear to be unavoidable that the body of such rules will become huge. A given sophisticated system might operate with thousands, likely many thousands, of duties, obligations, rules, etc. An advantage of Virtue ethics might well be to offer an organizing principle for these rules.

If each learned or generated rule is created in the context of a ruling virtue, then the system might be able to explain its choices and actions by saying that it was motivated by "loyalty, specifically, a rule arising from the case where…" and citing the training case or cases that were the major contributors to a specific learned pattern. I do not claim that tracking such information will be easy, but explaining behavior based on ML-generated (especially Deep Learning) patterns is inherently difficult in and of itself, so if the language of virtues can help to organize the ethical reasoning, it would be a great help.
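A hypothetical data structure for that kind of virtue-plus-provenance bookkeeping might look like the following; the rule, the case, and all the names here are invented for illustration:

```python
from dataclasses import dataclass, field

@dataclass
class LearnedRule:
    """A generated rule tagged with its ruling virtue and its provenance."""
    virtue: str      # the organizing virtue, e.g. "loyalty"
    condition: str   # human-readable trigger for the rule
    action: str      # what the rule tells the system to do
    source_cases: list = field(default_factory=list)  # training cases behind it

    def explain(self) -> str:
        cases = "; ".join(self.source_cases)
        return (f"I was motivated by {self.virtue}, specifically a rule "
                f"arising from: {cases}. When {self.condition}, "
                f"I {self.action}.")

# Hypothetical example of a rule learned from ethicist-labeled cases.
rule = LearnedRule(
    virtue="loyalty",
    condition="a third party requests a user's private data",
    action="decline and notify the user",
    source_cases=["case 112: caregiver asks for a patient's diary"],
)
print(rule.explain())
```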

Given all of this, I conclude that Virtue ethics is applicable from the least intelligent personified systems up through the hypothetical future Artificial General Intelligences sophisticated enough to act as Autonomous Moral Agents. But even more, I can see arguments for believing that a hybrid system, combining Virtue ethics with some of the best work being done in Deontological and Consequentialist Machine Ethics, could be extremely powerful. I will address this idea in my next couple of blog postings.