Which Ethics? — Pulling it all together
What type of normative ethics is best suited to Personified Systems, systems that are acting more and more like people? What kind of ethics works for simple software with more human-like user interfaces? For sophisticated AIs, and for the Artificial General Intelligences of the future?
In the last three installments, I've given an overview of what has already been done and written in each of the three main categories of normative ethics (Deontology, Consequentialism, and Virtue Ethics) and how each might apply to the systems of the future. The question now is which of these, or which combination of them, we should use.
This is a question that could form the basis for a major research program, or several, and I cannot hope to settle it during a short sabbatical, in a simple blog. Still, I think that over the last few months I've gotten a real handle on the problem. It seems to me that the Andersons' use of W.D. Ross's prima facie duties framework is highly promising: it is a pluralist, basically deontological system that allows for the inclusion of utilitarian or other consequentialist principles. To that I would add that, since the number of duties can potentially grow quite large, a virtue-based ethic can serve as an organizing and simplifying principle.
Each of the efforts that we've examined in the last few installments provides important tools and insights. Bringsjord and his team at RPI provide us with a formal language, their Deontic Cognitive Event Calculus (DCEC; see the DCEC Formal Specification and “On Deep Computational Formalization of Natural Language”), that allows the expression of propositions about knowledge, beliefs, obligations, and the like.
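To give a flavor of what such a language expresses, here is an illustrative, DCEC-flavored formula. This is a paraphrase for the reader's benefit, not the calculus's official grammar, and the operator signatures are simplified:

```latex
% Illustrative only; DCEC's actual grammar is richer than this.
% "If agent a believes at time t that human h is injured at t,
%  then a is obligated at t to bring it about that h is assisted."
\mathbf{B}(a,\ t,\ \mathrm{injured}(h,\ t)) \rightarrow \mathbf{O}(a,\ t,\ \mathrm{assisted}(h,\ t))
```

The point is that belief (B) and obligation (O) become first-class objects of reasoning, so a system can prove, and explain, why a given obligation holds.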
The Andersons, with their GenEth system, provide a Machine Learning-based mechanism for analyzing and generalizing the reasoning of professional ethicists about ethical dilemma test cases. Both systems allow the principles that drive logicist, programmer-written software to be expressed both formally and in clear English translations, which is important as an aid to accountability.
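To make the shape of that idea concrete, here is a minimal sketch, much simpler than GenEth itself. Actions are scored by integer degrees of satisfaction (positive) or violation (negative) of each prima facie duty; each training case pairs an ethically preferred action with a rejected one; and a “principle” is a set of lower bounds on the per-duty differentials that separates all the pairs. The duty names, scores, and cases below are invented for illustration.

```python
# A minimal sketch inspired by (but far simpler than) the Andersons' GenEth.
from itertools import product

DUTIES = ["non_maleficence", "beneficence", "respect_autonomy"]

# Each training case: (preferred action's duty scores, rejected action's duty scores),
# as judged by professional ethicists. All numbers are invented for illustration.
CASES = [
    ({"non_maleficence": 2, "beneficence": 1, "respect_autonomy": -1},
     {"non_maleficence": -2, "beneficence": 2, "respect_autonomy": 1}),
    ({"non_maleficence": 1, "beneficence": -1, "respect_autonomy": 2},
     {"non_maleficence": 1, "beneficence": 1, "respect_autonomy": -2}),
]

def differential(preferred, rejected):
    """Per-duty difference between the preferred and the rejected action."""
    return {d: preferred[d] - rejected[d] for d in DUTIES}

def satisfies(diff, bounds):
    """A differential satisfies a clause if every duty meets its lower bound."""
    return all(diff[d] >= bounds[d] for d in DUTIES)

def learn_principle(cases, lo=-4, hi=4):
    """Brute-force search for lower bounds that accept every preferred-over-rejected
    differential and reject every reversed (rejected-over-preferred) one."""
    positives = [differential(p, r) for p, r in cases]
    negatives = [differential(r, p) for p, r in cases]
    for bounds_tuple in product(range(lo, hi + 1), repeat=len(DUTIES)):
        bounds = dict(zip(DUTIES, bounds_tuple))
        if all(satisfies(d, bounds) for d in positives) and \
           not any(satisfies(d, bounds) for d in negatives):
            return bounds
    return None  # no single clause separates the cases; GenEth would add more

print(learn_principle(CASES))
```

GenEth itself, as I understand it, uses inductive logic programming and represents a principle as a disjunction of such clauses; the brute-force search above stands in only to show the shape of the idea.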
Winfield's Consequence Engine architecture offers an on-board decision-making approach: a control process runs a duplicate of the system inside an internal simulation of itself and the world around it, and evaluates the predicted results so as to rule out unacceptable actions. (See “Towards an Ethical Robot”.) This contrasts with the approaches of Bringsjord and the Andersons, both of which externalize the ethical reasoning (see Winfield's blog post regarding “Popperian creatures”).
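The pattern is easy to convey with a toy model. The sketch below is not Winfield's code: it is a one-dimensional corridor with invented dynamics, in which an inattentive human walks toward a hole and each of the robot's candidate actions is tried out in simulation, with unacceptable outcomes vetoed before anything is executed.

```python
# A toy illustration of the internal-simulation pattern behind Winfield's
# Consequence Engine. World, actions, and dynamics are invented.

# Candidate actions, mapped to the robot's per-tick displacement.
MOVES = {"stay": 0, "toward_goal": 1, "move_back": -1}

def simulate(robot: int, human: int, hole: int, action: str, steps: int = 3):
    """Toy world model on a 1-D corridor: each tick the robot applies its
    action, then the inattentive human steps toward the hole unless the
    robot occupies that square. Returns the predicted final positions."""
    for _ in range(steps):
        robot += MOVES[action]
        if robot == hole:          # robot has fallen in; stop here
            return robot, human
        nxt = human + 1            # the human always heads hole-ward
        if nxt != robot:
            human = nxt
        if human == hole:          # human has fallen in; stop here
            return robot, human
    return robot, human

def acceptable(robot: int, human: int, hole: int) -> bool:
    """Safety rule applied to predicted outcomes: nobody ends up in the hole."""
    return robot != hole and human != hole

def safe_actions(robot: int, human: int, hole: int):
    """Run every candidate through the simulator; veto unacceptable ones."""
    return [a for a in MOVES
            if acceptable(*simulate(robot, human, hole, a), hole)]

# The robot (at 4) stands between the human (at 2) and the hole (at 5).
print(safe_actions(robot=4, human=2, hole=5))   # ['stay', 'move_back']
```

Note how much the prediction horizon matters: at four steps instead of three, “move_back” would also be vetoed, since the human eventually reaches the hole once the robot steps aside.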
Bringsjord's ethics are strictly deontological, and Winfield's consequentialist. The Andersons' use of Ross's prima facie duties begins to pull these traditions together: it takes a systematic approach to developing and adopting a deciding principle for prioritizing and selecting among the applicable duties, some of which may be consequentialist rather than purely deontological. This is a very sophisticated approach (essentially John Rawls’ “reflective equilibrium”) for an autonomous system to take, but if personified systems are to truly participate in human society, their behavior will need to be both sophisticated and explicable.
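At run time, a deciding principle of this kind can be applied by pairwise comparison. The sketch below (again with invented duty names, bounds, and scores) prefers one action over another when their per-duty differential meets every learned lower bound, and selects the action that defeats all its rivals:

```python
# A hedged sketch of applying a learned deciding principle at run time.
DUTIES = ["non_maleficence", "beneficence", "respect_autonomy"]
BOUNDS = {"non_maleficence": 0, "beneficence": -2, "respect_autonomy": -2}

# Duty scores for the actions currently available (numbers invented).
ACTIONS = {
    "notify_overseer": {"non_maleficence": 2, "beneficence": 1, "respect_autonomy": -1},
    "do_nothing":      {"non_maleficence": -2, "beneficence": -1, "respect_autonomy": 1},
}

def prefers(a: str, b: str) -> bool:
    """True if action a's duty differential over b meets every lower bound."""
    return all(ACTIONS[a][d] - ACTIONS[b][d] >= BOUNDS[d] for d in DUTIES)

def select(actions):
    """Return the action preferred over every rival, if one exists."""
    for a in actions:
        if all(prefers(a, b) for b in actions if b != a):
            return a
    return None  # no clear winner; escalate to a human

print(select(list(ACTIONS)))   # 'notify_overseer'
```

Returning None rather than forcing a choice reflects the explicability requirement: when no action clearly dominates, the system can say so and escalate.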
Balancing that sophistication, especially if it is based upon Machine Learning, with the ability to clearly explain the reasoning behind the system's actions will require serious efforts at simplification. The number of duties, consequences, and principles is bound to grow large, and Machine Learning, especially Deep Learning, can result in systems that are not readily understood from the outside.
How this is all accomplished depends on the architecture and development methodologies used to create specific systems. At one end of the spectrum, we have systems that are created using traditional software engineering methodologies, with human programmers making the decisions and coding the behaviors into the system's software. For them, the virtues provide a way to organize their thinking as they approach each design or implementation decision. They may ask themselves, "To whom is the system being loyal at this time? How are the user's, society's, and our (the manufacturer's) interests being traded off?" or "Is the user's information being handled with discretion at this point?" or "Are we being candid here?" and, in general, "What would a virtuous system do?"
At the other end of the spectrum, one can envision a sophisticated AGI guided by what amounts to an artificial conscience, trained by professional ethicists, evaluating its actions and choices according to internalized principles that are organized and explicable in terms of a system of virtues and specific duties. Current state-of-the-art systems fall somewhere in between, and can draw upon the theories and practices of all of the efforts described in the last few installments.
When I started this effort, I was focused almost entirely upon the role and function of virtue ethics in guiding and improving the behavior of personified systems. As I have evaluated the possibility of actually incorporating deontological and consequentialist principles and prima facie duties, a more complex picture has emerged. I continue to think, as Gips did a quarter century ago, that Virtue Ethics parallels the workings of connectionist models of natural cognition and the latest developments in machine learning, and can serve a very important function. I am even beginning to see what may be the broad outline of an approach to integrating normative ethics into the workings of future full AGIs such that they might be able to act as autonomous moral agents. I will take this up in my next installment.
For now, let us conclude that the answer to "Which ethics?" is "a hybrid approach featuring prima facie duties based upon both deontological and consequentialist principles, organized according to principles derived from the decisions of professional ethicists, and reflecting a set of core virtues". It is not a simple answer, but two or three millennia of philosophy have taught us nothing if not that ethics is a complex matter. The good news is that the precision and formality required to incorporate anything into an automated system force us to clarify our thinking, and approaching machine ethics may aid us in refining our own human ethics.