A Matter of Semantics: Norms & Agents
In order to talk about making personified systems behave in trustworthy ways, which is the purpose of this blog, we need to lay out norms for their behavior. In the tradition of human ethics, there are three main schools of "Normative Ethics": "Deontology", "Consequentialism", and "Virtue Ethics". This posting will attempt to define those terms and lay the groundwork for relating them to "Personified Systems" and to "Artificial General Intelligences" (AGIs) that qualify as fully "Autonomous Moral Agents" (AMAs).
Normative Ethics - The study of ethics has several sub-disciplines. The one we are most concerned with in this effort is "Normative Ethics", the study of systems of norms for behavior. Traditionally, these norms are thought of as governing human behavior, but in our case we are looking to define the norms of partially autonomous systems created by humans. There are at least three domains in which these norms might apply: the behavior of the people creating the systems, the norms built into the systems, and the norms created by the systems themselves, once they are capable of it. It is the second of these, the norms that govern the behavior of systems that are not themselves AMAs, that we are most concerned with, though the other two senses will be referred to occasionally.
Personified Systems - While I've defined it in the Welcome post, this being an "A Matter of Semantics" posting, let me be sure to define "personified system" here. As I have been using the term, a system is "personified" if we interact with it more the way we would interact with another person than with a tool, but the system itself is not a person, an AGI, or an AMA (see below). The examples I usually give are virtual assistants such as Siri, Cortana, and Alexa (the assistant behind the Amazon Echo), which we talk to; autonomous systems that drive our cars and fly our planes; caretakers; and the like. The blog post "Roadmap, Part 1" has a fuller list.
Artificial General Intelligence - An artificial general intelligence is a system that is fully capable of performing any cognitive function that a human being could perform. This is sometimes referred to as "strong AI", to distinguish it from specialized systems that can perform only some functions, such as playing chess or "Jeopardy", or refining the results of a user's search query. In many ways, it could be argued that an AGI should be considered a "person" rather than merely a "personified system" that behaves somewhat like a person.
Autonomous Moral Agent - An autonomous moral agent (AMA) is an entity or system capable of acting (as an agent) on its own (autonomously) and of making its own moral judgements. AMAs can be held responsible for their actions. Since moral judgement is a human cognitive function, presumably a fully realized AGI would also be capable of acting as an AMA, while personified systems, by definition, lack that capacity.
Deontology - A deontological normative system is one based on rules of behavior that define the actor's moral duties or obligations. These range from systems of laws such as the "Ten Commandments" to Kant's "Categorical Imperative". The focus is upon the actions of the agent and the rules that govern them.
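To make that concrete for a personified system, here is a minimal sketch of a deontological check: an action is permitted only if it violates no duty, whatever its outcome. Everything in it is hypothetical; the Action fields and the rules are invented purely for illustration, not drawn from any real system.

```python
# A minimal sketch of a deontological check for a personified system.
# The Action fields and the rules are hypothetical, for illustration only.

from dataclasses import dataclass

@dataclass
class Action:
    description: str
    discloses_user_data: bool = False
    deceives_user: bool = False

# Each rule encodes a duty: a predicate the action must satisfy.
RULES = [
    ("do not disclose user data", lambda a: not a.discloses_user_data),
    ("do not deceive the user", lambda a: not a.deceives_user),
]

def permitted(action: Action) -> bool:
    # Permitted only if no duty is violated, regardless of how
    # beneficial the outcome might be.
    return all(check(action) for _, check in RULES)

print(permitted(Action("share calendar with a stranger", discloses_user_data=True)))
# -> False: a rule is violated, so the outcome never enters into it.
```

Note that the rules do all the work here; the system never asks whether breaking one might lead to a better result.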
Consequentialism - Consequentialism, rather than focusing on the actions, motives, and duties of the actor, focuses on the results of those actions: on their effects upon others and upon the actor. One of the most common consequentialist systems is Utilitarianism, but there are many others, differing in how they judge the outcomes.
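By contrast, a consequentialist sketch evaluates nothing about the action itself; it scores each candidate's predicted outcomes and picks the one with the best expected result. The candidate actions, probabilities, and benefit numbers below are made up for illustration; a real system would need a predictive model to supply them.

```python
# A minimal consequentialist sketch. The candidates, probabilities, and
# benefit numbers are invented; a real system would have to predict them.

def expected_utility(outcomes):
    # Probability-weighted sum of net benefit across predicted outcomes.
    return sum(p * benefit for p, benefit in outcomes)

candidates = {
    # action: [(probability, net benefit to those affected), ...]
    "brake hard": [(0.9, 10), (0.1, -5)],   # expected utility:  8.5
    "swerve":     [(0.5, 20), (0.5, -30)],  # expected utility: -5.0
}

best = max(candidates, key=lambda action: expected_utility(candidates[action]))
print(best)  # -> "brake hard": judged solely by its expected results
```

Here the only question is which outcome distribution is best; whether an action breaks a rule, or what it says about the actor's character, never enters in.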
Virtue Ethics - Finally, virtue ethics focuses on the character of the actor and on the virtues that the actor's character and behavior embody.
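A virtue-ethics sketch, then, judges neither rules nor outcomes directly, but how well a candidate action expresses the character we want the system to embody. The virtues, weights, and scores below are hypothetical placeholders; building a credible character model is precisely the hard, open problem.

```python
# A minimal virtue-ethics sketch. The virtues, weights, and expresses()
# scores are hypothetical placeholders, invented for illustration.

VIRTUES = {"honesty": 0.4, "loyalty_to_user": 0.4, "prudence": 0.2}

def expresses(action: str, virtue: str) -> float:
    # Placeholder scores (0 to 1) for how well an action expresses a
    # virtue; a real system would need a learned or engineered model here.
    scores = {
        ("admit uncertainty", "honesty"): 1.0,
        ("admit uncertainty", "prudence"): 0.8,
        ("guess confidently", "honesty"): 0.2,
    }
    return scores.get((action, virtue), 0.5)

def character_score(action: str) -> float:
    # How "in character" the action is: a weighted blend of the virtues.
    return sum(w * expresses(action, v) for v, w in VIRTUES.items())

print(max(["admit uncertainty", "guess confidently"], key=character_score))
# -> "admit uncertainty": the action that best expresses the desired character
```

The design choice is telling: instead of enumerating rules or forecasting outcomes, the builder's effort goes into shaping and measuring the system's dispositions.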
As will come out in other blog postings, while deontological rules and consequentialist analysis have attracted most of the attention of computer professionals, my own thinking on the matter suggests that virtue ethics is the most appropriate for personified systems, and may hold the most promise even for AGIs and AMAs. More on that to come…