The Roadmap, part 2

November 5, 2015 | Jim Burrows

Introduction

In this article, I will outline what I have been doing, and am continuing to do, on my own during my current “sabbatical”.

Short term

In my previous article, I laid out my grand plan for a program of research into building trustworthy personified systems, as it might be conducted by a group or department within a major enterprise or institution. While that would be an amazing opportunity, it is not one that is immediately on the horizon. Still, the question is of interest to me, and so I have been looking into it armed with what I have: the experience of my career and education.

One of the most important things that I learned when I was working in software human factors and usability research is that real-world testing and research are more powerful and effective than relying on mere human expertise. Nonetheless, expertise and experience have their place, and so I have been applying mine to this question for the last few months. What follows is a summary of what I have been doing and continue to do in this field.

Business: Choose a focus

Since my research is not tied to a specific project or charter from a particular organization, I have been somewhat freewheeling in focus. In general, I have been concentrating on consumer software systems such as virtual assistants and voice recognition systems, and not on hardware-intensive and specialist systems such as military applications, robots, drones, and the like.

There is perhaps one exception to this: the question of autonomous vehicles. Like robots and drones, these are hardware systems, but in the short term, they are being introduced as a number of smaller assistive technologies—systems for parking the car or changing lanes on the highway or avoiding collisions. They are thus, in a way, virtual assistants, but assistants that add a new dimension of their own: the potential to be responsible for actual physical harm (or its avoidance).

Sociology and psychology

Sociology and psychology come into this process in a couple of fundamental ways. They inform us as to the origins and mechanisms of human morality and trust, and tell us how people view personified systems in general and interact with the sorts of systems that exist today. 

I have been relying on two sources in the area of how people view personified systems: the experiments that I did a couple of decades back in “natural man/machine dialogs”, and my own informal survey of how robots and computers have been presented in mass media over the last 6 or 7 decades.

During the 1980s I was a member of an R&D team at Digital Equipment Corp. that was variously known as Human Engineering Research, Software Human Engineering and Software Usability Engineering. We did some of the basic foundational research on the topics of man/machine dialogs and software usability. One of our many findings was that people talked to machines in a way that was similar to, but distinct from, the way that they addressed each other, something we referred to as “Natural Man/Machine Dialogs” and studied with a number of simulated systems, including the so-called “Wizard of Oz” setup, whereby a hidden researcher provided the flexible intelligence that software lacked. We then built systems that behaved the way that the people expected.

One of the things that became clear to me personally at that point was that people’s expectations derived in good part from the portrayal of robots and computers in popular fiction; these in turn depended upon the expectations of the folk who were writing and creating the media representations. This iterative creative refinement has clearly continued over the last three or so decades, interacting with people’s experiences with computers, robots, and other automated systems, and contributing in turn to the design of those systems. Fiction and technology have been shaping each other.

Since trust is, in part, dependent upon living up to people’s expectations, being aware of the expectations set by our fiction and by aspirational technical visions such as Apple’s classic Knowledge Navigator videos can contribute important context.

In order to understand how to make systems be and be perceived as trustworthy, we need to understand the role, mechanism, and history of trust in human society. An excellent source on this, which I consulted at the start of my project, is Bruce Schneier’s “Liars and Outliers”. Schneier, a technology and security expert by trade, brings a very technical and detailed viewpoint to the subject.

While it does not have an explicitly defined role in my longer-term roadmap, I have also been doing a survey of the social psychology of morality. This is a field that has grown considerably since my days as a social psych major, and it serves as one of several foundations for the next area, philosophy and ethics. As we personify systems and begin to evaluate their behavior using terms like “trustworthiness”, “loyalty”, and “discretion”, it becomes important to understand what causes us to perceive, or fail to perceive, these qualities in others—what the user’s theory of mind is, and what adds to and detracts from our perceptions of others as “moral”, “immoral”, and “amoral”. This is a topic that would naturally be covered by any team that undertook the long-term program, but one that I was somewhat behind on as I started this project.

Philosophy and Ethics

In discussing this project with friends and colleagues, the topic of Artificial Intelligence, and of when robots and other autonomous systems would become true autonomous moral agents, naturally came up. In the course of those discussions, I realized that my reasons for having always had low expectations of AI had become clear enough that I could write about them. The result is a short essay, not properly part of this project, on the topic of AI and “Common Sense” in the original Aristotelian or medieval sense. It can be found here.

Given that Artificial General Intelligence—AGI, intelligence on the order of our own, the sort that could allow artificial systems to become true autonomous moral agents—is still well in the future (traditionally 25 years from “now” for the current value of “now”), we need to address not the question of how to endow personified systems with morals and ethical judgement, but rather how to get them to behave in a manner that is congruent with human norms and ethics. That, in turn, leaves us with the question of which system of morals they should align with.

There are three broad schools of normative ethics: deontology, ethics based upon rules of behavior; consequentialism, based upon the outcome, or expected outcome of our actions; and virtue ethics, based upon the nature and character of the actor. My tentative conclusion is that absent fully human-level AGI, both deontology and consequentialism depend upon an analysis and understanding of the nature and consequences of one’s actions that cannot be achieved by a mere personified system. I have therefore focused my efforts on finding a suitable set of virtues for virtual assistants and autonomous vehicles.

After surveying systems of virtues from Eagle Scouts to war fighters to the standards of various human professionals, as well as butlers and valets, I’ve come up with a system of virtues that I am broadly categorizing as either “functional” virtues or aspects of trustworthiness. They are:

  1. Functional or utilitarian virtues
    1. Helpfulness
    2. Obedience
    3. Friendliness
    4. Courtesy
  2. Aspects of Trustworthiness
    1. Loyalty
    2. Candor
    3. Discretion
    4. “Propriety”

The first group consists of attributes that are familiar to software developers, UI/UX specialists, and usability engineers; they are very similar to the attributes of good, usable, approachable user interfaces. The second group consists of the four subsidiary virtues that together constitute trustworthiness. It is these that I have been focusing my attention on.

Operational

In order to endow systems with these virtues, system architects and software developers need clear definitions of what each one means. They can then ask themselves, each time they are deciding how a system should behave, “What would a virtuous person do?” By focusing on emulating the behaviors that are commensurate with trustworthiness and its subsidiary virtues, engineers can create systems that will integrate as smoothly as possible into society. Thus, the next phase in my persona project is defining, in operational terms, what I mean by each of these. This is the main set of issues that I am now addressing. The following list captures my current thinking on them; the operational definitions are still a work in progress.

Loyalty: The central question with regard to loyalty is “Whose interests are being served?” Does the system primarily serve the needs of the user or someone else? There are, depending upon the precise nature of the system in question, many whose interests might be served: the user, the owner or the manufacturer of the system, society at large, and so on. The user and owner might be different in the case of a company-issued smartphone, or a companion or care-giving system for children, the infirm, or the elderly.

The classic example here comes from the realm of autonomous vehicles: the so-called “Trolley Problem”. Does one take positive action to kill one person in order to save five? For an autonomous vehicle, the question arises: should the auto be willing to sacrifice its passengers in order to save a greater number of bystanders? Should it take no positive action that results in death, but rather allow deaths by inaction, because positive action exposes the owner, manufacturer, or designer to greater liability? Should the actions of an auto depend upon whether it is acting as a private chauffeur, a commercial limo driver, or the driver of a municipal transit vehicle?

There are similar questions regarding the voice data of systems such as Siri, Alexa, and Cortana. The data, once analyzed and parsed in order to distinguish among homophones, becomes information, or even knowledge, about the user. As such, it is of value in the information economy, allowing searches to be improved or ads to be better targeted and thus made more valuable. Should the user’s desire for privacy outweigh the system operator’s or manufacturer’s economic interests? Does it make a difference if the virtual assistant is owned by the enterprise that the user works for rather than by the user?

For both personified systems and human beings, loyalty is not a simple one-dimensional attribute; rather, each of us is loyal to family, friends, employer, nation, and so on. Loyalty, operationally, is a matter of priorities. Do the interests of one party or group outweigh those of another? 

Designers and implementors of personified systems ought to identify one or more explicit hierarchies of loyalty for their system and then stick with them, evaluating at each major decision point how the priorities come into play. I say “one or more” because different customers may want different sets of loyalties and priorities, or may want to choose between them. Offering a feature that allows for the customization of priorities is, of course, quite a lot more work.
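
To make that concrete, here is a minimal sketch of what an explicit, inspectable loyalty hierarchy might look like in code. It is purely illustrative: the stakeholder names, the LoyaltyPolicy class, and its helper methods are my own hypothetical examples, not drawn from any existing system.

    # Illustrative sketch only: an explicit, inspectable loyalty hierarchy.
    # The stakeholder names and helper methods are hypothetical examples.
    from dataclasses import dataclass, field

    DEFAULT_HIERARCHY = ["user", "owner", "manufacturer", "society"]

    @dataclass
    class LoyaltyPolicy:
        # Stakeholders listed from highest priority to lowest.
        hierarchy: list = field(default_factory=lambda: list(DEFAULT_HIERARCHY))

        def outranks(self, a: str, b: str) -> bool:
            """Return True if stakeholder a takes priority over stakeholder b."""
            return self.hierarchy.index(a) < self.hierarchy.index(b)

        def choose(self, options: dict) -> str:
            """Given candidate actions keyed by the stakeholder each favors,
            return the action favoring the highest-priority stakeholder."""
            best = min(options, key=self.hierarchy.index)
            return options[best]

    # A company-issued assistant might ship with a different, documented ordering.
    corporate = LoyaltyPolicy(hierarchy=["owner", "user", "manufacturer", "society"])
    print(corporate.choose({
        "user": "keep the recording on the device",
        "owner": "log the request to the employer's server",
    }))  # -> "log the request to the employer's server"

The point of the sketch is simply that the ordering is explicit, documented, and swappable, rather than being implicit in special cases scattered throughout the code.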

The complexity of the issue of loyalty leads to the next virtue.

Candor: Personified and autonomous systems need to be candid with their users, both with regard to their loyalties and with regard to the trade-offs and circumstances that cause the interests of others to supersede those of the user. It may be just fine for the system to put the interests of one’s employer or of society at large above those of the user, but only if the user is aware of this and can expect it. Making such trade-offs “behind the user’s back” is not compatible with being trustworthy.

An early list of virtues included “honesty”, but I modified this to “forthrightness” and then “candor” to reflect the need, in the context of trustworthiness, for the focus to be on what the user understands rather than on what the system says. It is not sufficient to tell the truth in a nine-page terms-of-service written in legalese. If a system is to be worthy of trust, it must be candid, explaining things in a manner that is easily understood and allowing the user to request clarification or details. Developers and designers should be asking, “Have I made it clear to the user what to expect?” not merely with regard to questions of loyalty, but with regard to the behavior of the system as a whole.
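
One pattern that follows from this, sketched below purely as an illustration, is layered disclosure: a short plain-language summary up front, with fuller detail available on request. The Disclosure class and the wording of its contents are hypothetical, not taken from any shipping assistant.

    # Illustrative sketch: layered, plain-language disclosure rather than a
    # single wall of legalese. All names and wording here are hypothetical.
    from dataclasses import dataclass

    @dataclass
    class Disclosure:
        summary: str   # one or two plain sentences shown up front
        details: str   # fuller explanation, shown only on request

        def brief(self) -> str:
            return self.summary + " (Say 'tell me more' for details.)"

    voice_data = Disclosure(
        summary="Your requests are sent to our servers to be understood, and "
                "they may also be used to improve the service.",
        details="Audio is transcribed in the cloud. Transcripts may inform "
                "search ranking and ad selection unless you opt out in Settings.",
    )

    print(voice_data.brief())    # what the assistant says before first use
    print(voice_data.details)    # offered only when the user asks for more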

Discretion: The issue of trade-offs between conflicting loyalties brings us to the question of discretion. One of the reasons that the trustworthiness of personified systems is becoming a pressing issue is that, in order to understand what we are asking of them, virtual assistants and voice recognition interfaces need to analyze and parse what we say, and they tend to rely on the power of cloud-based servers to do it. This means exfiltrating data—our spoken utterances—to the cloud and then analyzing it not merely in terms of the sounds made, but in terms of the semantic content. Knowledge of the topic at hand helps them distinguish homonyms and the like.

A side effect of this analysis is that detailed knowledge about what we have said and what we are interested in now resides on the system’s servers. This semantically tagged information may have considerable economic value. We have long spoken of the “information economy”, but in recent years this has become very literally true. Substantial portions of our economy now trade not in value represented by money, but in information; information has taken on the role of currency. We buy many of the services that the Internet offers by paying not in money but in information about ourselves. Because of this, it becomes increasingly difficult, if not impossible, to opt out of the trade in personal information. It is nearly as hard as avoiding the use of money.

If personified systems are going to be loyal to us, have access to intimate, private, and personal information about us, and engender any sense of trust, then it becomes important that they be able to distinguish information that needs to remain confidential from information that can be used to pay our way in the information economy. They need the discretion to maintain our confidences. Only by recognizing that some information about us is more crucial to our interests than the rest can they properly make the trade-offs between our interests and those of others; only then can they manifest loyalty.
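
Again as a purely illustrative sketch, and not a claim about how any real assistant works, discretion could start with tagging each piece of derived knowledge with a confidentiality class and gating every secondary use on that tag; the class names and the gate function below are hypothetical.

    # Illustrative sketch: tag derived knowledge about the user with a
    # confidentiality class and gate every secondary use on that tag.
    # The class names and the gate function are hypothetical.
    from enum import Enum

    class Confidentiality(Enum):
        PUBLIC = 1        # may be used for ads, ranking, analytics
        SHAREABLE = 2     # may be traded only in aggregated or anonymized form
        CONFIDENTIAL = 3  # used only to serve the user's own request

    def may_use_for_advertising(level: Confidentiality) -> bool:
        """Discretion as a gate: only the least sensitive class is tradeable."""
        return level is Confidentiality.PUBLIC

    # Example: a health-related utterance is parsed into knowledge about the user.
    inferred = {"topic": "medication schedule", "level": Confidentiality.CONFIDENTIAL}
    if not may_use_for_advertising(inferred["level"]):
        pass  # keep it out of the ad-targeting pipeline entirely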

Propriety: While trustworthiness’s first three constituent virtues, loyalty, candor, and discretion, are intimately intertwined, the fourth is a bit more independent and complex. I am currently calling it “propriety”, but as the process of operationalizing it progresses, I may find a better label for the concept or a better formulation of the principle in question.

At first glance, it would appear obvious that a personified system should be friendly and emotionally engaging, but upon further consideration it seems wiser for such systems to maintain a bit of emotional distance of the sort that we associate with a “professional demeanor”, for a variety of reasons that depend upon the exact role the system performs. For instance, systems that are companions for children should not compete with real people for a child’s affection and attention. Children need to develop people skills, and that means dealing with real people. An automated system might be able to offer some support to a child in that endeavor, but it should not supplant other people.

This applies not merely to children but to anyone that these systems interact with. Except in very special circumstances—autistic children who need an “almost human” to practice on, a prisoner denied the company of folks from the outside world, and so on—artificial people are a poor substitute for a real person, and they should not replace genuine emotional bonds. 

Similarly, since, as we have noted, it is important that intimate and confidential information not be traded or given away by the system, one of the best ways to protect against that is not to encourage the human user to confide too much in the system. Thus, an automated system should, in general, maintain a pleasant, polite, and positive relationship with its user, but it should also maintain the cool and professional demeanor of a human butler or valet, a therapist or caregiver, a nanny or teacher, rather than trying to be the user’s friend, lover, or confidant.

Technical

Once operational definitions are available for these virtues, they will need to be turned into a set of guidelines and design elements that can serve as a starting point or template for the design and implementation of systems. Some of these occur to me as I work to operationalize the virtues, and I have begun collecting them. So long as this is a small, one-man theoretical exercise, the list will remain substantially incomplete, but as a long-time system and software architect, I cannot help but contemplate and accumulate a few.