Welcome to Personified Systems

October 26, 2015 | Jim Burrows

What's a "trustworthy personified system"? Every day we see new systems emerging that we interact with as if they were people. We talk with them, take their advice; they read our gestures and facial expressions, help drive our cars, amuse our children and care for the aged and infirm. If they are going to integrate with our society and begin to act as if they are persons, we need them to behave as trusted members of society, not as outsiders, outliers or outlaws. How do we create that trust? That's what I hope to discuss here.

Introduction

Welcome to my blog. I’m Jim Burrows, a computer professional who prides himself on doing things he’s never done before. Having recently done the same thing twice in a row (building secure and private communications systems), I became convinced that I needed to shift my focus to something new. I’ve therefore spent the last couple of months trying to figure out the next big thing we’re all going to need, so that I can work on it. I’ve concluded that it is well-behaved or trustworthy systems, especially “personified systems.” Having said that, I probably ought to explain what that means and why I think it is important.

I am using the term “personified systems” to address a number of converging and interrelated trends. In simplest terms, what I am calling personified systems are computer- and network-based systems that are powerful and ubiquitous enough that we deal with them not so much as computers, or even as tools, but as if they were people or beings that interact with and assist us.

There are at least three factors that contribute to our regarding them as “personified systems”: how they present themselves to us, how much they “know”, and the jobs that they do. In terms of presentation and interaction, they may talk with us, respond to text commands in more or less natural language, or watch us and our gestures. Systems with voice recognition, such as Siri, Alexa, Cortana, “Hello Barbie”, the NAO robot, and the CogniToys green dinosaur, are all personified systems in this sense. They also, generally, exhibit another of the defining attributes: they all rely on semantic analysis in order to understand what we are saying, and that analysis is almost always done “in the cloud” on remote servers. This means that these network-distributed systems have not merely simple data, but information that has been turned into knowledge (a distinction I will describe soon). Finally, personified systems serve as virtual assistants or active agents working on our behalf. They may be personal assistants, nannies, caretakers, medical assistants, or full- or part-time drivers of our planes, cars, and trucks. They have job descriptions more than defined uses.
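
To make those three factors concrete, here is a minimal, hypothetical sketch in Python of the loop such a system runs: it listens to us (presentation), hands the utterance off for semantic analysis (knowledge), and then acts on our behalf (job description). The function names and the toy intent format are my own illustrative inventions, not any real assistant’s API.

    # A hypothetical sketch of a personified system's interaction loop.
    # Nothing here is a real product's API; every name is invented for
    # illustration.

    def transcribe(audio: bytes) -> str:
        """Stand-in for speech recognition (presentation and interaction)."""
        return "turn on the porch light"  # canned result for illustration

    def analyze(utterance: str) -> dict:
        """Stand-in for cloud-side semantic analysis: raw text (data)
        becomes a structured intent (information, and eventually knowledge)."""
        if "light" in utterance:
            return {"intent": "set_light", "target": "porch", "state": "on"}
        return {"intent": "unknown"}

    def act(intent: dict) -> str:
        """The 'job description' side: the system acts as an agent for us."""
        if intent["intent"] == "set_light":
            return f"Setting {intent['target']} light {intent['state']}."
        return "Sorry, I didn't understand that."

    if __name__ == "__main__":
        audio = b""  # microphone input would go here
        print(act(analyze(transcribe(audio))))

In a real system, the analyze() step is the one that runs on remote servers, which is precisely where the questions of trust that concern me arise.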

What none of these systems are today is full-fledged, human-scale artificial intelligences, what these days are generally called AGIs—Artificial General Intelligences. They are not conscious or self-aware; they cannot pass the Turing Test. Rather, they mimic some aspects of genuine persons without actually being persons. They have no intentions or motivations. They are just systems that exhibit some characteristics of persons. They are, thus, “personified.” They may talk and communicate like us, or move and act like us; they may turn mere data into not only information but knowledge, and so begin to understand. We are entrusting them with our information and with knowledge about our businesses, homes, and children—and in the case of autonomous vehicles, even with our physical safety.

This brings us to the second half of the introductory phrase that I used above: systems that are “well-behaved or trustworthy”. When computers were seen merely as things, as tools, we spoke of them in terms like “well-constructed”, “reliable”, “robust”, or conversely “buggy” or “broken.” As they become personified, more normative language starts to be useful. We may think of them as “trustworthy”, “well-behaved”, “helpful”, “friendly”, “funny”, “smart” or “dumb”. This isn’t just a matter of language. It is tied to how we think about them, how we expect them to act, what we are willing to confide in them, and how we and they behave.

What’s Next?

After thinking about this for many weeks, I’ve come to the conclusion that what we need to do in this area is to define a set of virtues that personified systems, and computerized systems in general, ought to exhibit: virtues that guide us in their construction and that should be reflected in their behavior. I have started to construct a list of those virtues, based on the roles that these systems are and will be playing, and it seems to me that they are all aspects of the general virtue of Trustworthiness. This is a bit arbitrary, but it is at least an initial model that future work and thinking can be built around.

I am currently breaking Trustworthiness into Loyalty, Candor, Discretion and what I am at least for now calling “Propriety.” These are the four major normative or “ethical” virtues. Beyond them, I have identified four “utilitarian” or “functional” virtues: Helpfulness, Obedience, Friendliness and Courtesy. Together, these nine virtues are at the heart of the work that I have done so far. 
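
For readers who think better in code, here is one hypothetical way to write that taxonomy down as a small Python data structure, with Trustworthiness as the umbrella over the two groups. The names mirror the text above; the structure itself is just my sketch, not any existing standard or library.

    from enum import Enum

    # The four major normative or "ethical" virtues: aspects of Trustworthiness.
    class NormativeVirtue(Enum):
        LOYALTY = "loyalty"
        CANDOR = "candor"
        DISCRETION = "discretion"
        PROPRIETY = "propriety"  # a provisional name, per the text

    # The four "utilitarian" or "functional" virtues.
    class FunctionalVirtue(Enum):
        HELPFULNESS = "helpfulness"
        OBEDIENCE = "obedience"
        FRIENDLINESS = "friendliness"
        COURTESY = "courtesy"

    # Trustworthiness itself is the ninth, overarching virtue; the other
    # eight are grouped beneath it.
    TRUSTWORTHINESS = {
        "normative": list(NormativeVirtue),
        "functional": list(FunctionalVirtue),
    }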

In the next couple of blog postings, I will outline the roadmap I would follow if I were to launch this effort as a major research and development project, describe the nine virtues I just listed in greater detail and more operational terms, and sketch how I see this work relating to what I expect the future of AI, AGI, and related efforts to be. After that, we will see where this work goes and what work it spawns for me and others.

I hope you’ll come along for the ride.