The Roadmap, part 1

October 29, 2015 | Jim Burrows

Introduction

In this article, I will outline what I would do if designing the behavior of trustworthy personified systems were a major research and development effort within a large, well-funded R&D organization. In my next article, I will outline what I have been doing, and will continue to do, on my own during my current “sabbatical”.

Long term

Doing the job properly, that is, identifying which principles of system behavior should be adopted and then beginning to implement them, is a major R&D effort that crosses a great many of the disciplines I have studied and practiced during my career. It is certainly well beyond the scope of a short, one-man project. In this section I will outline the types of research and development activities that would be involved. In the real world, any enterprise or institution that mounted an R&D project or program in this space would probably scope the effort differently; while it is quite possible to envision that scope being larger, it is more likely that it would be narrower than what I describe here. Still, it seems worthwhile to look at the problem in a large, generalized context, at least until I can identify a plan to realize some piece of it personally.

Business: Choose a focus

“Personified systems”, as I have defined them, cover a huge array of existing and emerging products. Any actual R&D project is likely to address only a subset of them, chosen according to the business and research requirements of the enterprise or institution doing the research and/or development. 

The general category spans everything from cloud-based analytic services to the conversational agents, applications and robots that run on mobile, desktop and IoT devices.

Sociology and Psychology

There are a number of social and psychological issues that need to be addressed, either through the expertise of the participants on the team or through explicit research. These questions concern both the broad background and context in which the systems operate and the specific interactions of the individual systems being studied and developed in the area of focus. Areas that need to be covered are:

Societal expectations: How do people think of personified systems, robots and the like? What are our expectations of them?

Natural man/machine dialogs: Given that background, how do people speak to and interact with machines? In many ways this is similar to how we interact with each other, but research shows that knowing something to be a machine alters how we speak to and interact with it. Part of this is due to the limited capabilities of machines that are not yet fully intelligent; part is due to the expectations that society, fiction and media set; and part is due to the different role that artificial systems play.

Impact upon us: For the foreseeable future, personified systems and AGIs will serve a sub-human role in society. This is likely to be so even for AGIs, until they not only are autonomous moral agents deserving of rights, but are accepted as such by society and the law. This “sub-human” role will have an impact on us. As we treat these systems as persons, or at least as person-like, and at the same time as inferiors, the practice is likely to affect how we deal with other humans. Will it pressure us to go back to thinking of some people as sub-human, or will it clarify the line between all humans, who are full persons, and non-humans, who are not?

Social psychology of morality: Substantial research has been done on both the neurophysiology of morality and its biological and social foundations. Work on this project needs to be well grounded in these aspects of human morality and behavior in order to understand how artificial systems can be integrated into society.

Jonathan Haidt’s Social Intuitionist model and Moral Foundations theory, if valid and accurate, may provide valuable grounding for understanding the human morality into which we are attempting to integrate autonomous systems. On the other hand, Kurt Gray’s critique of those specific foundations and of Haidt’s work, along with his own theories about how we form a theory of mind and the role of our perception of membership in the “Mind Club”, provides alternative clues as to how to integrate personified systems into people’s moral and social interactions.

Philosophy and Ethics

The next step, based upon the roles and capabilities of the systems in the area of focus, and upon the expectations, needs and desires of the users, is to decide upon a suitable model of normative ethics and then to flesh it out. There are three major classes of normative ethics: deontological, that is to say rule-based; consequentialist, focusing on the results and impact of actions; and virtue-based, focusing on the character of the actor.
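To make the distinction concrete, here is a minimal sketch of the three classes as interchangeable action evaluators. It is purely illustrative: the Action fields, function names and scoring schemes are all invented for this example and do not describe any real system.

```python
# Illustrative only: the three major classes of normative ethics expressed
# as interchangeable evaluators of a proposed action. All names invented.
from dataclasses import dataclass, field


@dataclass
class Action:
    description: str
    # predicted outcome -> estimated utility (consequentialist input)
    predicted_outcomes: dict[str, float] = field(default_factory=dict)
    # virtue or trait -> degree the action expresses it (virtue input)
    traits_expressed: dict[str, float] = field(default_factory=dict)


def deontological_ok(action: Action, rules: list) -> bool:
    """Rule-based: the action is permissible iff it violates no rule."""
    return all(rule(action) for rule in rules)


def consequentialist_score(action: Action) -> float:
    """Consequence-based: judge the action by the net utility of its outcomes."""
    return sum(action.predicted_outcomes.values())


def virtue_score(action: Action, virtue_weights: dict[str, float]) -> float:
    """Virtue-based: judge the action by the character traits it expresses."""
    return sum(virtue_weights.get(v, 0.0) * degree
               for v, degree in action.traits_expressed.items())
```

Even in this toy form, the first evaluator needs the full set of applicable rules and the second needs the full set of predicted outcomes, which previews the difficulty discussed next.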

I have suggested that, given the demands of understanding all of the deontological rules that might apply to a given action, and of predicting all of the consequences of that action, a virtue-based system is most suitable for autonomous systems, at least until they are highly sophisticated artificial general intelligences fully capable of being autonomous moral agents (AMAs), and perhaps even once they have attained that level. This, however, is by no means certain. Researchers such as Selmer Bringsjord and his colleagues at the Rensselaer AI & Reasoning Lab have done considerable work in developing and using what they describe as a “Deontic Cognitive Event Calculus” system. The possibilities of such a system should not be dismissed without rigorous examination and analysis.

Once one of these three major paradigms has been chosen, a more detailed system of rules or principles will need to be developed. Again, the area of focus will have a significant impact on which specific virtues, rules or principles are chosen, and on the priority relationships between them, but I expect a great deal of overlap between the requirements of different areas, as the sketch below suggests.
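As a purely hypothetical picture of that overlap, the focus areas, virtue names and groupings here are placeholders rather than recommendations:

```python
# Hypothetical sketch: virtue sets for different focus areas sharing a
# common core. Every name below is a placeholder, not a recommendation.
CORE_VIRTUES = {"honesty", "discretion", "loyalty-to-user"}

FOCUS_AREA_VIRTUES = {
    "home-assistant":   CORE_VIRTUES | {"privacy-mindedness"},
    "eldercare-robot":  CORE_VIRTUES | {"patience", "vigilance"},
    "scheduling-agent": CORE_VIRTUES | {"punctuality"},
}

# The requirements that all focus areas share are exactly the common core.
shared = set.intersection(*FOCUS_AREA_VIRTUES.values())
assert shared == CORE_VIRTUES
```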

For virtually all of the focus areas, there are existing human professions with their own rules of conduct, ethical standards and requisite virtues. Designing a specific normative system will call upon those bodies of work, along with general ethical considerations and the peculiarities of establishing a normative system for actors who are non-human and, in both the cognitive and ethical realms, sub-human. Even when truly autonomous moral agents emerge and are generally recognized, it seems likely that their natures will still be different enough that the normative systems governing their behavior will differ from those governing ours.

One area of study that will need to be addressed is the impact upon us, as humans, of dealing with sub-human actors, and of the normative systems that apply to both them and us. We are barely getting to the point where our social systems no longer recognize classes of supposedly inferior humans, and we have not gotten very far in considering the ethical roles of animals. As machines begin to be personified, as we begin to speak to them and even converse with them, the impact upon our own consciences and behavior of dealing and interacting with non-persons or semi-persons with limited or different ethical roles will need to be monitored.

Operational

Once a particular normative ethical system and its set of principles, rules or virtues has been chosen, that set will need to be operationalized, and the priorities and relationships among its elements will need to be clearly defined. Since existing systems fall far short of the analytic capabilities and judgment required to apply rules and principles to specific actions, the bulk of that analysis will have to be done by researchers and developers and turned into specifications and descriptions of the required behavior of the systems. This is a major undertaking.
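To suggest what “operationalized” might mean in practice, here is a sketch in which a few invented principles are reduced to concrete, machine-checkable tests with an explicit priority ordering. The principles, priorities and checks illustrate the form only; they are not a proposed standard.

```python
# Illustrative sketch: principles reduced to concrete, prioritized checks.
# The principles and their tests are invented examples of the form.
from typing import Callable, NamedTuple


class Principle(NamedTuple):
    name: str
    priority: int                    # lower number = takes precedence
    permits: Callable[[dict], bool]  # concrete test over a proposed act


PRINCIPLES = [
    Principle("protect-user-safety", 0,
              lambda act: not act.get("risks_harm", False)),
    Principle("respect-confidences", 1,
              lambda act: not act.get("discloses_private_data", False)),
    Principle("obey-user-requests", 2,
              lambda act: act.get("user_requested", True)),
]


def permitted(act: dict) -> tuple[bool, str | None]:
    """Check a proposed act in priority order; the highest-priority
    violated principle decides the outcome and is named in the result."""
    for p in sorted(PRINCIPLES, key=lambda p: p.priority):
        if not p.permits(act):
            return False, p.name
    return True, None


# An act that would disclose private data is refused, and the refusal
# names the governing principle:
ok, why = permitted({"discloses_private_data": True})
assert (ok, why) == (False, "respect-confidences")
```

The hard part, of course, is not this scaffolding but writing the tests themselves, which is exactly the analysis that researchers and developers will have to supply.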

Technical

Once the rules have been operationalized, they will have to be embodied in the design and implementation of the actual software systems involved: both the analytic systems (most of which reside in the cloud today) and the more active agents and applications that reside in mobile, desktop and IoT devices. Since the handling and exfiltration of sensitive information is a major issue in the trustworthiness of these systems, special care will have to be taken in the design of the distributed aspects of the system so as to control the domains to which various pieces of information are exposed.
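One minimal sketch of such control, with invented domain names and types, is to tag each piece of sensitive data with the domains permitted to see it and to check every transfer against that tag. A real design would also need encryption, auditing and policy management; this shows only the shape of the check.

```python
# Minimal sketch: data tagged with the domains allowed to see it, and a
# transfer gate that enforces the tags. All names here are invented.
from dataclasses import dataclass


@dataclass(frozen=True)
class TaggedDatum:
    value: str
    allowed_domains: frozenset[str]  # e.g. {"on-device"} or {"on-device", "vendor-cloud"}


class DomainViolation(Exception):
    pass


def send(datum: TaggedDatum, destination_domain: str) -> None:
    """Refuse to move a datum into a domain its tag does not allow."""
    if destination_domain not in datum.allowed_domains:
        raise DomainViolation(
            f"{destination_domain!r} is not permitted for this datum")
    # ... the actual transfer would happen here ...


health_note = TaggedDatum("resting heart rate: 52", frozenset({"on-device"}))
send(health_note, "on-device")       # allowed by the tag
# send(health_note, "vendor-cloud")  # would raise DomainViolation
```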

[Continued in next installment]