AI in the News - #1

November 3, 2015 | Earl Wajenberg


This column is for posting links to current news stories about AI and the ethics of how it is designed and used.

Here is the first round of current links:

Trusting AIs with professional decisions (New Scientist)
Automation long ago hit blue-collar jobs.  Now it is poised to start taking white-collar jobs.  Aside from the alarm spreading among white-collar workers, there's the issue of how much authority to give these expert systems.  Is anybody really checking the diagnosis of the autodoc or the sentencing of the autojudge?

Assisted AI - surely a theater for ethics (New Scientist)
AI taketh away and AI giveth.  In the same issue of the same magazine, there's a story about the new white-collar jobs: pinch-hitting for AIs and training them to do better next time.


Will you let Google write your letters? (Phys.org)
Google will be offering to write replies to your email.  You just hit Smart Reply and it generates three likely drafts.  You ought to read through them before picking one, but then you ought to have read the incoming message, too.  Did you?


To give a larger sample in this first posting, here are a couple of stories from last month:

Therapy robots (Phys.org)
A teddy bear or a dog can help anyone with emotional problems.  A robotic toy can help even more, if it's programmed to act as if it has the same emotional problems as the child.  In fact, because robots aren't really people, much less grownups, children may find them easier to open up to.

OK Google accountability (NPR)
Okay, Google, what commands did I give you three months ago, on the second Thursday?  You know that, do you?  As in, you keep a record of everything I say?...


And here are a couple from last quarter:

How personal an assistant? (NPR)
OK Google takes that dossier it has on you and makes inferences from it.  Ask it for a restaurant and it'll try to pick one you like.  Soon, it'll read your mail and pick things from it that you need to put on your calendar (in its opinion).  Siri, on the other hand, is keeping her distance, as a deliberate design choice by Apple.  This is the kind of choice that falls under Jim's AI virtue of Propriety.

Ethical robots (Nature)
This is a general article on the topic, quoting several workers in the field and featuring an actual simulation of Asimov's First Law of Robotics.