Interview
02 Nov 2018

Bias and the art of robot development

The new technological age of robotics 2.0 is coming. As robots begin to work with people and develop relationships with them, what biases and ethical concerns are surfacing? An interview with Andra Keay.

Zuzanna Ziomecka, Gazeta Wyborcza, Poland
Photo: Japanese robot, Pixabay

Andra Keay is Managing Director of Silicon Valley Robotics, which supports the innovation and commercialization of robotics technologies. She is also the founder of Robot Launchpad for startups, and cofounder of Robot Garden, a new robotics hackerspace. 

Zuzanna Ziomecka: When I think about robots I think about Harmony -- the sex robot which is said to be available to consumers sometime this year. Can you please tell me that there are women in this industry who can make technology work for the other half of the global population as well?

Andra Keay: Honestly, I think sex robots are over-engineering. There are better, simpler solutions to hand -- pardon the pun -- that people are already using. Meanwhile, yes, there are women in the industry, and they tend to be much more focused on pragmatic [issues] -- in healthcare or agriculture, or a home robot that will be able to relate to everybody in the house.

Women in robotics tend to be really good at looking for robots that can solve problems for people.

They get very involved in learning what people who are doing a job need to make their job better. This is the sort of robotics I like to see come into the world, because these robots augment humans and make our lives easier.

ZZ: What is on the frontier of robotics technology today? 

AK: We’ve had 50 years of robotics 1.0. We’ve done a lot with a very simple robot -- the robot arm -- and it's been doing what they call the 3 Ds -- dull, dirty and dangerous jobs. But it has to be kept in a factory and in a cage because it’s also dumb. It doesn’t know when people are around, and it cannot adapt its actions. What we’re seeing now is the start of robotics 2.0. This entails robots that are capable of some social interaction and can be soft, which means they can be engineered to be adaptive to people.

A collaborative robot is capable of sensing when someone is nearby and slowing down or taking avoidance behaviors, much as self-driving cars do. The big leap forward is based on perception technologies that allow obstacle avoidance.
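To make that idea concrete, here is a minimal illustrative sketch in Python of the kind of proximity-based speed scaling a collaborative robot might apply. This is not any real robot's API; the thresholds, speeds and sensor input are all hypothetical:

```python
# Toy sketch: scale a collaborative robot's speed by human proximity.
# All numbers are hypothetical illustrations, not a real safety spec.

FULL_SPEED = 1.0      # normalized maximum arm speed
SLOW_SPEED = 0.2      # reduced speed when a person is near
SAFE_DISTANCE = 2.0   # meters; beyond this, run at full speed
STOP_DISTANCE = 0.5   # meters; closer than this, stop entirely

def speed_for_distance(distance_m: float) -> float:
    """Return a speed limit based on the nearest detected person."""
    if distance_m <= STOP_DISTANCE:
        return 0.0                      # person too close: halt
    if distance_m >= SAFE_DISTANCE:
        return FULL_SPEED               # nobody nearby: full speed
    # Interpolate between slow and full speed in the zone between.
    span = SAFE_DISTANCE - STOP_DISTANCE
    fraction = (distance_m - STOP_DISTANCE) / span
    return SLOW_SPEED + fraction * (FULL_SPEED - SLOW_SPEED)

if __name__ == "__main__":
    for d in (0.3, 0.8, 1.5, 2.5):
        print(f"person at {d} m -> speed limit {speed_for_distance(d):.2f}")
```

The point of the sketch is the behavior Keay describes: the robot does not need to reason about ethics, it only needs reliable perception of where people are, and a simple rule that degrades its speed gracefully as they approach.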

ZZ: So, we’ve reached a point in the development of these technologies where it becomes important to discuss ethics. Self-driving car manufacturers are faced with decisions about whose life their car will value more: their passengers, the people in the car they are about to collide with, or the pedestrians on the sidewalk nearby. How should we be making these decisions?

AK: It’s great that we’re having these conversations about ethics, but sometimes I find the scenarios very far-fetched. And that particular trolley problem is not actionable at all. It’s not going to be possible to encode such a clear ethical condition in our programming.

The problem will be more along the lines of the car being unable to sufficiently differentiate objects in a shadow.

At this point, people are always in the [decision-making] loop -- even if [the car is] self-driving or autonomous, the technology always falls under somebody’s control.

We need to pay more attention to recognizing whose control it should be in at any given time and then to proceed from that point onwards. 

ZZ: Who is having conversations about ethics in the industry and what’s come out of them so far? 

AK: There was a series of ethics workshops conducted in 2010 in the UK that developed the EPSRC guidelines. I like them because there are only five, and they cover everything that we need. Here’s a very simplified version.

The first one is: let’s not build fully autonomous killing robots -- that’s just a bad idea, and it’s been approved by the United Nations, with the exception of two countries, one of them being the United States, so enough about that one. [laughs]

The second principle is: don’t make robots that break the law. I think it’s disingenuous when people who build technologies say "it’s neutral," or "there are no laws or regulations that apply to it, because it’s new." That’s just being willfully ignorant and trying to get away with not thinking it through.

The follow-on from that, the third rule, is: don’t build faulty products. There are consumer regulations where, if we buy something and it doesn’t work as advertised, we have the right to get our money back or make complaints against the company. Somebody who’s serious about building a real business knows that they have to spend a lot of time testing, that they have to build things that keep working; otherwise they will kill their business and pose a danger to people.

The last two guidelines seem simpler, but are actually the harder ones -- we must not make products that manipulate people. New [robot] capabilities could be used to manipulate us in ways that we’re not yet aware of or able to protect against. For example, we’ve made the transition from people calling us with sales calls to robocalls. And that’s all right when you know that it’s a robocall. Recently, however, Google demonstrated a voice assistant that was capable of putting in a lot of the pauses and the things we expect to have in a real conversation -- the whole cadence of the conversation sounds real. In that instance we can’t see the caller, so how will we be able to tell it’s a robot?

And that comes to the fifth principle, which is that a robot should be transparent in how it operates and identifiable. We should understand what it is, what it’s doing, why it’s doing it, and who owns it or what its license number is. Pretty much every other major technology that rolls out in society has to have a registration or a license. This is an essential part of creating something that has a regular role [to play] in society. The key point there is that we need to translate from robot into human, and that’s the heart of the ethical argument for me. We can’t just take something like that trolley problem and put it into robots, because robots do not speak that language, and they do not operate that way. But if we understand how robots operate, then we can translate [their actions and communications] appropriately into human language. And that includes laws and social norms.

ZZ: This brings me to another sticky point. Not all social norms are conscious. There’s a problem with programmers embedding their unconscious biases into AI. What ideas does the industry have about addressing this? 

AK: You’ve hit on the real problem. We have bias in our algorithms, we have bias in the data that feeds them, we have bias in how we’ve developed them -- and it’s unconscious.

The very technology of deep learning reinforces [our] biases and the problem with that is that we [then] perpetuate stereotypes. That is the biggest social problem that we face with robotics.
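As a toy illustration of how a skew in training data becomes a skew in behavior (the data here is invented, and this is nothing like how production models are actually built), consider a naive "model" that simply predicts whatever it saw most often:

```python
# Toy illustration with hypothetical data: a frequency-based "model"
# trained on skewed examples reproduces the skew in its predictions.
from collections import Counter

# Imagine scraped text where "nurse" co-occurs with "she" far more
# often than with "he" -- a bias already baked into the data.
training_pairs = [("nurse", "she")] * 90 + [("nurse", "he")] * 10

def predict_pronoun(word: str, pairs) -> str:
    """Pick the pronoun most often seen with `word` in the data."""
    counts = Counter(p for w, p in pairs if w == word)
    return counts.most_common(1)[0][0]

print(predict_pronoun("nurse", training_pairs))  # -> "she"
# The model has no opinion of its own; it amplifies the most frequent
# pattern, turning a statistical skew into an effectively hard rule.
```

Real deep-learning systems are vastly more complex, but the mechanism Keay describes is the same: the model faithfully learns whatever regularities, including stereotypes, the data contains.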

It’s a lot harder to change a robot than it is to change the AI, to change a voice. If our robots have taken on an appearance that is feminine or child-like -- because that’s what people respond to and seem to want -- then it becomes difficult to change. 

ZZ: Can you give an example of bias in AI? 

AK: I’m seeing a very, very strong gender bias emerging in our voice assistants. Every voice assistant is being given a female voice, because people respond well to a female voice in a service role. The worst part is that it’s not hard to change a voice, but once we start designing around those characters and embodying them in robots, it’s going to be much harder to change. We’re going to be embodying the bias.

What worries me is that people find the female voice more acceptable as an assistant, and they find the male voice more acceptable as an instructor.

Which means, generally, that we respond better to having women help us and we respond better to having men tell us what to do. That perpetuates the “women are nurses, men are doctors” kind of scenarios that we thought we had grown out of. 

ZZ: What is the industry doing about this? 

AK: The industry is only just waking up to it, as smartphone voice assistants and voice assistants in home devices become common. There are groups of people looking at this, in Europe for example. But when I suggested to a group of people in the US last night that maybe this is an area where we need regulation, they didn’t think that was going to be actionable at all.

But I personally think that the answer might be to have regulations requiring all artificial voices and artificial agents to be non-gendered. Some people will argue that this is not right, because there will be occasions when gender is appropriate, but I think the stereotyping will outweigh any appropriateness of having gendered agents.

ZZ: It’s interesting that you mentioned regulation and that there was definite pushback on the concept. We saw what happens with unregulated technology when social media became an arena for political manipulation. How do we keep the same thing from happening with AI?

AK: Precisely. It’s very telling that these questions are being asked by a European and laughed at by a gathering of Americans. That doesn’t mean they aren’t worried, though. But the concept of regulation in America generally comes with very negative overtones. For example, the FAA took more than seven years to come up with appropriate regulations for the operation of commercial drones, which killed the commercial drone industry in America. You can see why there is skepticism about the role of regulation.

But I think we can see -- looking at Facebook -- what happens when you think that a technology is neutral and isn't going to have an impact on society.

Europe has a more cautious approach to technologies and the way they interact both with the individual and the social unit. Maybe this will give us a chance to see two different ways of growing technologies. Maybe people will choose the approach that is clearly delivering better results. 
