
Kate Darling: Social Robotics | Lex Fridman Podcast #98


link |
00:00:00.000
The following is a conversation with Kate Darling, a researcher at MIT,
link |
00:00:04.480
interested in social robotics, robot ethics, and generally how technology intersects with society.
link |
00:00:11.040
She explores the emotional connection between human beings and lifelike machines,
link |
00:00:15.680
which for me is one of the most exciting topics in all of artificial intelligence.
link |
00:00:21.360
As she writes in her bio, she is a caretaker of several domestic robots,
link |
00:00:26.240
including her Pleo dinosaur robots named Yochai, Peter, and Mr. Spaghetti.
link |
00:00:33.600
She is one of the funniest and brightest minds I've ever had the fortune to talk to.
link |
00:00:37.840
This conversation was recorded recently, but before the outbreak of the pandemic.
link |
00:00:42.240
For everyone feeling the burden of this crisis, I'm sending love your way.
link |
00:00:46.720
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
link |
00:00:51.280
review it with five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter
link |
00:00:56.800
at Lex Fridman, spelled F R I D M A N. As usual, I'll do a few minutes of ads now and never any
link |
00:01:03.440
ads in the middle that can break the flow of the conversation. I hope that works for you and
link |
00:01:08.000
doesn't hurt the listening experience. Quick summary of the ads. Two sponsors,
link |
00:01:13.760
Masterclass and ExpressVPN. Please consider supporting the podcast by signing up to
link |
00:01:19.040
Masterclass at masterclass.com slash Lex and getting ExpressVPN at expressvpn.com slash Lex
link |
00:01:27.120
Pod. This show is sponsored by Masterclass. Sign up at masterclass.com slash Lex to get a discount
link |
00:01:35.440
and to support this podcast. When I first heard about Masterclass, I thought it was too good to
link |
00:01:40.720
be true. For $180 a year, you get an all access pass to watch courses from, to list some of my
link |
00:01:47.680
favorites. Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and
link |
00:01:53.520
communication, Will Wright, creator of SimCity and Sims, love those games, on game design,
link |
00:02:00.240
Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more.
link |
00:02:07.680
Chris Hadfield explaining how rockets work and the experience of being launched into space alone
link |
00:02:12.720
is worth the money. By the way, you can watch it on basically any device. Once again,
link |
00:02:18.960
sign up on masterclass.com slash Lex to get a discount and to support this podcast.
link |
00:02:25.040
This show is sponsored by ExpressVPN. Get it at expressvpn.com slash Lex Pod to get a discount
link |
00:02:33.120
and to support this podcast. I've been using ExpressVPN for many years. I love it. It's easy
link |
00:02:39.600
to use, press the big power on button, and your privacy is protected. And, if you like, you can
link |
00:02:45.840
make it look like your location is anywhere else in the world. I might be in Boston now, but I can
link |
00:02:50.960
make it look like I'm in New York, London, Paris, or anywhere else. This has a large number of
link |
00:02:56.240
obvious benefits. Certainly, it allows you to access international versions of streaming websites
link |
00:03:01.520
like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I
link |
00:03:08.640
use it on Linux, shout out to Ubuntu 20.04, Windows, Android, but it's available everywhere else too.
link |
00:03:17.760
Once again, get it at expressvpn.com slash Lex Pod to get a discount and to support this podcast.
link |
00:03:26.240
And now, here's my conversation with Kate Darling.
link |
00:03:31.040
You co-taught robot ethics at Harvard. What are some ethical issues that arise
link |
00:03:35.920
in the world with robots?
link |
00:03:39.840
Yeah, that was a reading group that I did when I, like, at the very beginning,
link |
00:03:44.400
first became interested in this topic. So, I think if I taught that class today,
link |
00:03:48.800
it would look very, very different. Robot ethics, it sounds very science fictiony,
link |
00:03:54.800
it especially did back then, but I think that some of the issues that people in robot ethics are
link |
00:04:01.840
concerned with are just around the ethical use of robotic technology in general. So, for example,
link |
00:04:06.880
responsibility for harm, automated weapon systems, things like privacy and data security,
link |
00:04:11.760
things like, you know, automation and labor markets. And then personally, I'm really
link |
00:04:19.200
interested in some of the social issues that come out of our social relationships with robots.
link |
00:04:23.760
One on one relationship with robots.
link |
00:04:25.920
Yeah.
link |
00:04:26.640
I think most of the stuff we have to talk about is like one on one social stuff. That's what I
link |
00:04:30.160
love. I think that's what you love as well and are expert in. But at a societal level,
link |
00:04:35.200
there's like, there's a presidential candidate now, Andrew Yang running,
link |
00:04:41.360
concerned about automation and robots and AI in general taking away jobs. He has a proposal of UBI,
link |
00:04:48.640
universal basic income, where everybody gets 1,000 bucks as a way to sort of save you if you lose
link |
00:04:55.440
your job from automation to allow you time to discover what it is that you would like to or
link |
00:05:02.960
even love to do.
link |
00:05:04.560
Yes. So I lived in Switzerland for 20 years and universal basic income has been more of a topic
link |
00:05:12.160
there separate from the whole robots and jobs issue. So it's so interesting to me to see kind
link |
00:05:19.360
of these Silicon Valley people latch onto this concept that came from a very kind of
link |
00:05:26.560
left wing socialist, kind of a different place in Europe. But on the automation labor markets
link |
00:05:37.120
topic, I think that it's very, so sometimes in those conversations, I think people overestimate
link |
00:05:44.720
where robotic technology is right now. And we also have this fallacy of constantly comparing robots
link |
00:05:51.280
to humans and thinking of this as a one to one replacement of jobs. So even like Bill Gates a few
link |
00:05:57.680
years ago said something about, maybe we should have a system that taxes robots for taking people's
link |
00:06:03.920
jobs. And it just, I mean, I'm sure that was taken out of context, he's a really smart guy,
link |
00:06:10.480
but that sounds to me like kind of viewing it as a one to one replacement versus viewing this
link |
00:06:15.920
technology as kind of a supplemental tool that of course is going to shake up a lot of stuff.
link |
00:06:21.520
It's going to change the job landscape, but I don't see, you know, robots taking all the
link |
00:06:27.440
jobs in the next 20 years. That's just not how it's going to work.
link |
00:06:30.800
Right. So maybe drifting into the land of more personal relationships with robots and
link |
00:06:36.240
interaction and so on. I've got to warn you, I may ask some silly philosophical questions.
link |
00:06:43.280
I apologize.
link |
00:06:43.920
Oh, please do.
link |
00:06:45.040
Okay. Do you think humans will abuse robots in their interactions? So you've had a lot of,
link |
00:06:52.560
and we'll talk about it sort of anthropomorphization and this intricate dance,
link |
00:07:00.640
emotional dance between human and robot, but there seems to be also a darker side where people, when
link |
00:07:06.320
they treat the other as servants, especially, they can be a little bit abusive or a lot abusive.
link |
00:07:13.520
Do you think about that? Do you worry about that?
link |
00:07:16.400
Yeah, I do think about that. So, I mean, one of my main interests is the fact that people
link |
00:07:22.960
subconsciously treat robots like living things, even though they know that they're interacting
link |
00:07:28.000
with a machine and what it means in that context to behave violently. I don't know if you could say
link |
00:07:35.200
abuse because you're not actually abusing the inner mind of the robot. The robot doesn't have
link |
00:07:42.000
any feelings.
link |
00:07:42.640
As far as you know.
link |
00:07:44.000
Well, yeah. It also depends on how we define feelings and consciousness. But I think that's
link |
00:07:50.400
another area where people kind of overestimate where we currently are with the technology.
link |
00:07:54.080
Right.
link |
00:07:54.320
The robots are not even as smart as insects right now. And so I'm not worried about abuse
link |
00:08:00.320
in that sense. But it is interesting to think about what does people's behavior towards these
link |
00:08:05.840
things mean for our own behavior? Is it desensitizing to people to be verbally abusive
link |
00:08:13.680
to a robot or even physically abusive? And we don't know.
link |
00:08:17.360
Right. It's a similar connection from like if you play violent video games, what connection does
link |
00:08:22.400
that have to desensitization to violence? I haven't read literature on that. I wonder about that.
link |
00:08:32.080
Because everything I've heard, people don't seem to any longer be so worried about violent video
link |
00:08:37.520
games.
link |
00:08:38.080
Correct. The research on it is, it's a difficult thing to research. So it's sort of inconclusive,
link |
00:08:46.720
but we seem to have gotten the sense, at least as a society, that people can compartmentalize. When
link |
00:08:53.680
it's something on a screen and you're shooting a bunch of characters or running over people with
link |
00:08:58.320
your car, that doesn't necessarily translate to you doing that in real life. We do, however,
link |
00:09:04.160
have some concerns about children playing violent video games. And so we do restrict it there.
link |
00:09:09.680
I'm not sure that's based on any real evidence either, but it's just the way that we've kind of
link |
00:09:14.400
decided we want to be a little more cautious there. And the reason I think robots are a little
link |
00:09:19.040
bit different is because there is a lot of research showing that we respond differently
link |
00:09:23.280
to something in our physical space than something on a screen. We will treat it much more viscerally,
link |
00:09:29.280
much more like a physical actor. And so it's totally possible that this is not a problem.
link |
00:09:38.160
And it's the same thing as violence in video games. Maybe restrict it with kids to be safe,
link |
00:09:43.280
but adults can do what they want. But we just need to ask the question again because we don't
link |
00:09:48.560
have any evidence at all yet. Maybe there's an intermediate place too. I did my research
link |
00:09:55.840
on Twitter. By research, I mean scrolling through your Twitter feed.
link |
00:10:00.800
You mentioned that you were going at some point to an animal law conference.
link |
00:10:04.560
So I have to ask, do you think there's something that we can learn
link |
00:10:07.840
from animal rights that guides our thinking about robots?
link |
00:10:12.320
Oh, I think there is so much to learn from that. I'm actually writing a book on it right now. That's
link |
00:10:17.120
why I'm going to this conference. So I'm writing a book that looks at the history of animal
link |
00:10:22.400
domestication and how we've used animals for work, for weaponry, for companionship.
link |
00:10:27.280
And one of the things the book tries to do is move away from this fallacy that I talked about
link |
00:10:33.920
of comparing robots and humans because I don't think that's the right analogy. But I do think
link |
00:10:39.680
that on a social level, even on a social level, there's so much that we can learn from looking
link |
00:10:43.920
at that history because throughout history, we've treated most animals like tools, like products.
link |
00:10:49.360
And then some of them we've treated differently and we're starting to see people treat robots in
link |
00:10:53.200
really similar ways. So I think it's a really helpful predictor to how we're going to interact
link |
00:10:57.920
with the robots. Do you think we'll look back at this time like 100 years from now and see
link |
00:11:05.440
what we do to animals as like similar to the way we view like the Holocaust in World War II?
link |
00:11:13.360
That's a great question. I mean, I hope so. I am not convinced that we will. But I often wonder,
link |
00:11:22.480
you know, what are my grandkids going to view as, you know, abhorrent that my generation did
link |
00:11:28.480
that they would never do? And I'm like, well, what's the big deal? You know, it's a fun question
link |
00:11:33.600
to ask yourself. It always seems that there's atrocities that we discover later. So the things
link |
00:11:41.200
that at the time people didn't see as, you know, you look at everything from slavery to any kinds
link |
00:11:49.600
of abuse throughout history to the kind of insane wars that were happening to the way war was carried
link |
00:11:56.480
out and rape and the kind of violence that was happening during war that we now, you know,
link |
00:12:05.360
we see as atrocities, but at the time perhaps didn't as much. And so now I have this intuition
link |
00:12:12.880
that I have this worry, maybe you're going to probably criticize me, but I do anthropomorphize
link |
00:12:20.800
robots. I don't see a fundamental philosophical difference between a robot and a human being
link |
00:12:31.600
in terms of once the capabilities are matched. So the fact that we're really far away doesn't,
link |
00:12:39.200
in terms of capabilities and then that from natural language processing, understanding
link |
00:12:43.600
and generation to just reasoning and all that stuff. I think once you solve it, I see though,
link |
00:12:48.800
this is a very gray area and I don't feel comfortable with the kind of abuse that people
link |
00:12:53.920
throw at robots. Subtle, but I can see it becoming, I can see basically a civil rights movement for
link |
00:13:01.120
robots in the future. Do you think, let me put it in the form of a question, do you think robots
link |
00:13:07.040
should have some kinds of rights? Well, it's interesting because I came at this originally
link |
00:13:13.520
from your perspective. I was like, you know what, there's no fundamental difference between
link |
00:13:19.040
technology and like human consciousness. Like we, we can probably recreate anything. We just don't
link |
00:13:24.800
know how yet. And so there's no reason not to give machines the same rights that we have once,
link |
00:13:32.640
like you say, they're kind of on an equivalent level. But I realized that that is kind of a
link |
00:13:38.080
far future question. I still think we should talk about it because I think it's really interesting.
link |
00:13:41.600
But I realized that it's actually, we might need to ask the robot rights question even sooner than
link |
00:13:47.680
that while the machines are still, quote unquote, really dumb and not on our level because of the
link |
00:13:56.160
way that we perceive them. And I think one of the lessons we learned from looking at the history of
link |
00:14:00.560
animal rights and one of the reasons we may not get to a place in a hundred years where we view
link |
00:14:05.360
it as wrong to, you know, eat or otherwise, you know, use animals for our own purposes is because
link |
00:14:11.440
historically we've always protected those things that we relate to the most. So one example is
link |
00:14:17.920
whales. No one gave a shit about the whales. Am I allowed to swear? Yeah, no one gave a shit about
link |
00:14:26.640
freedom. Yeah, no one gave a shit about the whales until someone recorded them singing. And suddenly
link |
00:14:31.200
people were like, oh, this is a beautiful creature and now we need to save the whales. And that
link |
00:14:35.840
started the whole Save the Whales movement in the 70s. So as much as I, and I think a lot of people
link |
00:14:45.360
want to believe that we care about consistent biological criteria, that's not historically
link |
00:14:52.400
how we formed our alliances. Yeah, so what, why do we, why do we believe that all humans are created
link |
00:15:00.880
equal? Killing of a human being, no matter who the human being is, that's what I meant by equality,
link |
00:15:07.120
is bad. And then, because I'm connecting that to robots and I'm wondering whether mortality,
link |
00:15:14.480
so the killing act is what makes something, that's the fundamental first right. So I am currently
link |
00:15:21.200
allowed to take a shotgun and shoot a Roomba. I think, I'm not sure, but I'm pretty sure it's not
link |
00:15:29.280
considered murder, right. Or even shutting them off. So that's, that's where the line appears to
link |
00:15:36.640
be, right? Is this mortality a critical thing here? I think here again, like the animal analogy is
link |
00:15:44.080
really useful because you're also allowed to shoot your dog, but people won't be happy about it.
link |
00:15:49.440
So we give, we do give animals certain protections from like, you're not allowed to torture your dog
link |
00:15:56.960
and set it on fire, at least in most states and countries, but you're still allowed to treat it
link |
00:16:04.160
like a piece of property in a lot of other ways. And so we draw these arbitrary lines all the time.
link |
00:16:11.920
And, you know, there's a lot of philosophical thought on why viewing humans as something unique
link |
00:16:22.320
is not, is just speciesism and not, you know, based on any criteria that would actually justify
link |
00:16:31.040
making a difference between us and other species. Do you think in general people, most people are
link |
00:16:38.640
good? Or do you think there's evil and good in all of us that's revealed through
link |
00:16:49.040
our circumstances and through our interactions? I like to view myself as a person who like believes
link |
00:16:55.760
that there's no absolute evil and good and that everything is, you know, gray. But I do think it's
link |
00:17:03.600
an interesting question. Like when I see people being violent towards robotic objects, you said
link |
00:17:08.080
that bothers you because the robots might someday, you know, be smart. And is that why?
link |
00:17:15.600
Well, it bothers me because it reveals, so I personally believe, because I've studied way too much,
link |
00:17:21.280
so I'm Jewish. I studied the Holocaust and World War II exceptionally well. I personally believe
link |
00:17:26.640
that most of us have evil in us. That what bothers me is the abuse of robots reveals the evil in
link |
00:17:35.440
human beings. And it's, I think it doesn't just bother me. It's, I think it's an opportunity for
link |
00:17:44.320
roboticists to help people find the better sides, the better angels of their nature, right? That
link |
00:17:53.920
abuse isn't just a fun side thing. That's you revealing a dark part that you shouldn't,
link |
00:17:59.600
that should be hidden deep inside. Yeah. I mean, you laugh, but some of our research does indicate
link |
00:18:07.360
that maybe people's behavior towards robots reveals something about their tendencies for
link |
00:18:12.400
empathy generally, even using very simple robots that we have today that like clearly don't feel
link |
00:18:16.720
anything. So, you know, Westworld is maybe, you know, not so far off and it's like, you know,
link |
00:18:27.360
depicting the bad characters as willing to go around and shoot and rape the robots and the good
link |
00:18:32.080
characters as not wanting to do that. Even without assuming that the robots have consciousness.
link |
00:18:37.520
So there's an opportunity, it's interesting. There's an opportunity to almost practice empathy
link |
00:18:42.080
on robots, an opportunity to practice empathy.
link |
00:18:47.840
I agree with you. Some people would say, why are we practicing empathy on robots instead of,
link |
00:18:54.320
you know, on our fellow humans or on animals that are actually alive and experience the world?
link |
00:18:59.920
And I don't agree with them because I don't think empathy is a zero sum game. And I do
link |
00:19:03.840
think that it's a muscle that you can train and that we should be doing that. But some people
link |
00:19:09.200
disagree. So the interesting thing, you've heard, you know, raising kids sort of asking them or
link |
00:19:20.400
telling them to be nice to the smart speakers, to Alexa and so on, saying please and so on during
link |
00:19:28.000
the requests. I don't know if, I'm a huge fan of that idea because yeah, that's towards the idea of
link |
00:19:34.080
practicing empathy. I feel like politeness, I'm always polite to all the, all the systems that we
link |
00:19:39.120
build, especially anything that's speech interaction based. Like when we talk to the car, I'll always
link |
00:19:44.480
have a pretty good detector for please. I feel like there should be room for encouraging empathy
link |
00:19:51.280
in those interactions. Yeah. Okay. So I agree with you. So I'm going to play devil's advocate. Sure.
link |
00:19:58.400
So what is the, what is the devil's advocate argument there? The devil's advocate argument
link |
00:20:02.320
is that if you are the type of person who has abusive tendencies or needs to get some sort of
link |
00:20:08.560
like behavior like that out, needs an outlet for it, that it's great to have a robot that you can
link |
00:20:14.640
scream at so that you're not screaming at a person. And we just don't know whether that's true,
link |
00:20:19.760
whether it's an outlet for people or whether it just kind of, as my friend once said,
link |
00:20:23.520
trains their cruelty muscles and makes them more cruel in other situations.
link |
00:20:26.880
Oh boy. Yeah. And that expands to other topics, which I, I don't know, you know, there's a,
link |
00:20:36.320
there's a topic of sex, which is a weird one that I tend to avoid from a robotics perspective.
link |
00:20:42.960
And most of the general public doesn't, they talk about sex robots and so on. Is that an area you've
link |
00:20:50.080
touched at all research wise? Like the way, cause that's what people imagine sort of any kind of
link |
00:20:57.920
interaction between human and robot that shows any kind of compassion. They immediately think
link |
00:21:04.160
from a product perspective in the near term is sort of expansion of what pornography is and all
link |
00:21:10.640
that kind of stuff. Yeah. Do researchers touch this? Well that's kind of you to like characterize
link |
00:21:16.000
it as though people are thinking rationally about product. I feel like sex robots are just such a
link |
00:21:20.880
like titillating news hook for people that they become like the story. And it's really hard to
link |
00:21:27.760
not get fatigued by it when you're in the space because you tell someone you do human robot
link |
00:21:32.480
interaction. Of course, the first thing they want to talk about is sex robots. Yeah, it happens a
link |
00:21:37.040
lot. And it's, it's unfortunate that I'm so fatigued by it because I do think that there
link |
00:21:42.320
are some interesting questions that become salient when you talk about, you know, sex with robots.
link |
00:21:48.880
See what I think would happen when people get sex robots, like if it's some guys, okay, guys get
link |
00:21:54.240
female sex robots. What I think there's an opportunity for is an actual, like, like they'll
link |
00:22:03.360
actually interact. What I'm trying to say is that, outside of the sex, the interaction would be the most
link |
00:22:09.440
fulfilling part. It's like the folks, there's movies on this, right,
link |
00:22:15.120
who pay a prostitute and then end up just talking to her the whole time. So I feel like
link |
00:22:21.280
there's an opportunity. It's like most guys and people in general joke about this, the sex act,
link |
00:22:27.360
but really people are just lonely inside and they're looking for connection. Many of them.
link |
00:22:32.400
And it'd be unfortunate if that connection is established through the sex industry. I feel like
link |
00:22:40.880
it should go into the front door of like, people are lonely and they want a connection.
link |
00:22:46.480
Well, I also feel like we should kind of, you know, destigmatize the sex industry because,
link |
00:22:54.000
you know, even prostitution, like there are prostitutes that specialize in disabled people
link |
00:22:59.440
who don't have the same kind of opportunities to explore their sexuality. So it's, I feel like we
link |
00:23:07.920
should like destigmatize all of that generally. But yeah, that connection and that loneliness is
link |
00:23:13.200
an interesting topic that you bring up because while people are constantly worried about robots
link |
00:23:19.360
replacing humans and oh, if people get sex robots and the sex is really good, then they won't want
link |
00:23:23.840
their, you know, partner or whatever. But we rarely talk about robots actually filling a hole where
link |
00:23:29.680
there's nothing and what benefit that can provide to people. Yeah, I think that's an exciting,
link |
00:23:37.120
there's a whole giant, there's a giant hole that's unfillable by humans. It's asking too much of
link |
00:23:43.120
people, your friends and people you're in a relationship with and your family, to fill that
link |
00:23:47.280
hole. There's, because, you know, it's exploring the full, like, you know, exploring the full
link |
00:23:54.640
complexity and richness of who you are. Like who are you really? Like people, your family doesn't
link |
00:24:02.560
have enough patience to really sit there and listen to who are you really. And I feel like
link |
00:24:06.800
there's an opportunity to really make that connection with robots. I just feel like we're
link |
00:24:11.760
complex as humans and we're capable of lots of different types of relationships. So whether that's,
link |
00:24:18.720
you know, with family members, with friends, with our pets, or with robots, I feel like
link |
00:24:23.360
there's space for all of that and all of that can provide value in a different way.
link |
00:24:29.040
Yeah, absolutely. So I'm jumping around. Currently most of my work is in autonomous vehicles.
link |
00:24:35.520
So the most popular topic among the general public is the trolley problem. So
link |
00:24:45.760
most roboticists kind of hate this question, but what do you think of this thought experiment?
link |
00:24:52.720
What do you think we can learn from it outside of the silliness of
link |
00:24:56.000
the actual application of it to the autonomous vehicle? I think it's still an interesting
link |
00:25:00.320
ethical question. And that in itself, just like much of the interaction with robots
link |
00:25:06.880
has something to teach us. But from your perspective, do you think there's anything there?
link |
00:25:10.960
Well, I think you're right that it does have something to teach us.
link |
00:25:14.320
But I think what people are forgetting in all of these conversations is the origins of the trolley
link |
00:25:19.840
problem and what it was meant to show us, which is that there is no right answer. And that sometimes
link |
00:25:25.600
our moral intuition that comes to us instinctively is not actually what we should follow if we care
link |
00:25:34.240
about creating systematic rules that apply to everyone. So I think that as a philosophical
link |
00:25:40.800
concept, it could teach us at least that, but that's not how people are using it right now.
link |
00:25:48.160
These are friends of mine and I love them dearly and their project adds a lot of value. But if
link |
00:25:54.000
we're viewing the moral machine project as what we can learn from the trolley problems, the moral
link |
00:25:59.680
machine is, I'm sure you're familiar, it's this website that you can go to and it gives you
link |
00:26:04.720
different scenarios like, oh, you're in a car, you can decide to run over these two people or
link |
00:26:10.640
this child. What do you choose? Do you choose the homeless person? Do you choose the person who's
link |
00:26:15.280
jaywalking? And so it pits these like moral choices against each other and then tries to
link |
00:26:21.520
crowdsource the quote unquote correct answer, which is really interesting and I think valuable data,
link |
00:26:29.040
but I don't think that's what we should base our rules in autonomous vehicles on because
link |
00:26:34.160
it is exactly what the trolley problem is trying to show, which is your first instinct might not
link |
00:26:39.840
be the correct one if you look at rules that then have to apply to everyone and everything.
link |
00:26:45.680
So how do we encode these ethical choices in interaction with robots? For example,
link |
00:26:50.800
autonomous vehicles, there is a serious ethical question of do I protect myself?
link |
00:26:58.960
Does my life have higher priority than the life of another human being? Because that changes
link |
00:27:05.280
certain control decisions that you make. So if your life matters more than other human beings,
link |
00:27:11.600
then you'd be more likely to swerve out of your current lane. So currently automated emergency
link |
00:27:16.960
braking systems that just brake, they don't ever swerve. So swerving into oncoming traffic or
link |
00:27:25.520
no, just in a different lane can cause significant harm to others, but it's possible that it causes
link |
00:27:31.840
less harm to you. So that's a difficult ethical question. Do you have a hope that
link |
00:27:41.680
like the trolley problem is not supposed to have a right answer, right? Do you hope that
link |
00:27:46.480
when we have robots at the table, we'll be able to discover the right answer for some of these
link |
00:27:50.960
questions? Well, what's happening right now, I think, is this question that we're facing of
link |
00:27:58.480
what ethical rules should we be programming into the machines is revealing to us that
link |
00:28:03.600
our ethical rules are much less programmable than we probably thought before. And so that's a really
link |
00:28:11.280
valuable insight, I think, that these issues are very complicated and that in a lot of these cases,
link |
00:28:19.360
it's you can't really make that call, like not even as a legislator. And so what's going to
link |
00:28:25.200
happen in reality, I think, is that car manufacturers are just going to try and avoid
link |
00:28:31.440
the problem and avoid liability in any way possible. Or like they're going to always protect
link |
00:28:36.000
the driver because who's going to buy a car if it's programmed to kill someone?
link |
00:28:40.320
Yeah.
link |
00:28:41.520
Kill you instead of someone else. So that's what's going to happen in reality.
link |
00:28:47.040
But what did you mean by like once we have robots at the table, like do you mean when they can help
link |
00:28:51.680
us figure out what to do?
link |
00:28:54.720
No, I mean when robots are part of the ethical decisions. So no, no, no, not they help us. Well.
link |
00:29:04.880
Oh, you mean when it's like, should I run over a robot or a person?
link |
00:29:08.560
Right. That kind of thing. So what, no, no, no. So when you, it's exactly what you said, which is
link |
00:29:15.760
when you have to encode the ethics into an algorithm, you start to try to really understand
link |
00:29:22.640
what are the fundamentals of the decision making process you make to make certain decisions.
link |
00:29:28.000
Should you, like capital punishment, should you take a person's life or not to punish them for
link |
00:29:34.960
a certain crime? Sort of, you can use, you can develop an algorithm to make that decision, right?
link |
00:29:42.480
And the hope is that the act of making that algorithm, however you make it, so there's a few
link |
00:29:49.680
approaches, will help us actually get to the core of what is right and what is wrong under our current
link |
00:29:59.600
societal standards.
link |
00:30:00.720
But isn't that what's happening right now? And we're realizing that we don't have a consensus on
link |
00:30:05.600
what's right and wrong.
link |
00:30:06.560
You mean in politics in general?
link |
00:30:08.240
Well, like when we're thinking about these trolley problems and autonomous vehicles and how to
link |
00:30:12.880
program ethics into machines and how to, you know, make AI algorithms fair and equitable, we're
link |
00:30:22.320
realizing that this is so complicated and it's complicated in part because there doesn't seem
link |
00:30:28.080
to be a one right answer in any of these cases.
link |
00:30:30.640
Do you have a hope for, like one of the ideas of the moral machine is that crowdsourcing can help
link |
00:30:35.680
us converge towards, like democracy can help us converge towards the right answer.
link |
00:30:42.080
Do you have a hope for crowdsourcing?
link |
00:30:43.920
Well, yes and no. So I think that in general, you know, I have a legal background and
link |
00:30:49.520
policymaking is often about trying to suss out, you know, what rules does this particular society
link |
00:30:55.440
agree on and then trying to codify that. So the law makes these choices all the time and then
link |
00:31:00.000
tries to adapt according to changing culture. But in the case of the moral machine project,
link |
00:31:06.080
I don't think that people's choices on that website necessarily reflect what laws they would
link |
00:31:12.240
want in place. I think you would have to ask them a series of different questions in order to get
link |
00:31:18.480
at what their consensus is.
link |
00:31:20.720
I agree, but that has to do more with the artificial nature of, I mean, they're showing
link |
00:31:25.680
some cute icons on a screen. That's almost, so if you, for example, we do a lot of work in virtual
link |
00:31:32.800
reality. And so if you put those same people into virtual reality where they have to make that
link |
00:31:38.720
decision, their decision would be very different, I think.
link |
00:31:42.720
I agree with that. That's one aspect. And the other aspect is it's a different question to ask
link |
00:31:47.840
someone, would you run over the homeless person or the doctor in this scene? Or do you want cars to
link |
00:31:55.360
always run over the homeless people?
link |
00:31:57.120
I think, yeah. So let's talk about anthropomorphism. To me, anthropomorphism, if I can
link |
00:32:04.320
pronounce it correctly, is one of the most fascinating phenomena from like both the
link |
00:32:09.760
engineering perspective and the psychology perspective, machine learning perspective,
link |
00:32:14.480
and robotics in general. Can you step back and define anthropomorphism, how you see it in
link |
00:32:23.280
general terms in your work?
link |
00:32:25.360
Sure. So anthropomorphism is this tendency that we have to project human like traits and
link |
00:32:32.160
behaviors and qualities onto nonhumans. And we often see it with animals, like we'll project
link |
00:32:38.800
emotions on animals that may or may not actually be there. We often see that we're trying to
link |
00:32:43.760
interpret things according to our own behavior when we get it wrong. But we do it with more
link |
00:32:49.120
than just animals. We do it with objects, you know, teddy bears. We see, you know, faces in
link |
00:32:53.680
the headlights of cars. And we do it with robots very, very extremely.
link |
00:32:59.200
You think that can be engineered? Can that be used to enrich an interaction between an AI
link |
00:33:05.200
system and the human?
link |
00:33:07.120
Oh, yeah, for sure.
link |
00:33:08.480
And do you see it being used that way often? Like, I don't, I haven't seen, whether it's
link |
00:33:17.600
Alexa or any of the smart speaker systems, often trying to optimize for the anthropomorphization.
link |
00:33:26.560
You said you haven't seen?
link |
00:33:27.920
I haven't seen. They keep moving away from that. I think they're afraid of that.
link |
00:33:32.400
They actually, so I only recently found out, but did you know that Amazon has like a whole
link |
00:33:38.080
team of people who are just there to work on Alexa's personality?
link |
00:33:44.480
So I know that depends on what you mean by personality. I didn't know that exact thing.
link |
00:33:50.480
But I do know that how the voice is perceived is worked on a lot, whether it's a pleasant
link |
00:33:59.920
feeling about the voice, but that has to do more with the texture of the sound and the
link |
00:34:04.080
audio and so on. But personality is more like...
link |
00:34:08.640
It's like, what's her favorite beer when you ask her? And the personality team is different
link |
00:34:13.120
for every country too. Like there's a different personality for German Alexa than there is
link |
00:34:17.520
for American Alexa. That said, I think it's very difficult to, you know, really,
link |
00:34:26.800
really harness the anthropomorphism with these voice assistants because the voice interface
link |
00:34:34.000
is still very primitive. And I think that in order to get people to really suspend their
link |
00:34:40.000
disbelief and treat a robot like it's alive, less is sometimes more. You want them to project
link |
00:34:47.520
onto the robot and you want the robot to not disappoint their expectations for how it's
link |
00:34:51.040
going to answer or behave in order for them to have this kind of illusion. And with Alexa,
link |
00:34:57.920
I don't think we're there yet, or Siri, that they're just not good at that. But if you
link |
00:35:03.280
look at some of the more animal like robots, like the baby seal that they use with the
link |
00:35:08.720
dementia patients, it's a much more simple design. It doesn't try to talk to you. It
link |
00:35:12.960
can't disappoint you in that way. It just makes little movements and sounds and people
link |
00:35:17.760
stroke it and it responds to their touch. And that is like a very effective way to harness
link |
00:35:23.280
people's tendency to kind of treat the robot like a living thing.
link |
00:35:28.880
Yeah. So you bring up some interesting ideas in your paper chapter, I guess,
link |
00:35:35.520
Anthropomorphic Framing in Human-Robot Interaction, that I read the last time we scheduled this.
link |
00:35:40.400
Oh my God, that was a long time ago.
link |
00:35:42.160
Yeah. What are some good and bad cases of anthropomorphism in your perspective?
link |
00:35:49.280
Like when is the good ones and bad?
link |
00:35:52.000
Well, I should start by saying that, you know, while design can really enhance the
link |
00:35:56.400
anthropomorphism, it doesn't take a lot to get people to treat a robot like it's alive. Like
link |
00:36:01.360
people will, over 85% of Roombas have a name, which I'm, I don't know the numbers for your
link |
00:36:07.360
regular type of vacuum cleaner, but they're not that high, right? So people will feel bad for the
link |
00:36:12.160
Roomba when it gets stuck, they'll send it in for repair and want to get the same one back. And
link |
00:36:15.840
that's, that one is not even designed to like make you do that. So I think that some of the cases
link |
00:36:23.280
where it's maybe a little bit concerning that anthropomorphism is happening is when you have
link |
00:36:28.560
something that's supposed to function like a tool and people are using it in the wrong way.
link |
00:36:32.000
And one of the concerns is military robots where, so gosh, 2000, like early 2000s, which is a long
link |
00:36:44.160
time ago, iRobot, the Roomba company, made this robot called the PackBot that was deployed in Iraq
link |
00:36:51.840
and Afghanistan with the bomb disposal units that were there. And the soldiers became very emotionally
link |
00:36:59.040
attached to the robots. And that's fine until a soldier risks his life to save a robot, which
link |
00:37:08.800
you really don't want. But they were treating them like pets. Like they would name them,
link |
00:37:12.560
they would give them funerals with gun salutes, they would get really upset and traumatized when
link |
00:37:17.280
the robot got broken. So in situations where you want a robot to be a tool, in particular,
link |
00:37:23.760
when it's supposed to like do a dangerous job that you don't want a person doing,
link |
00:37:26.960
it can be hard when people get emotionally attached to it. That's maybe something that
link |
00:37:32.960
you would want to discourage. Another case for concern is maybe when companies try to
link |
00:37:39.840
leverage the emotional attachment to exploit people. So if it's something that's not in the
link |
00:37:45.520
consumer's interest, trying to like sell them products or services or exploit an emotional
link |
00:37:51.200
connection to keep them paying for a cloud service for a social robot or something like that might be,
link |
00:37:57.200
I think that's a little bit concerning as well.
link |
00:37:59.680
Yeah, the emotional manipulation, which probably happens behind the scenes now with some like
link |
00:38:04.720
social networks and so on, but making it more explicit. What's your favorite robot?
link |
00:38:12.000
Fictional or real?
link |
00:38:13.280
No, real. Real robot, which you have felt a connection with or not like, not anthropomorphic
link |
00:38:23.360
connection, but I mean like you sit back and say, damn, this is an impressive system.
link |
00:38:32.080
Wow. So two different robots. So the Pleo baby dinosaur robot that is no longer sold that
link |
00:38:38.960
came out in 2007, that one I was very impressed with. It was, but, but from an anthropomorphic
link |
00:38:45.440
perspective, I was impressed with how much I bonded with it, how much I like wanted to believe
link |
00:38:50.080
that it had this inner life.
link |
00:38:51.760
Can you describe Pleo, can you describe what it is? How big is it? What can it actually do?
link |
00:38:58.160
Yeah. Pleo is about the size of a small cat. It had a lot of like motors that gave it this kind
link |
00:39:06.400
of lifelike movement. It had things like touch sensors and an infrared camera. So it had all
link |
00:39:11.440
these like cool little technical features, even though it was a toy. And the thing that really
link |
00:39:18.800
struck me about it was that it, it could mimic pain and distress really well. So if you held
link |
00:39:24.160
it up by the tail, it had a tilt sensor that, you know, told it what direction it was facing
link |
00:39:28.240
and it would start to squirm and cry out. If you hit it too hard, it would start to cry.
link |
00:39:34.080
So it was very impressive in design.
link |
00:39:38.240
And what's the second robot that you were, you said there might've been two that you liked.
link |
00:39:43.680
Yeah. So the Boston Dynamics robots are just impressive feats of engineering.
link |
00:39:49.760
Have you met them in person?
link |
00:39:51.280
Yeah. I recently got a chance to go visit and I, you know, I was always one of those people who
link |
00:39:55.280
watched the videos and was like, this is super cool, but also it's a product video. Like,
link |
00:39:59.920
I don't know how many times that they had to shoot this to get it right.
link |
00:40:02.800
Yeah.
link |
00:40:03.360
But visiting them, I, you know, I'm pretty sure that I was very impressed. Let's put it that way.
link |
00:40:10.000
Yeah. And in terms of the control, I think that was a transformational moment for me
link |
00:40:15.520
when I met Spot Mini in person.
link |
00:40:17.840
Yeah.
link |
00:40:18.640
Because, okay, maybe this is a psychology experiment, but I anthropomorphized
link |
00:40:26.160
the crap out of it. So I immediately, it was like my best friend, right?
link |
00:40:30.880
I think it's really hard for anyone to watch Spot move and not feel like it has agency.
link |
00:40:35.760
Yeah. This movement, especially the arm on Spot Mini really obviously looks like a head.
link |
00:40:44.160
Yeah.
link |
00:40:44.400
They say, no, they didn't mean it that way, but it obviously looks exactly like that.
link |
00:40:51.440
And so it's almost impossible to not think of it as a, almost like the baby dinosaur,
link |
00:40:57.120
but slightly larger. And this movement of the, of course, the intelligence is,
link |
00:41:02.560
their whole idea is that it's not supposed to be intelligent. It's a platform on which you build
link |
00:41:08.480
higher intelligence. It's actually really, really dumb. It's just a basic movement platform.
link |
00:41:13.520
Yeah. But even dumb robots can, like, we can immediately respond to them in this visceral way.
link |
00:41:19.920
What are your thoughts about Sophia the robot? This kind of mix of some basic natural language
link |
00:41:26.640
processing and basically an art experiment.
link |
00:41:31.040
Yeah. An art experiment is a good way to characterize it. I'm much less impressed
link |
00:41:35.920
with Sophia than I am with Boston Dynamics.
link |
00:41:37.840
She said she likes you. She said she admires you.
link |
00:41:40.720
Yeah. She followed me on Twitter at some point. Yeah.
link |
00:41:44.160
She tweets about how much she likes you.
link |
00:41:45.680
So what does that mean? I have to be nice or?
link |
00:41:48.320
No, I don't know. I was emotionally manipulating you. No. How do you think of
link |
00:41:55.040
that? What I think of the whole thing that happened with Sophia is that quite a large number of people
link |
00:42:01.360
kind of immediately had a connection and thought that maybe we're far more advanced with robotics
link |
00:42:06.640
than we are or actually didn't even think much. I was surprised how little people cared
link |
00:42:13.680
that they kind of assumed that, well, of course AI can do this.
link |
00:42:19.200
Yeah.
link |
00:42:19.440
And then if they assume that, I felt they should be more impressed.
link |
00:42:26.960
Well, people really overestimate where we are. And so when something, I don't even think Sophia
link |
00:42:33.200
was very impressive or is very impressive. I think she's kind of a puppet, to be honest. But
link |
00:42:38.400
yeah, I think people are a little bit influenced by science fiction and pop culture to
link |
00:42:43.120
think that we should be further along than we are.
link |
00:42:45.200
So what's your favorite robots in movies and fiction?
link |
00:42:48.400
WALL-E.
link |
00:42:49.680
WALL-E. What do you like about WALL-E? The humor, the cuteness, the perception control systems
link |
00:42:58.400
operating on WALL-E that make it all work? Just in general?
link |
00:43:02.960
The design of WALL-E the robot, I think that animators figured out, starting in the 1940s,
link |
00:43:10.880
how to create characters that don't look real, but look like something that's even better than real,
link |
00:43:19.040
that we really respond to and think is really cute. They figured out how to make them move
link |
00:43:23.120
and look in the right way. And WALL-E is just such a great example of that.
link |
00:43:27.600
You think eyes, big eyes or big something that's kind of eyeish. So it's always playing on some
link |
00:43:35.040
aspect of the human face, right?
link |
00:43:36.960
Often. Yeah. So big eyes. Well, I think one of the first animations to really play with this was
link |
00:43:44.080
Bambi. And they weren't originally going to do that. They were originally trying to make the
link |
00:43:48.720
deer look as lifelike as possible. They brought deer into the studio and had a little zoo there
link |
00:43:53.280
so that the animators could work with them. And then at some point they were like,
link |
00:43:57.520
if we make really big eyes and a small nose and big cheeks, kind of more like a baby face,
link |
00:44:02.640
then people like it even better than if it looks real. Do you think the future of things like
link |
00:44:10.800
Alexa in the home has possibility to take advantage of that, to build on that, to create
link |
00:44:18.960
these systems that are better than real, that create a close human connection? I can pretty
link |
00:44:25.680
much guarantee you without having any knowledge that those companies are going to make these
link |
00:44:32.080
things. And companies are working on that design behind the scenes. I'm pretty sure.
link |
00:44:37.440
I totally disagree with you.
link |
00:44:38.960
Really?
link |
00:44:39.440
So that's what I'm interested in. I'd like to build such a company. I know
link |
00:44:43.200
a lot of those folks and they're afraid of that because how do you make money off of it?
link |
00:44:49.120
Well, but even just making Alexa look a little bit more interesting than just a cylinder
link |
00:44:54.560
would do so much.
link |
00:44:55.680
It's an interesting thought, but I don't think people from an Amazon perspective are looking
link |
00:45:02.240
for that kind of connection. They want you to be addicted to the services provided by Alexa,
link |
00:45:08.320
not to the device. So with the device itself, it's felt that you can lose a lot, because if you create a
link |
00:45:17.440
connection, it creates more opportunity for frustration, for negative stuff, than it does
link |
00:45:26.800
for positive stuff. That's the way they think about it, I think.
link |
00:45:29.920
That's interesting. Like I agree that it's very difficult to get right and you have to get it
link |
00:45:35.600
exactly right. Otherwise you wind up with Microsoft's Clippy.
link |
00:45:40.000
Okay, easy now. What's your problem with Clippy?
link |
00:45:43.360
You like Clippy? Is Clippy your friend?
link |
00:45:45.040
Yeah, I like Clippy. I just talked to, we just had this argument with
link |
00:45:51.680
Microsoft's CTO, and he said he's not bringing Clippy back. They're not bringing
link |
00:45:57.520
Clippy back, and that's very disappointing. I think Clippy was the greatest assistant
link |
00:46:05.600
we've ever built. It was a horrible attempt, of course, but it's the best we've ever done
link |
00:46:10.800
because it was a real attempt to have like an actual personality. I mean, obviously the
link |
00:46:17.760
technology was way not there at the time of being able to be a recommender system for assisting you
link |
00:46:25.040
in anything and typing in Word or any kind of other application, but still it was an attempt
link |
00:46:30.480
of personality that was legitimate, which I thought was brave.
link |
00:46:34.880
Yes, yes. Okay. You know, you've convinced me I'll be slightly less hard on Clippy.
link |
00:46:39.840
And I know I have like an army of people behind me who also miss Clippy.
link |
00:46:43.680
Really? I want to meet these people. Who are these people?
link |
00:46:47.200
It's the people who like to hate stuff when it's there and miss it when it's gone.
link |
00:46:55.280
So everyone.
link |
00:46:56.240
It's everyone. Exactly. All right. So Anki and Jibo, the two companies,
link |
00:47:04.880
the two amazing companies, the social robotics companies that have recently been closed down.
link |
00:47:10.080
Yes.
link |
00:47:12.160
Why do you think it's so hard to create a personal robotics company? So making a business
link |
00:47:17.840
out of essentially something that people would anthropomorphize, have a deep connection with.
link |
00:47:23.840
Why is it so hard to make it work? Is the business case not there or what is it?
link |
00:47:28.880
I think it's a number of different things. I don't think it's going to be this way forever.
link |
00:47:35.600
I think at this current point in time, it takes so much work to build something that only barely
link |
00:47:43.360
meets people's minimal expectations because of science fiction and pop culture giving people
link |
00:47:49.680
this idea that we should be further than we already are. Like when people think about a robot
link |
00:47:53.920
assistant in the home, they think about Rosie from the Jetsons or something like that. And
link |
00:48:00.000
Anki and Jibo did such a beautiful job with the design and getting that interaction just right.
link |
00:48:06.240
But I think people just wanted more. They wanted more functionality. I think you're also right that
link |
00:48:11.440
the business case isn't really there because there hasn't been a killer application that's
link |
00:48:17.280
useful enough to get people to adopt the technology in great numbers. I think what we did see from the
link |
00:48:23.440
people who did get Jibo is a lot of them became very emotionally attached to it. But that's not,
link |
00:48:31.040
I mean, it's kind of like the Palm Pilot back in the day. Most people are like, why do I need this?
link |
00:48:35.040
Why would I? They don't see how they would benefit from it until they have it or some
link |
00:48:40.160
other company comes in and makes it a little better. Yeah. Like how far away are we, do you
link |
00:48:45.760
think? How hard is this problem? It's a good question. And I think it has a lot to do with
link |
00:48:50.320
people's expectations, and those keep shifting depending on what science fiction is popular.
link |
00:48:56.160
But also it's two things. It's people's expectation and people's need for an emotional
link |
00:49:01.840
connection. Yeah. And I believe the need is pretty high. Yes. But I don't think we're aware of it.
link |
00:49:10.080
That's right. There's like, I really think this is like the life as we know it. So we've just kind
link |
00:49:16.960
of gotten used to it of really, I hate to be dark because I have close friends, but we've gotten
link |
00:49:24.640
used to really never being close to anyone. Right. And we're deeply, I believe, okay, this is
link |
00:49:32.720
hypothesis. I think we're deeply lonely, all of us, even those in deep fulfilling relationships.
link |
00:49:37.680
In fact, what makes those relationships fulfilling, I think, is that they at least tap into that deep
link |
00:49:43.120
loneliness a little bit. But I feel like there's more opportunity to explore that, that doesn't
link |
00:49:49.040
interfere with the human relationships you have. It expands more on
link |
00:49:55.280
that, yeah, the rich deep unexplored complexity that's all of us, weird apes. Okay.
link |
00:50:02.560
I think you're right. Do you think it's possible to fall in love with a robot?
link |
00:50:05.440
Oh yeah, totally. Do you think it's possible to have a longterm committed monogamous relationship
link |
00:50:13.360
with a robot? Well, yeah, there are lots of different types of longterm committed monogamous
link |
00:50:18.480
relationships. I think monogamous implies like, you're not going to see other humans sexually or
link |
00:50:26.400
like you basically on Facebook have to say, I'm in a relationship with this person, this robot.
link |
00:50:32.320
I just don't like, again, I think this is comparing robots to humans when I would rather
link |
00:50:37.760
compare them to pets. Like you get a robot, it fulfills this loneliness that you have
link |
00:50:46.640
in maybe not the same way as a pet, maybe in a different way that is even supplemental in a
link |
00:50:52.400
different way. But I'm not saying that people won't like do this, be like, oh, I want to marry
link |
00:50:58.640
my robot or I want to have like a sexual relation, monogamous relationship with my robot. But I don't
link |
00:51:05.840
think that that's the main use case for them. But you think that there's still a gap between
link |
00:51:11.520
human and pet. So between a husband and pet, there's a different relationship. It's engineering.
link |
00:51:24.480
So that's a gap that can be closed through. I think it could be closed someday, but why
link |
00:51:30.160
would we close that? Like, I think it's so boring to think about recreating things that we already
link |
00:51:34.880
have when we could create something that's different. I know you're thinking about the
link |
00:51:43.040
people who like don't have a husband and like, what could we give them? Yeah. But I guess what
link |
00:51:50.080
I'm getting at is maybe not. So like the movie Her. Yeah. Right. So a better husband. Well,
link |
00:52:01.280
maybe better in some ways. Like it's, I do think that robots are going to continue to be a different
link |
00:52:07.360
type of relationship, even if we get them like very human looking or when, you know, the voice
link |
00:52:13.360
interactions we have with them feel very like natural and human like, I think there's still
link |
00:52:18.320
going to be differences. And there were in that movie too, like towards the end, it kind of goes
link |
00:52:22.480
off the rails. But it's just a movie. So your intuition is that, because you kind of said
link |
00:52:30.000
two things, right? So one is why would you want to basically replicate the husband? Yeah. Right.
link |
00:52:39.120
And the other is kind of implying that it's kind of hard to do. So like anytime you try,
link |
00:52:46.160
you might build something very impressive, but it'll be different. I guess my question is about
link |
00:52:51.920
human nature. It's like, how hard is it to satisfy that role of the husband? So, setting any of
link |
00:53:01.200
the sexual stuff aside, it's more like the mystery, the tension, the dance of relationships. Do
link |
00:53:08.240
you think with robots that's difficult to build? What's your intuition? I think that, well, it also
link |
00:53:16.720
depends on, are we talking about robots now, in 50 years, or in like an indefinite amount of time? I'm
link |
00:53:22.960
thinking like five or 10 years. Five or 10 years. I think that with robots, at best, it's
link |
00:53:29.920
more similar to the relationship we have with our pets than the relationship we have with other
link |
00:53:33.920
people. I got it. So what do you think it takes to build a system that exhibits greater and greater
link |
00:53:41.520
levels of intelligence? Like it impresses us with its intelligence. A Roomba, so you talk about
link |
00:53:47.440
anthropomorphization, that doesn't need it. I think intelligence is not required. In fact, intelligence
link |
00:53:52.960
probably gets in the way sometimes, like you mentioned. But what do you think it takes to
link |
00:54:00.640
create a system where we sense that it has a human level intelligence? So something that,
link |
00:54:07.360
probably something conversational, human level intelligence. How hard do you think that problem
link |
00:54:11.920
is? It'd be interesting to sort of hear your perspective, not just purely, so I talk to a lot
link |
00:54:18.320
of people about how hard conversational agents are, how hard it is to pass the Turing test. But my
link |
00:54:24.640
sense is it's easier than solving the pure natural language
link |
00:54:33.440
processing problem. Because I feel like you can cheat. Yeah. So how hard is it to pass the Turing
link |
00:54:41.760
test in your view? Well, I think again, it's all about expectation management. If you set up
link |
00:54:47.120
people's expectations to think that they're communicating with, what was it, a 13 year old
link |
00:54:52.160
boy from the Ukraine? Yeah, that's right. Then they're not going to expect perfect English,
link |
00:54:56.160
they're not going to expect perfect, you know, understanding of concepts or even like being on
link |
00:55:00.640
the same wavelength in terms of like conversation flow. So it's much easier to pass in that case.
link |
00:55:08.560
Do you think, you kind of alluded to this too with audio, do you think it needs to have a body?
link |
00:55:14.960
I think that we definitely have, so we treat physical things with more social agency,
link |
00:55:21.440
because we're very physical creatures. I think a body can be useful.
link |
00:55:29.840
Does it get in the way? Are there negative aspects, like...
link |
00:55:33.600
Yeah, there can be. So if you're trying to create a body that's too similar to something that people
link |
00:55:38.320
are familiar with. Like, I have this robot cat at home,
link |
00:55:44.320
and it's very disturbing to watch because I'm constantly assuming that it's
link |
00:55:50.960
going to move like a real cat and it doesn't because it's like a $100 piece of technology.
link |
00:55:57.040
So it's very like disappointing and it's very hard to treat it like it's alive. So you can get a lot
link |
00:56:04.800
wrong with the body too, but you can also use tricks, same as, you know, the expectation
link |
00:56:09.680
management of the 13 year old boy from the Ukraine. If you pick an animal that people
link |
00:56:13.360
aren't intimately familiar with, like the baby dinosaur, like the baby seal that people have
link |
00:56:17.680
never actually held in their arms, you can get away with much more because they don't have these
link |
00:56:22.400
preformed expectations. Yeah, I remember from a TED Talk of yours or something, it clicked
link |
00:56:27.280
for me that nobody actually knows what a dinosaur looks like. So you can actually get away with a
link |
00:56:34.400
lot more. That was great. So what do you think about consciousness and mortality
link |
00:56:46.400
being displayed in a robot? So not actually having consciousness, but having these kinds
link |
00:56:55.760
of human elements that are much more than just the interaction, much more than just,
link |
00:57:01.600
like you mentioned with the dinosaur moving in kind of interesting ways, but really being worried
link |
00:57:07.440
about its own death and really acting as if it's aware and self-aware and has an identity. Have you seen
link |
00:57:16.080
that done in robotics? What do you think about doing that? Is that a powerful good thing?
link |
00:57:24.560
Well, I think it can be a design tool that you can use for different purposes. So I can't say
link |
00:57:29.600
whether it's inherently good or bad, but I do think it can be a powerful tool. The fact that the
link |
00:57:36.480
Pleo mimics distress when you quote unquote hurt it is a really powerful tool to get people to
link |
00:57:46.720
engage with it in a certain way. I had a research partner that I did some of the empathy work with
link |
00:57:52.560
named Palash Nandy, and he had built a robot for himself that had like a lifespan and that would
link |
00:57:57.760
stop working after a certain amount of time just because he was interested in whether he himself
link |
00:58:02.800
would treat it differently. And we know from Tamagotchis, those little games that we used to
link |
00:58:10.320
have that were extremely primitive, that people respond to this idea of mortality and you can get
link |
00:58:17.600
people to do a lot with little design tricks like that. Now, whether it's a good thing depends on
link |
00:58:21.920
what you're trying to get them to do. Have a deeper relationship, have a deeper connection,
link |
00:58:27.760
If it's for their own benefit, that sounds great. Okay. But you could do that for a
link |
00:58:34.800
lot of other reasons. I see. So what kind of stuff are you worried about? So is it mostly about
link |
00:58:39.920
manipulation of your emotions for like advertisement and so on, things like that? Yeah, or data
link |
00:58:44.880
collection or, I mean, you could think of governments misusing this to extract information
link |
00:58:51.280
from people. It's, you know, just like any other technological tool, it just raises a lot of
link |
00:58:57.200
questions. If you look at Facebook, if you look at Twitter and social networks, there's a lot
link |
00:59:02.880
of concern about data collection now. From the legal perspective, or in general,
link |
00:59:12.240
how do we prevent sort of these companies from crossing a line? It's a gray area,
link |
00:59:19.760
but crossing a line they shouldn't, in terms of manipulating, like we're talking about,
link |
00:59:24.480
manipulating our emotions, manipulating our behavior, using tactics that are not so savory.
link |
00:59:32.080
Yeah. It's really difficult because we are starting to create technology that relies on
link |
00:59:38.960
data collection to provide functionality. And there's not a lot of incentive,
link |
00:59:44.000
even on the consumer side, to curb that because the other problem is that the harms aren't
link |
00:59:49.600
tangible. They're not really apparent to a lot of people because they kind of trickle down on a
link |
00:59:55.040
societal level. And then suddenly we're living in like 1984, which, you know, sounds extreme,
link |
01:00:02.240
but that book was very prescient and I'm not worried about, you know, these systems. I have,
link |
01:00:11.280
you know, Amazon's Echo at home and tell Alexa all sorts of stuff. And it helps me because,
link |
01:00:19.520
you know, Alexa knows what brand of diaper we use. And so I can just easily order it again.
link |
01:00:25.200
So I don't have any incentive to ask a lawmaker to curb that. But when I think about that data
link |
01:00:30.880
then being used against low income people to target them for scammy loans or education programs,
link |
01:00:39.200
that's then a societal effect that I think is very severe and, you know,
link |
01:00:45.120
legislators should be thinking about.
link |
01:00:47.280
But yeah, the gray area is removing ourselves from consideration of, like,
link |
01:00:55.360
explicitly defining objectives and instead saying,
link |
01:00:58.880
well, we want to maximize engagement in our social network.
link |
01:01:03.680
Yeah.
link |
01:01:04.240
And then just, because you're not actually doing a bad thing. It makes sense. You want people to
link |
01:01:11.840
keep a conversation going, to have more conversations, to keep coming back
link |
01:01:16.480
again and again, to have conversations. And whatever happens after that,
link |
01:01:21.920
you're kind of not exactly directly responsible. You're only indirectly responsible. So I think
link |
01:01:28.320
it's a really hard problem. Are you optimistic about us ever being able to solve it?
link |
01:01:37.280
You mean the problem of capitalism? It's like, because the problem is that the companies
link |
01:01:43.120
are acting in the company's interests and not in people's interests. And when those interests are
link |
01:01:47.680
aligned, that's great. But the completely free market doesn't seem to work because of this
link |
01:01:53.840
information asymmetry.
link |
01:01:55.120
But it's hard to know how to, so say you were trying to do the right thing. I guess what I'm
link |
01:02:01.120
trying to say is it's not obvious for these companies what the good thing for society is to
link |
01:02:07.600
do. Like, I don't think they sit there with, I don't know, with a glass of wine and a cat,
link |
01:02:14.880
like petting an evil cat. And there are two decisions, and one of them is good for society.
link |
01:02:21.120
One is good for profit, and they choose the profit. I think, actually, there's a lot of
link |
01:02:26.960
money to be made by doing the right thing for society. Because Google, Facebook have so much cash
link |
01:02:36.480
that they actually, especially Facebook, would significantly benefit from making decisions that
link |
01:02:40.880
are good for society. It's good for their brand. But I don't know if they know what's good for
link |
01:02:46.800
society. I don't think we know what's good for society in terms of how we manage the
link |
01:02:56.800
conversation on Twitter or how we design, we're talking about robots. Like, should we
link |
01:03:06.640
emotionally manipulate you into having a deep connection with Alexa or not?
link |
01:03:10.960
Yeah. Yeah. Do you have optimism that we'll be able to solve some of these questions?
link |
01:03:17.600
Well, I'm going to say something that's controversial, like in my circles,
link |
01:03:22.400
which is that I don't think that companies who are reaching out to ethicists and trying to create
link |
01:03:28.480
interdisciplinary ethics boards, I don't think that that's totally just trying to whitewash
link |
01:03:32.240
the problem and so that they look like they've done something. I think that a lot of companies
link |
01:03:36.960
actually do, like you say, care about what the right answer is. They don't know what that is,
link |
01:03:42.960
and they're trying to find people to help them find it. Not in every case, but I think
link |
01:03:48.160
it's much too easy to just vilify the companies as, like you say, sitting there with their cat
link |
01:03:52.320
going, heh, heh, heh, $1 million. That's not what happens. A lot of people are well-meaning, even
link |
01:03:59.600
within companies. I think that what we do absolutely need is more interdisciplinarity,
link |
01:04:09.840
both within companies, but also within the policymaking space because we've hurtled into
link |
01:04:17.360
a world where technological progress is much faster, or at least seems much faster, than it was, and
link |
01:04:23.760
things are getting very complex. And you need people who understand the technology, but also
link |
01:04:28.480
people who understand what the societal implications are, and people who are thinking
link |
01:04:33.440
about this in a more systematic way to be talking to each other. There's no other solution, I think.
link |
01:04:39.920
You've also done work on intellectual property, so if you look at the algorithms that these
link |
01:04:45.440
companies are using, like YouTube, Twitter, Facebook, and so on, I mean,
link |
01:04:51.200
those are mostly secretive. The recommender systems behind these algorithms. Do you think
link |
01:04:58.400
about the IP and the transparency of algorithms like this? Like what is the responsibility of
link |
01:05:04.320
these companies to open source the algorithms or at least reveal to the public how these
link |
01:05:11.440
algorithms work? So I personally don't work on that. There are a lot of people who do though,
link |
01:05:16.000
and there are a lot of people calling for transparency. In fact, Europe's even trying
link |
01:05:19.760
to legislate transparency, maybe they even have at this point, where like if an algorithmic system
link |
01:05:26.800
makes some sort of decision that affects someone's life, that you need to be able to see how that
link |
01:05:31.440
decision was made. It's a tricky balance because obviously companies need to have some sort of
link |
01:05:41.280
competitive advantage and you can't take all of that away or you stifle innovation. But yeah,
link |
01:05:46.800
for some of the ways that these systems are already being used, I think it is pretty important that
link |
01:05:51.680
people understand how they work. What are your thoughts in general on intellectual property in
link |
01:05:56.960
this weird age of software, AI, robotics? Oh, that it's broken. I mean, the system is just broken. So
link |
01:06:04.720
can you describe, I actually, I don't even know what intellectual property is in the space of
link |
01:06:11.840
software, what it means to, I mean, so I believe I have a patent on a piece of software from my PhD.
link |
01:06:20.240
You believe? You don't know? No, we went through a whole process. Yeah, I do. You get the spam
link |
01:06:26.880
emails like, we'll frame your patent for you. Yeah, it's much like a thesis. But that's useless,
link |
01:06:36.320
right? Or not? Where does IP stand in this age? What's the right way to do it? What's the right
link |
01:06:43.040
way to protect and own ideas when it's just code and this mishmash of something that feels much
link |
01:06:51.600
softer than a piece of machinery? Yeah. I mean, it's hard because there are different types of
link |
01:06:58.160
intellectual property and they're kind of these blunt instruments. It's like patent law is like
link |
01:07:03.280
a wrench. It works really well for an industry like the pharmaceutical industry. But when you
link |
01:07:07.200
try and apply it to something else, it's like, I don't know, I'll just hit this thing with a wrench
link |
01:07:12.080
and hope it works. So software, you have a couple of different options. Any code that's written down
link |
01:07:21.600
in some tangible form is automatically copyrighted. So you have that protection, but that doesn't do
link |
01:07:27.840
much because if someone takes the basic idea that the code is executing and just does it in a
link |
01:07:35.440
slightly different way, they can get around the copyright. So that's not a lot of protection.
link |
01:07:40.400
Then you can patent software, but that's kind of, I mean, getting a patent costs,
link |
01:07:47.200
I don't know if you remember what yours cost or like, was it through an institution?
link |
01:07:51.280
Yeah, it was through a university. It was insane. There were so many lawyers, so many meetings.
link |
01:07:57.520
It made me feel like it must've been hundreds of thousands of dollars. It must've been something
link |
01:08:02.160
crazy. Oh yeah. It's insane the cost of getting a patent. And so this idea of protecting the
link |
01:08:07.760
inventor in their own garage who came up with a great idea is kind of a thing of the
link |
01:08:12.560
past. It's all just companies trying to protect things and it costs a lot of money. And then
link |
01:08:18.960
with code, it's oftentimes by the time the patent is issued, which can take like five years,
link |
01:08:25.120
probably your code is obsolete at that point. So it's a very, again, a very blunt instrument that
link |
01:08:31.520
doesn't work well for that industry. And so at this point we should really have something better,
link |
01:08:37.440
but we don't. Do you like open source? Yeah. Is open source good for society?
link |
01:08:41.840
You think all of us should open source code? Well, so at the Media Lab at MIT, we have an
link |
01:08:48.720
open source default because what we've noticed is that people will come in, they'll write some code
link |
01:08:54.160
and they'll be like, how do I protect this? And we're like, that's not your problem right now.
link |
01:08:58.640
Your problem isn't that someone's going to steal your project. Your problem is getting people to
link |
01:09:02.160
use it at all. There's so much stuff out there. We don't even know if you're going to get traction
link |
01:09:07.040
for your work. And so open sourcing can sometimes help, you know, get people's work out there,
link |
01:09:12.640
but ensure that they get attribution for it, for the work that they've done. So like,
link |
01:09:17.360
I'm a fan of it in a lot of contexts. Obviously it's not like a one size fits all solution.
link |
01:09:23.680
So what I gleaned from your Twitter is, you're a mom. I saw a quote, a reference to baby bot.
link |
01:09:32.560
What have you learned about robotics and AI from raising a human baby bot?
link |
01:09:42.640
Well, I think that my child has made it more apparent to me that the systems we're currently
link |
01:09:48.560
creating aren't like human intelligence. Like there's not a lot to compare there.
link |
01:09:54.480
It's just, he has learned and developed in such a different way than a lot of the AI systems
link |
01:09:59.920
we're creating that that's not really interesting to me to compare. But what is interesting to me
link |
01:10:07.360
is how these systems are going to shape the world that he grows up in. And so I'm like even more
link |
01:10:13.520
concerned about kind of the societal effects of developing systems that, you know, rely on
link |
01:10:19.680
massive amounts of data collection, for example. So is he going to be allowed to use like Facebook or
link |
01:10:26.720
Facebook? Facebook is over. Kids don't use that anymore. Snapchat. What do they use? Instagram?
link |
01:10:33.360
Snapchat's over too. I don't know. I just heard that TikTok is over, which I've never even seen.
link |
01:10:38.080
So I don't know. No. We're old. We don't know. I need to, I'm going to start gaming and streaming
link |
01:10:44.560
my gameplay. So what do you see as the future of personal robotics, social robotics, interaction
link |
01:10:52.960
with other robots? Like what are you excited about if you were to sort of philosophize about what
link |
01:10:58.320
might happen in the next five, 10 years that would be cool to see? Oh, I really hope that we get kind
link |
01:11:05.040
of a home robot that's a social robot and not just Alexa. Like, you know,
link |
01:11:12.160
I really love the Anki products. I thought Jibo had some really great aspects. So I'm hoping
link |
01:11:19.520
that a company cracks that. Me too. So Kate, it was wonderful talking to you today. Likewise.
link |
01:11:26.800
Thank you so much. It was fun. Thanks for listening to this conversation with Kate Darling.
link |
01:11:32.080
And thank you to our sponsors, ExpressVPN and Masterclass. Please consider supporting the
link |
01:11:37.520
podcast by signing up to Masterclass at masterclass.com slash Lex and getting ExpressVPN at
link |
01:11:45.200
expressvpn.com slash LexPod. If you enjoy this podcast, subscribe on YouTube, review it with
link |
01:11:52.160
five stars on Apple podcast, support it on Patreon, or simply connect with me on Twitter
link |
01:11:57.200
at Lex Fridman. And now let me leave you with some tweets from Kate Darling. First tweet is
link |
01:12:05.440
the pandemic has fundamentally changed who I am. I now drink the leftover milk in the bottom of
link |
01:12:11.920
the cereal bowl. Second tweet is I came on here to complain that I had a really bad day and saw that
link |
01:12:19.600
a bunch of you are hurting too. Love to everyone. Thank you for listening. I hope to see you next
link |
01:12:26.320
time.