
Kate Darling: Social Robotics | Lex Fridman Podcast #98



link |
00:00:00.000
The following is a conversation with Kate Darling, a researcher at MIT interested in
link |
00:00:05.280
social robotics, robot ethics, and generally how technology intersects with society.
link |
00:00:11.040
She explores the emotional connection between human beings and lifelike machines,
link |
00:00:15.680
which for me is one of the most exciting topics in all of artificial intelligence.
link |
00:00:21.360
As she writes in her bio, she is a caretaker of several domestic robots,
link |
00:00:26.240
including her Pleo dinosaur robots named Yochai, Peter, and Mr. Spaghetti.
link |
00:00:33.600
She is one of the funniest and brightest minds I've ever had the fortune to talk to.
link |
00:00:37.840
This conversation was recorded recently, but before the outbreak of the pandemic.
link |
00:00:42.240
For everyone feeling the burden of this crisis, I'm sending love your way.
link |
00:00:46.720
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
link |
00:00:51.360
review it with five stars on Apple Podcasts, support on Patreon,
link |
00:00:54.960
or simply connect with me on Twitter at Lex Friedman spelled F R I D M A N.
link |
00:01:00.640
As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the
link |
00:01:05.040
flow of the conversation. I hope that works for you and doesn't hurt the listening experience.
link |
00:01:10.880
Quick summary of the ads, two sponsors, Masterclass and ExpressVPN. Please consider supporting
link |
00:01:17.520
the podcast by signing up to masterclass at masterclass.com slash lex and getting expressvpn
link |
00:01:24.560
at expressvpn.com slash lex pod. This show is sponsored by Masterclass.
link |
00:01:31.600
Sign up at masterclass.com slash lex to get a discount and to support this podcast.
link |
00:01:37.760
When I first heard about Masterclass, I thought it was too good to be true.
link |
00:01:41.840
For $180 a year, you get an all access pass to watch courses from, to list some of my favorites,
link |
00:01:48.800
Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and
link |
00:01:53.520
communication, Will Wright, creator of SimCity and The Sims, love those games, on game design,
link |
00:02:00.240
Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more.
link |
00:02:07.680
Chris Hadfield explaining how rockets work and the experience of being launched into space alone
link |
00:02:12.720
is worth the money. By the way, you can watch it on basically any device. Once again,
link |
00:02:18.960
sign up on masterclass.com slash lex to get a discount and to support this podcast.
link |
00:02:25.040
This show is sponsored by ExpressVPN. Get it at expressvpn.com slash lex pod
link |
00:02:32.080
to get a discount and to support this podcast. I've been using ExpressVPN for many years.
link |
00:02:37.920
I love it. It's easy to use. Press the big power on button and your privacy is protected.
link |
00:02:43.920
And if you like, you can make it look like your location is anywhere else in the world.
link |
00:02:49.040
I might be in Boston now, but it can make it look like I'm in New York, London, Paris or anywhere
link |
00:02:53.920
else. This has a large number of obvious benefits. Certainly, it allows you to access international
link |
00:03:00.000
versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any
link |
00:03:06.880
device you can imagine. I use it on Linux. Shout out to Ubuntu 20.04, Windows, Android,
link |
00:03:15.440
but it's available everywhere else too. Once again, get it at expressvpn.com slash lex pod
link |
00:03:22.480
to get a discount and to support this podcast. And now here's my conversation with Kate Darling.
link |
00:03:29.920
You co-taught robot ethics at Harvard. What are some ethical issues that arise
link |
00:03:37.120
in the world with robots? Yeah, that was a reading group that I did when I,
link |
00:03:42.480
like at the very beginning, first became interested in this topic. So I think if I
link |
00:03:47.920
taught that class today, it would look very, very different. Robot ethics, it sounds very
link |
00:03:53.840
science fictiony, and it especially did back then. But I think that some of the issues that people in
link |
00:04:01.200
robot ethics are concerned with are just around the ethical use of robotic technology in general.
link |
00:04:05.600
So for example, responsibility for harm, automated weapon systems, things like privacy and data
link |
00:04:11.040
security, things like automation and labor markets. And then personally, I'm really interested in some
link |
00:04:19.840
of the social issues that come out of our social relationships with robots. One on one relationship
link |
00:04:25.360
with robots. Yeah. I think most of the stuff we have to talk about is like one on one social
link |
00:04:29.360
stuff. That's what I love. I think that's what you love as well and are an expert in.
link |
00:04:34.160
But at a societal level, there's a presidential candidate now, Andrew Yang, running,
link |
00:04:41.360
concerned about automation and robots and AI in general, taking away jobs. He has a proposal of
link |
00:04:48.160
UBI, universal basic income, where everybody gets a thousand bucks. Yeah. As a way to sort of
link |
00:04:54.560
save you if you lose your job from automation to allow you time to discover what it is that you
link |
00:05:01.680
would like to or even love to do. Yes. So I lived in Switzerland for 20 years and universal basic
link |
00:05:09.600
income has been more of a topic there separate from the whole robots and jobs issue. So
link |
00:05:15.840
it's so interesting to me to see kind of these Silicon Valley people latch on to this concept
link |
00:05:22.080
that came from a very kind of left wing socialist, you know, kind of a different place in Europe.
link |
00:05:32.800
But on the automation and labor markets topic, I think that it's very, so sometimes in those
link |
00:05:41.040
conversations, I think people overestimate where robotic technology is right now. And we also have
link |
00:05:47.920
this fallacy of constantly comparing robots to humans and thinking of this as a one to one
link |
00:05:53.200
replacement of jobs. So even like Bill Gates a few years ago said something about, you know,
link |
00:05:58.400
maybe we should have a system that taxes robots for taking people's jobs. And it just, I mean,
link |
00:06:06.480
I'm sure that was taken out of context, you know, he's a really smart guy, but that sounds to me
link |
00:06:11.120
like kind of viewing it as a one to one replacement versus viewing this technology as kind of a
link |
00:06:16.800
supplemental tool that of course is going to shake up a lot of stuff. It's going to change the job
link |
00:06:22.160
landscape, but I don't see, you know, robots taking all the jobs in the next 20 years. That's just
link |
00:06:28.320
not how it's going to work. Right. So maybe drifting into the land of more personal relationships
link |
00:06:34.640
with robots and interaction and so on. I got to warn you, I go, I may ask some silly philosophical
link |
00:06:42.240
questions. I apologize. Oh, please do. Okay. Do you think humans will abuse robots in their
link |
00:06:49.680
interaction? So you've had a lot of, and we'll talk about it sort of anthropomorphization and
link |
00:06:55.600
work, you know, this intricate dance, emotional dance between human and robot, but this seems to
link |
00:07:03.520
be also a darker side where people, when they treat the other as servants, especially, they
link |
00:07:10.880
can be a little bit abusive or a lot abusive. Do you think about that? Do you worry about that?
link |
00:07:16.400
Yeah, I do think about that. So I mean, one of my, one of my main interests is the fact that
link |
00:07:22.640
people subconsciously treat robots like living things. And even though they know that they're
link |
00:07:27.600
interacting with a machine and what it means in that context to behave violently, I don't know
link |
00:07:34.640
if you could say abuse because you're not actually abusing the inner mind of the robot. The robot
link |
00:07:41.280
doesn't have any feelings. As far as you know. Well, yeah, it also depends on how we
link |
00:07:47.040
define feelings and consciousness, but I think that's another area where people kind of overestimate
link |
00:07:52.560
where we currently are with the technology, like the robots are not even as smart as insects right
link |
00:07:56.720
now. And so I'm not worried about abuse in that sense, but it is interesting to think about what
link |
00:08:03.760
does people's behavior towards these things mean for our own behavior? Is it desensitizing
link |
00:08:10.960
people to be verbally abusive to a robot or even physically abusive? And we don't know.
link |
00:08:17.440
Right. It's a similar connection to, like, if you play violent video games,
link |
00:08:20.960
what connection does that have to desensitization to violence? I haven't read literature on that.
link |
00:08:29.360
I wonder about that, because from everything I've heard, people don't seem to any longer be so
link |
00:08:35.680
worried about violent video games. Correct. The research on it is,
link |
00:08:42.560
it's a difficult thing to research. So it's sort of inconclusive, but we seem to have gotten the
link |
00:08:49.280
sense, at least as a society, that people can compartmentalize. When it's something on a screen
link |
00:08:54.800
and you're like shooting a bunch of characters or running over people with your car that doesn't
link |
00:08:59.680
necessarily translate to you doing that in real life, we do, however, have some concerns about
link |
00:09:05.120
children playing violent video games. And so we do restrict it there. I'm not sure that's based on
link |
00:09:11.040
any real evidence either, but it's just the way that we've kind of decided, we want to be a little
link |
00:09:16.240
more cautious there. And the reason I think robots are a little bit different is because there is a
link |
00:09:20.720
lot of research showing that we respond differently to something in our physical space than something
link |
00:09:25.600
on a screen. We will treat it much more viscerally, much more like a physical actor. And so it's
link |
00:09:34.240
totally possible that this is not a problem. And it's the same thing as violent video games,
link |
00:09:40.320
you know, maybe, you know, restrict it with kids to be safe, but adults can do what they want.
link |
00:09:45.200
But we just need to ask the question again, because we don't have any evidence at all yet.
link |
00:09:50.880
Maybe there's an intermediate place too. I did my research on Twitter. By research, I mean
link |
00:09:58.320
scrolling through your Twitter feed. You mentioned that you were going at some point to an animal
link |
00:10:03.120
law conference. So I have to ask, do you think there's something that we can learn
link |
00:10:09.360
from animal rights that guides our thinking about robots?
link |
00:10:12.320
Oh, I think there is so much to learn from that. I'm actually writing a book on it right now,
link |
00:10:17.040
that's why I'm going to this conference. So I'm writing a book that looks at the history of
link |
00:10:22.160
animal domestication and how we've used animals for work, for weaponry, for companionship. And,
link |
00:10:28.320
you know, one of the things the book tries to do is move away from this fallacy that I talked about
link |
00:10:34.000
of comparing robots and humans, because I don't think that's the right analogy. But I do think
link |
00:10:39.680
that on a social level, even on a social level, there's so much that we can learn from looking
link |
00:10:43.920
at that history, because throughout history, we've treated most animals like tools, like products,
link |
00:10:49.360
and then some of them we've treated differently. And we're starting to see people treat robots in
link |
00:10:53.200
really similar ways. So I think it's a really helpful predictor to how we're going to interact
link |
00:10:57.920
with the robots. Do you think we'll look back at this time, like 100 years from now, and see
link |
00:11:04.400
what we do to animals as similar to the way we view the Holocaust in World War II?
link |
00:11:13.200
That's a great question. I mean, I hope so. I am not convinced that we will. But I often wonder,
link |
00:11:22.400
you know, what are my grandkids going to view as abhorrent that my generation did,
link |
00:11:28.480
that they would never do? And I'm like, well, what's the big deal? It's a fun question to ask
link |
00:11:33.920
yourself. It always seems that there's atrocities that we discover later. So the things that at
link |
00:11:41.520
the time people didn't see as, you know, you look at everything from slavery,
link |
00:11:48.880
to any kinds of abuse throughout history, to the kind of insane wars that were happening,
link |
00:11:54.080
to the way war was carried out, and rape, and the kind of violence that was happening during
link |
00:12:00.560
war that we now, you know, we see as atrocities, but at the time, perhaps, didn't as much. And so
link |
00:12:10.000
now I have this intuition that I have this worry, maybe you're going to probably criticize me,
link |
00:12:18.320
but I do anthropomorphize robots. I don't see a fundamental philosophical difference
link |
00:12:27.760
between a robot and a human being in terms of once the capabilities are matched. So the fact
link |
00:12:37.280
that we're really far away doesn't matter, in terms of capabilities, from natural
link |
00:12:43.360
language processing, understanding and generation, to just reasoning and all that stuff. I think
link |
00:12:48.000
once you solve it, I see the, this is a very gray area. And I don't feel comfortable with the kind
link |
00:12:53.920
of abuse that people throw at robots, subtle, but I can see it becoming, I can see basically a
link |
00:13:01.280
civil rights movement for robots in the future. Do you think, let me put it in the form of a
link |
00:13:06.960
question, do you think robots should have some kinds of rights? Well, it's interesting because I
link |
00:13:12.720
came at this originally from your perspective. I was like, you know what, there's no fundamental
link |
00:13:19.360
difference between technology and like human consciousness. Like we can probably recreate
link |
00:13:25.200
anything. We just don't know how yet. And so there's no reason not to give machines the same rights
link |
00:13:32.640
that we have once, like you say, they're kind of on an equivalent level. But I realized that that
link |
00:13:38.640
is kind of a far future question. I still think we should talk about it because I think it's
link |
00:13:42.480
really interesting. But I realized that it's actually, we might need to ask the robot rights
link |
00:13:47.840
question even sooner than that. While the machines are still, you know, quote unquote, really, you
link |
00:13:53.600
know, dumb and not on our level, because of the way that we perceive them. And I think one of the
link |
00:14:00.240
lessons we learned from looking at the history of animal rights, and one of the reasons we may not
link |
00:14:04.640
get to a place in 100 years where we view it as wrong to, you know, eat or otherwise, you know,
link |
00:14:10.560
use animals for our own purposes, is because historically, we've always protected those
link |
00:14:16.000
things that we relate to the most. So one example is whales. No one gave a shit about the whales.
link |
00:14:22.480
Am I allowed to swear? Yeah, you swear as much as you want. Freedom. Yeah, no one gave a shit
link |
00:14:28.080
about the whales until someone recorded them singing. And suddenly people were like, oh,
link |
00:14:32.320
this is a beautiful creature. And now we need to save the whales. And that started the whole
link |
00:14:36.640
save the whales movement in the 70s. So I'm, as much as I, and I think a lot of people want to
link |
00:14:45.680
believe that we care about consistent biological criteria, that's not historically how we formed
link |
00:14:53.360
our alliances. Yeah, so what, why do we, why do we believe that all humans are created equal?
link |
00:15:01.760
That killing a human being, no matter who the human being is, is bad. That's what I meant by equality.
link |
00:15:08.960
And then because I'm connecting that to robots, and I'm wondering whether mortality, so the
link |
00:15:14.960
killing act is what makes something, that's the fundamental first right. So I'm, I am currently
link |
00:15:21.280
allowed to take a shotgun and shoot a Roomba. I think I'm not sure, but I'm pretty sure it's
link |
00:15:29.120
not considered murder, right? Or even shutting them off. So that's, that's where the line appears
link |
00:15:36.480
to be, right? Is mortality a critical thing here? I think here again, like the animal analogy
link |
00:15:44.000
is really useful because you're also allowed to shoot your dog, but people won't be happy about it.
link |
00:15:50.080
So we give, we do give animals certain protections from like, you know, you're not allowed to torture
link |
00:15:56.560
your dog and set it on fire, at least in most states and countries, you know. But you're still
link |
00:16:03.200
allowed to treat it like a piece of property in a lot of other ways. And so we draw these,
link |
00:16:08.960
you know, arbitrary lines all the time. And, you know, there's a lot of philosophical thought on
link |
00:16:18.320
why viewing humans as something unique is not, is just speciesism and not, you know,
link |
00:16:27.360
based on any criteria that would actually justify making a difference between us and other species.
link |
00:16:33.680
Do you think in general, people, most people are good? Do you think, or do you think there's
link |
00:16:42.960
evil and good in all of us? That's revealed through our circumstances and through our interactions.
link |
00:16:53.520
I like to view myself as a person who like, believes that there's no absolute evil and
link |
00:16:57.600
good and that everything is, you know, gray. But I do think it's an interesting question. Like,
link |
00:17:06.800
when I see people being violent towards robotic objects, you said that bothers you because
link |
00:17:11.760
the robots might someday, you know, be smart. And is that what? Well, it bothers me because it
link |
00:17:19.120
reveals, so I personally believe, because I've studied way too much, so I'm Jewish, I studied
link |
00:17:24.400
the Holocaust and World War II exceptionally well. I personally believe that most of us have evil in us
link |
00:17:31.680
and what bothers me is that the abuse of robots reveals the evil in human beings. Yeah. And
link |
00:17:40.960
I think it doesn't just bother me. I think it's an opportunity for roboticists to
link |
00:17:48.000
help people find their better sides, the better angels of their nature, right? That that abuse
link |
00:17:56.320
isn't just a fun side thing. That's you revealing a dark part that you shouldn't,
link |
00:18:01.520
that should be hidden deep inside. Yeah, I mean, you laugh, but some of our research does indicate
link |
00:18:09.280
that maybe people's behavior towards robots reveals something about their tendencies for
link |
00:18:14.240
empathy generally, even using very simple robots that we have today that like clearly don't feel
link |
00:18:18.480
anything. So, you know, Westworld is maybe, you know, not so far off and it's like, you know,
link |
00:18:27.360
depicting the bad characters as willing to go around and shoot and rape the robots and the
link |
00:18:31.920
good characters as not wanting to do that, even without assuming that the robots have consciousness.
link |
00:18:37.600
So there's an opportunity, it's interesting, there's an opportunity to almost practice empathy. Interacting with
link |
00:18:45.040
robots is an opportunity to practice empathy. I agree with you. Some people would say,
link |
00:18:51.680
why are we practicing empathy on robots instead of, you know, on our fellow humans or on animals
link |
00:18:57.040
that are actually alive and experience the world? And I don't agree with them because I don't think
link |
00:19:02.080
empathy is a zero sum game. And I do think that it's a muscle that you can train and that we
link |
00:19:06.160
should be doing that. But some people disagree. So the interesting thing, you've heard, you know,
link |
00:19:14.480
raising kids, sort of asking them or telling them to be nice to the smart speakers, to Alexa,
link |
00:19:25.360
and so on, saying please and so on during the request. I don't know if I'm a huge fan of that
link |
00:19:31.280
idea. Because yeah, that's towards the idea of practicing empathy. I feel like politeness,
link |
00:19:36.400
I'm always polite to all the, all the systems that we build, especially anything that's speech
link |
00:19:41.040
interaction based, like when we talk to the car, I always have a pretty good detector for please.
link |
00:19:46.880
I feel like there should be room for encouraging empathy in those interactions.
link |
00:19:53.200
Yeah. Okay, so I agree with you. So I'm going to play devil's advocate. Sure.
link |
00:19:56.320
Sure. What is the devil's advocate argument there?
link |
00:20:01.120
The devil's advocate argument is that if you are the type of person who has abusive tendencies or
link |
00:20:07.120
needs to get some sort of like behavior like that out, needs an outlet for it, that it's great to
link |
00:20:12.320
have a robot that you can scream at so that you're not screaming at a person. And we just don't know
link |
00:20:18.320
whether that's true, whether it's an outlet for people or whether it just kind of, as my friend
link |
00:20:23.120
once said, trains their cruelty muscles and makes them more cruel in other situations.
link |
00:20:28.800
Oh boy, yeah. And that expands to other topics, which I don't know. There's a topic of sex,
link |
00:20:38.800
which is a weird one that I tend to avoid from a robotics perspective. And mostly the general public
link |
00:20:44.400
doesn't. They talk about sex robots and so on. Is that an area you've touched at all research wise?
link |
00:20:54.320
That's what people imagine, sort of any kind of interaction between human and robot that
link |
00:21:00.960
shows any kind of compassion. They immediately think from a product perspective in the near term
link |
00:21:07.120
is sort of expansion of what pornography is and all that kind of stuff.
link |
00:21:11.520
Yeah. Do researchers touch this? That's kind of you to characterize it as though they're thinking
link |
00:21:17.200
rationally about product. I feel like sex robots are just such a titillating news hook for people
link |
00:21:22.960
that they become like the story. And it's really hard to not get fatigued by it when you're in
link |
00:21:29.920
the space because you tell someone you do human robot interaction. Of course, the first thing
link |
00:21:33.920
they want to talk about is sex robots. Really? Yeah, it happens a lot. And it's unfortunate
link |
00:21:40.080
that I'm so fatigued by it because I do think that there are some interesting questions that
link |
00:21:44.480
become salient when you talk about sex with robots. See, what I think would happen when
link |
00:21:50.480
people get sex robots, like let's talk guys, okay, guys get female sex robots. What I think
link |
00:21:56.480
there's an opportunity for is an actual interaction.
link |
00:22:04.800
What I'm trying to say is the stuff outside of the sex would be the most fulfilling part.
link |
00:22:11.360
Like the interaction, it's like the folks who, there are movies on this, right? Who pay a prostitute
link |
00:22:18.240
and then end up just talking to her the whole time. So I feel like there's an opportunity,
link |
00:22:22.720
it's like most guys and people in general joke about the sex act, but really people are just
link |
00:22:29.360
lonely inside and looking for connection, many of them. And it'd be unfortunate if that connection
link |
00:22:38.640
is established through the sex industry. I feel like it should go into the front door of like,
link |
00:22:44.640
people are lonely and they want a connection. Well, I also feel like we should kind of de
link |
00:22:50.880
stigmatize the sex industry because even prostitution, like there are prostitutes that
link |
00:22:57.360
specialize in disabled people who don't have the same kind of opportunities to explore their
link |
00:23:04.800
sexuality. So I feel like we should destigmatize all of that generally. But yeah, that connection
link |
00:23:12.240
and that loneliness is an interesting topic that you bring up because while people are
link |
00:23:18.000
constantly worried about robots replacing humans and oh, if people get sex robots and the sex is
link |
00:23:22.880
really good and they won't want their partner or whatever, but we rarely talk about robots
link |
00:23:28.400
actually filling a hole where there's nothing and what benefit that can provide to people.
link |
00:23:34.640
Yeah, I think that's an exciting, there's a giant hole that's unfillable by humans.
link |
00:23:42.000
It's asking too much of your friends and people you're in a relationship with and your family
link |
00:23:46.720
to fill that hole because it's exploring the full complexity and richness of who you are.
link |
00:23:57.120
Like, who are you really? Your family doesn't have enough patience to really sit there and
link |
00:24:04.320
listen to who are you really? And I feel like there's an opportunity to really make that connection
link |
00:24:09.280
with robots. I just feel like we're complex as humans and we're capable of lots of different
link |
00:24:15.920
types of relationships. So whether that's with family members, with friends, with our pets or
link |
00:24:21.920
with robots, I feel like there's space for all of that and all of that can provide value in a
link |
00:24:26.960
different way. Yeah, absolutely. So I'm jumping around. Currently, most of my work is in autonomous
link |
00:24:35.040
vehicles. So the most popular topic among the general public is the trolley problem. Most roboticists
link |
00:24:48.960
kind of hate this question, but what do you think of this thought experiment? What do you think we
link |
00:24:53.360
can learn from it outside of the silliness of the actual application of it to the autonomous vehicle?
link |
00:24:58.960
I think it's still an interesting ethical question, and that in itself, just like much of the
link |
00:25:05.280
interaction with robots has something to teach us. But from your perspective, do you think there's
link |
00:25:10.080
anything there? Well, I think you're right that it does have something to teach us. But I think
link |
00:25:15.120
what people are forgetting in all of these conversations is the origins of the trolley
link |
00:25:19.920
problem and what it was meant to show us, which is that there is no right answer and that sometimes
link |
00:25:25.600
our moral intuition that comes to us instinctively is not actually what we should follow
link |
00:25:33.600
if we care about creating systematic rules that apply to everyone. So I think that as a
link |
00:25:39.920
philosophical concept, it could teach us at least that, but that's not how people are using it right
link |
00:25:46.720
now. These are friends of mine, and I love them dearly, and their project adds a lot of value,
link |
00:25:53.120
but if we're viewing the moral machine project as what we can learn from the trolley problems,
link |
00:25:59.280
the moral machine is, I'm sure you're familiar, it's this website that you can go to, and it gives
link |
00:26:04.560
you different scenarios like, oh, you're in a car, you can decide to run over these two people or
link |
00:26:10.560
this child. What do you choose? Do you choose the homeless person? Do you choose the person who's
link |
00:26:15.280
jaywalking? And so it pits these moral choices against each other and then tries to crowdsource
link |
00:26:22.480
the quote unquote correct answer, which is really interesting and I think valuable data,
link |
00:26:28.960
but I don't think that's what we should base our rules in autonomous vehicles on because
link |
00:26:34.080
it is exactly what the trolley problem is trying to show, which is your first instinct might not
link |
00:26:39.760
be the correct one if you look at rules that then have to apply to everyone and everything.
link |
00:26:45.680
So how do we encode these ethical choices in interaction with robots? So for example,
link |
00:26:50.720
with autonomous vehicles, there is a serious ethical question of, do I protect myself?
link |
00:26:58.960
Does my life have higher priority than the life of another human being?
link |
00:27:03.760
Because that changes certain control decisions that you make. So if your life matters more than
link |
00:27:10.160
other human beings, then you'd be more likely to swerve out of your current lane. So currently,
link |
00:27:16.160
automated emergency braking systems that just brake, they don't ever swerve.
link |
00:27:22.080
So swerving into oncoming traffic, or no, just into a different lane, can cause significant
link |
00:27:28.400
harm to others, but it's possible that it causes less harm to you. So that's a difficult ethical
link |
00:27:34.320
question. Do you have a hope that like the trolley problem is not supposed to have a
link |
00:27:43.520
right answer, right? Do you hope that when we have robots at the table, we'll be able to discover
link |
00:27:49.520
the right answer for some of these questions? Well, what's happening right now, I think, is
link |
00:27:56.480
this question that we're facing of what ethical rules should we be programming into the machines
link |
00:28:02.080
is revealing to us that our ethical rules are much less programmable than we probably thought
link |
00:28:09.120
before. And so that's a really valuable insight, I think, that these issues are very complicated,
link |
00:28:17.840
and that in a lot of these cases, you can't really make that call, like not even as a legislator.
link |
00:28:24.640
And so what's going to happen in reality, I think, is that car manufacturers are just going to try and
link |
00:28:31.600
avoid the problem and avoid liability in any way possible, or like they're going to always protect
link |
00:28:36.960
the driver, because who's going to buy a car if it's programmed to kill you instead of someone
link |
00:28:43.200
else. So that's what's going to happen in reality. But what did you mean by like once we have robots
link |
00:28:49.760
at the table, like do you mean when they can help us figure out what to do? No, I mean, when robots
link |
00:28:57.440
are part of the ethical decisions. So no, no, not they help us. Well,
link |
00:29:01.920
Oh, you mean when it's like, should I run over a robot or a person?
link |
00:29:09.280
Right, that kind of thing. So it's exactly what you said, which is when you have to
link |
00:29:17.200
encode the ethics into an algorithm, you start to try to really understand what are the fundamentals
link |
00:29:23.920
of the decision making process that make you make certain decisions. Should you
link |
00:29:28.480
do like capital punishment? Should you take a person's life or not to punish them for a certain
link |
00:29:35.440
crime? Sort of, you can use, you can develop an algorithm to make that decision, right?
link |
00:29:42.560
And the hope is that the act of making that algorithm, however you make it, so there's a
link |
00:29:49.360
few approaches, will help us actually get to the core of what is right and what is wrong under
link |
00:29:57.920
our current societal standards. But isn't that what's happening right now? And we're realizing
link |
00:30:03.520
that we don't have a consensus on what's right and wrong. You mean in politics in general?
link |
00:30:08.240
Well, like when we're thinking about these trolley problems and autonomous vehicles and how to
link |
00:30:12.880
program ethics into machines and how to, you know, make AI algorithms fair and equitable,
link |
00:30:22.000
we're realizing that this is so complicated. And it's complicated in part because there
link |
00:30:27.440
doesn't seem to be one right answer in any of these cases. Do you have a hope for like,
link |
00:30:32.240
one of the ideas of the moral machine is that crowdsourcing can help us
link |
00:30:37.120
converge, like democracy can help us converge, towards the right answer.
link |
00:30:42.080
Do you have a hope for crowdsourcing? Well, yes and no. So I think that in general,
link |
00:30:47.680
you know, I have a legal background and policymaking is often about trying to suss out,
link |
00:30:51.920
you know, what rules does this society, this particular society agree on and then trying to
link |
00:30:56.800
codify that. So the law makes these choices all the time and then tries to adapt according to
link |
00:31:01.280
changing culture. But in the case of the moral machine project, I don't think that people's
link |
00:31:07.280
choices on that website necessarily reflect what laws they would want in place.
link |
00:31:13.840
I think you would have to ask them a series of different questions in order to get
link |
00:31:18.480
at what their consensus is. I agree. But that has to do more with the artificial nature of,
link |
00:31:24.960
I mean, they're showing some cute icons on a screen. That's almost, so if you, for example,
link |
00:31:31.520
we do a lot of work in virtual reality. And so if you put those same people
link |
00:31:36.800
into virtual reality where they have to make that decision, their decision would be very different,
link |
00:31:42.000
I think. I agree with that. That's one aspect. And the other aspect is it's a different question
link |
00:31:47.520
to ask someone, would you run over the homeless person or the doctor in this scene? Or do you
link |
00:31:54.560
want cars to always run over the homeless people? I think, yeah. So let's talk about anthropomorphism.
link |
00:32:02.160
To me, anthropomorphism, if I can pronounce it correctly, is one of the most fascinating
link |
00:32:08.080
phenomena from like both engineering perspective and psychology perspective, machine learning
link |
00:32:13.760
perspective and robotics in general. Can you step back and define anthropomorphism, how you see it
link |
00:32:23.200
in general terms in your, in your work? Sure. So anthropomorphism is this tendency that we
link |
00:32:28.800
have to project human like traits and behaviors and qualities onto nonhumans. And we often see it
link |
00:32:36.640
with animals, like we'll, we'll project emotions on animals that may or may not actually be there.
link |
00:32:41.760
We, we often see that we're trying to interpret things according to our own behavior when we get
link |
00:32:46.400
it wrong. But we do it with more than just animals, we do it with objects, you know,
link |
00:32:51.280
teddy bears, we see, you know, faces in the headlights of cars. And we do it with robots,
link |
00:32:57.840
very, very extremely. You think that can be engineered? Can that be used to enrich an
link |
00:33:02.640
interaction between an AI system and the human? Oh yeah, for sure. And do you see it being used
link |
00:33:10.000
that way often? Like, I don't, I haven't seen, whether it's Alexa or any of the smart speaker
link |
00:33:19.600
systems often trying to optimize for the anthropomorphization. You said you haven't seen?
link |
00:33:27.920
I haven't seen. They keep moving away from that. I think they're afraid of that.
link |
00:33:32.400
They, they actually, so I only recently found out, but did you know that Amazon has like a whole
link |
00:33:38.080
team of people who are just there to work on Alexa's personality?
link |
00:33:44.480
So I've, I know that depends on what you mean by personality. I didn't know, I didn't know that
link |
00:33:49.520
exact thing. But I do know that how the voice is perceived is worked on a lot, whether
link |
00:33:58.640
it's a pleasant feeling about the voice. But that has to do more with the texture of the
link |
00:34:03.520
sound, and less with what personality is. Personality is more like, what's her favorite beer
link |
00:34:10.000
when you ask her. And the personality team is different for every country too. Like there's
link |
00:34:14.720
a different personality for German Alexa than there is for American Alexa. That said, I think
link |
00:34:20.640
it's very difficult to, you know, really harness the anthropomorphism
link |
00:34:29.520
with these voice assistants because the voice interface is still very primitive. And I think that
link |
00:34:37.280
in order to get people to really suspend their disbelief and treat a robot like it's alive,
link |
00:34:43.840
less is sometimes more. You, you want them to project onto the robot and you want the robot to
link |
00:34:48.960
not disappoint their expectations for how it's going to answer or behave in order for them to
link |
00:34:54.160
have this kind of illusion. And with Alexa or Siri, I don't think we're there yet,
link |
00:35:00.480
they're just not good at that. But if you look at some of the more animal like robots, like the baby
link |
00:35:07.520
seal that they use with the dementia patients. It's a much simpler design, it doesn't try to talk to you,
link |
00:35:12.880
so it can't disappoint you in that way. It just makes little movements and sounds and
link |
00:35:17.360
people stroke it and it responds to their touch. And that is like a very effective way to harness
link |
00:35:23.280
people's tendency to kind of treat the robot like a living thing.
link |
00:35:28.880
Yeah. So you bring up some interesting ideas in your paper chapter, I guess,
link |
00:35:35.520
anthropomorphic framing in human robot interaction, that I read the last time we scheduled this.
link |
00:35:40.640
Oh my God, that was a long time ago.
link |
00:35:44.320
What are some good and bad cases of anthropomorphism in your perspective?
link |
00:35:48.160
Like when is it good? When is it bad? Well, I should start by saying that while design can
link |
00:35:55.600
really enhance the anthropomorphism, it doesn't take a lot to get people to treat a robot like
link |
00:36:00.800
it's alive. Over 85% of Roombas have a name, which I don't know the numbers for your regular
link |
00:36:08.000
type of vacuum cleaner, but they're not that high, right? So people will feel bad for the Roomba
link |
00:36:12.400
when it gets stuck. They'll send it in for repair and want to get the same one back. And that one
link |
00:36:16.560
is not even designed to make you do that. So I think that some of the cases where it's maybe
link |
00:36:24.320
a little bit concerning that anthropomorphism is happening is when you have something that's
link |
00:36:29.040
supposed to function like a tool and people are using it in the wrong way. And one of the concerns
link |
00:36:33.760
is military robots. In the early 2000s, which is a long time ago, iRobot, the Roomba company,
link |
00:36:47.680
made this robot called the PackBot that was deployed in Iraq and Afghanistan with the
link |
00:36:54.080
bomb disposal units that were there. And the soldiers became very emotionally attached to
link |
00:36:59.600
the robots. And that's fine until a soldier risks his life to save a robot, which you
link |
00:37:09.040
really don't want. But they were treating them like pets, like they would name them,
link |
00:37:12.640
they would give them funerals with gun salutes, they would get really upset and traumatized
link |
00:37:17.200
when the robot got broken. So in situations where you want a robot to be a tool, in particular,
link |
00:37:23.840
when it's supposed to do a dangerous job that you don't want a person doing,
link |
00:37:26.960
it can be hard when people get emotionally attached to it. That's maybe something that
link |
00:37:33.520
you would want to discourage. Another case for concern is maybe when companies try to
link |
00:37:40.400
leverage the emotional attachment to exploit people. So if it's something that's not in the
link |
00:37:46.000
consumer's interest, trying to sell them products or services or exploit an emotional connection
link |
00:37:52.160
to keep them paying for a cloud service for a social robot or something like that,
link |
00:37:56.240
might be, I think that's a little bit concerning as well.
link |
00:38:00.160
Yeah, the emotional manipulation, which probably happens behind the scenes now
link |
00:38:04.160
with some social networks and so on, but making it more explicit. What's your favorite robot?
link |
00:38:12.560
Fictional or real?
link |
00:38:13.760
No, real. Real robot, which you have felt a connection with, or not anthropomorphic
link |
00:38:23.440
connection, but I mean, you sit back and say, damn, this is an impressive system.
link |
00:38:32.080
Wow, so two different robots. So the Pleo baby dinosaur robot that is no longer sold that came
link |
00:38:39.200
out in 2007, that one I was very impressed with. But from an anthropomorphic perspective,
link |
00:38:46.080
I was impressed with how much I bonded with it, how much I wanted to believe that it had this
link |
00:38:50.880
inner life. Can you describe Pleo? Can you describe what it is? How big is it? What can it actually
link |
00:38:57.520
do? Yeah, Pleo is about the size of a small cat. It had a lot of motors that gave it this kind of
link |
00:39:06.480
lifelike movement. It had things like touch sensors and an infrared camera. So it had all
link |
00:39:11.440
these cool little technical features, even though it was a toy. And the thing that really
link |
00:39:18.800
struck me about it was that it could mimic pain and distress really well. So if you held it up
link |
00:39:24.320
by the tail, it had a tilt sensor that told it what direction it was facing and it would start to
link |
00:39:28.960
squirm and cry out. If you hit it too hard, it would start to cry. So it was very impressive
link |
00:39:36.400
in design. And what's the second robot that you said there might have been two that you liked?
link |
00:39:43.040
Yeah, so the Boston Dynamics robots are just impressive feats of engineering.
link |
00:39:49.760
Have you met them in person? Yeah, I recently got a chance to go visit. And I was always one of
link |
00:39:54.800
those people who watched the videos and was like, this is super cool, but also it's a product video.
link |
00:39:59.600
Like, I don't know how many times that they had to shoot this to get it right. But visiting them,
link |
00:40:05.280
you know, I'm pretty sure that I was very impressed. Let's put it that way.
link |
00:40:09.360
Yeah. And in terms of the control, I think that was a transformational moment for me
link |
00:40:15.520
when I met Spotmini in person. Because, okay, maybe this is a psychology experiment,
link |
00:40:23.440
but I anthropomorphized the crap out of it. So I immediately, it was like my best friend.
link |
00:40:30.640
Right? I think it's really hard for anyone to watch Spot move and not feel like it has agency.
link |
00:40:35.760
Yeah, this movement, especially the arm on Spotmini, really obviously looks like a head.
link |
00:40:44.240
Yeah. And they say, no, we didn't mean it that way. But obviously, it looks exactly like that.
link |
00:40:51.440
And so it's almost impossible to not think of it as almost like the baby dinosaur, but slightly
link |
00:40:57.840
larger. And this movement of the... Of course, with the intelligence, their whole idea is that
link |
00:41:04.240
it's not supposed to be intelligent. It's a platform on which you build
link |
00:41:08.480
higher intelligence. It's actually really, really dumb. It's just a basic movement platform.
link |
00:41:13.520
Yeah. But even dumb robots can, like we can immediately respond to them in this visceral way.
link |
00:41:19.920
What are your thoughts about Sophia the robot, this kind of mix of some basic natural language
link |
00:41:26.640
processing and basically an art experiment? Yeah. An art experiment is a good way to characterize it.
link |
00:41:34.560
I'm much less impressed with Sophia than I am with Boston Dynamics.
link |
00:41:37.760
She said she likes you. She said she admires you.
link |
00:41:40.720
Yeah, she followed me on Twitter at some point. Yeah.
link |
00:41:43.680
Yeah. And she tweets about how much she likes you. So.
link |
00:41:46.400
So what does that mean? I have to be nice or?
link |
00:41:48.320
No, I don't know. See, I was emotionally manipulating you.
link |
00:41:51.520
And no. How do you think of the whole thing that happened with Sophia? Quite a large
link |
00:41:59.840
number of people kind of immediately had a connection and thought that maybe we're far
link |
00:42:05.440
more advanced with robotics than we are or actually didn't even think much. I was surprised
link |
00:42:10.080
how little people cared that they kind of assumed that, well, of course, AI can do this.
link |
00:42:18.320
Yeah. And then if they assumed that, I felt they should be more impressed.
link |
00:42:26.960
Well, you know what I mean? People really overestimate where we are. And so when something,
link |
00:42:31.600
I don't even think Sophia was very impressive or is very impressive. I think she's kind of a puppet,
link |
00:42:36.720
to be honest. But yeah, I think people are a little bit influenced by science fiction and
link |
00:42:42.080
pop culture to think that we should be further along than we are.
link |
00:42:45.200
So what's your favorite robots in movies and fiction?
link |
00:42:49.680
WALL-E. WALL-E. What do you like about WALL-E? The humor, the cuteness,
link |
00:42:57.200
the perception and control systems operating on WALL-E that make it all work out.
link |
00:43:03.120
Just in general. The design of WALL-E the robot. I think that animators figured out,
link |
00:43:09.440
you know, starting in like the 1940s how to create characters that don't look real but look
link |
00:43:18.080
like something that's even better than real that we really respond to and think is really cute.
link |
00:43:22.480
They figured out how to make them move and look in the right way.
link |
00:43:26.160
And WALL-E is just such a great example of that.
link |
00:43:28.720
You think eyes, big eyes or big something that's kind of eyish. So it's always playing on some
link |
00:43:34.480
aspect of the human face, right? Often, yeah. So big eyes. Well, I think one of the
link |
00:43:43.200
one of the first like animations to really play with this was Bambi. And they weren't originally
link |
00:43:48.240
going to do that. They were originally trying to make the deer look as lifelike as possible.
link |
00:43:52.400
Like they brought deer into the studio and had a little zoo there so that the animators could
link |
00:43:56.320
work with them. And then at some point they're like, hmm, if we make really big eyes and like a
link |
00:44:01.280
small nose and like big cheeks, kind of more like a baby face, then people like it even
link |
00:44:05.840
better than if it looks real. Do you think the future of things like Alexa in the home
link |
00:44:14.400
has the possibility to take advantage of that, to build on that, to create these systems that are better
link |
00:44:23.120
than real that create a close human connection? I can pretty much guarantee you without having any
link |
00:44:29.440
knowledge that those companies are working on that, on that design behind the scenes.
link |
00:44:36.160
Like, I'm pretty sure. I totally disagree with you. Really? So that's what I'm interested in.
link |
00:44:41.120
I'd like to build such a company. I know a lot of those folks and they're afraid of that
link |
00:44:45.600
because you don't, well, how do you make money off of it? Well, but even just like
link |
00:44:50.880
making Alexa look a little bit more interesting than just like a cylinder would do so much.
link |
00:44:55.760
It's an interesting thought, but I don't think people from Amazon perspective are looking for
link |
00:45:03.440
that kind of connection. They want you to be addicted to the services provided by Alexa,
link |
00:45:09.360
not to the device. So the device itself, it's felt that you can lose a lot because if you create a
link |
00:45:18.480
connection, then it creates more opportunity for frustration, for negative stuff, than it does
link |
00:45:27.760
for positive stuff. That's, I think, the way they think about it. That's interesting. Like,
link |
00:45:32.160
I agree that it's very difficult to get right and you have to get it exactly right. Otherwise,
link |
00:45:37.680
you wind up with Microsoft's Clippy. Okay, easy now. What's your problem with Clippy?
link |
00:45:43.600
You like Clippy? Is Clippy your friend? Yeah, I just talked to,
link |
00:45:49.760
we just had this argument with the Microsoft CTO, and he said he's not bringing
link |
00:45:54.720
Clippy back. They're not bringing Clippy back and that's very disappointing. I think it was,
link |
00:46:00.880
Clippy was the greatest assistant we've ever built. It was a horrible attempt, of course,
link |
00:46:08.960
but it's the best we've ever done because it was a real attempt to have an actual personality.
link |
00:46:16.480
I mean, obviously the technology was way not there at the time, in terms of being able to be a
link |
00:46:23.200
recommender system for assisting you in anything and typing in Word or any kind of other application,
link |
00:46:29.360
but it was still an attempt at personality that was legitimate, that I thought was brave. That's true.
link |
00:46:34.080
Yes. Okay. You know, you've convinced me I'll be slightly less hard on Clippy.
link |
00:46:39.760
And I know I have like an army of people behind me who also miss Clippy, so.
link |
00:46:43.840
Really? I want to meet these people. Who are these people?
link |
00:46:46.720
It's the people who like to hate stuff when it's there and miss it when it's gone.
link |
00:46:53.600
So everyone. Exactly. All right. So Anki and Jibo, the two companies,
link |
00:47:05.680
two amazing companies, social robotics companies that have recently been closed down.
link |
00:47:12.720
Why do you think it's so hard to create a personal robotics company? So making a business
link |
00:47:17.680
out of essentially something that people would anthropomorphize, have a deep connection with,
link |
00:47:24.400
why is it so hard to make it work? Is the business case not there or what is it?
link |
00:47:29.360
I think it's a number of different things. I don't think it's going to be this way forever.
link |
00:47:35.600
I think at this current point in time, it takes so much work to build something that only barely
link |
00:47:43.440
meets people's minimal expectations because of science fiction and pop culture giving people
link |
00:47:49.680
this idea that we should be further than we already are. When people think about a robot
link |
00:47:54.000
assistant in the home, they think about Rosie from the Jetsons or something like that. And
link |
00:48:00.000
Anki and Jibo did such a beautiful job with the design and getting that interaction just right.
link |
00:48:06.320
But I think people just wanted more. They wanted more functionality. I think you're also right
link |
00:48:10.880
that the business case isn't really there because there hasn't been a killer application
link |
00:48:16.960
that's useful enough to get people to adopt the technology in great numbers. I think what we did
link |
00:48:22.800
see from the people who did get Jibo is a lot of them became very emotionally attached to it.
link |
00:48:29.600
But that's not... I mean, it's kind of like the Palm Pilot back in the day. Most people are like,
link |
00:48:34.240
why do I need this? Why would I? They don't see how they would benefit from it until
link |
00:48:37.760
they have it or some other company comes in and makes it a little better.
link |
00:48:43.520
Yeah. How far away are we? Do you think? How hard is this problem?
link |
00:48:48.160
It's a good question. And I think it has a lot to do with people's expectations.
link |
00:48:51.520
And those keep shifting depending on what science fiction is popular.
link |
00:48:56.160
But also, it's two things. It's people's expectation and people's need for an emotional
link |
00:49:01.920
connection. And I believe the need is pretty high. Yes. But I don't think we're aware of it.
link |
00:49:10.080
That's right. I really think this is life as we know it. So we've just kind of gotten used
link |
00:49:17.680
to it. I hate to be dark because I have close friends. But we've gotten used to really never
link |
00:49:26.400
being close to anyone. And we're deeply, I believe, okay, this is a hypothesis,
link |
00:49:33.440
I think we're deeply lonely, all of us, even those in deep fulfilling relationships.
link |
00:49:37.680
In fact, what makes those relationships fulfilling, I think, is that they at least
link |
00:49:41.840
tap into that deep loneliness a little bit. But I feel like there's more opportunity
link |
00:49:47.360
to explore that, that doesn't interfere with the human relationships you have.
link |
00:49:52.000
It expands more on the, yeah, the rich, deep, unexplored complexity that's all of us,
link |
00:50:00.080
weird apes. Okay. I think you're right. Do you think it's possible to fall in love with a robot?
link |
00:50:06.080
Oh, yeah, totally. Do you think it's possible to have a long term committed
link |
00:50:12.400
monogamous relationship with a robot? Well, yeah, there are lots of different types of
link |
00:50:17.040
long term committed monogamous relationships. I think monogamous implies, like,
link |
00:50:22.720
you're not going to see other humans sexually or like you basically on Facebook have to say,
link |
00:50:29.520
I'm in a relationship with this person, this robot. I just don't, like, again, I think this
link |
00:50:34.960
is comparing robots to humans. When I would rather compare them to pets, like you get a robot,
link |
00:50:40.640
but it fulfills, you know, this loneliness that you have in a, maybe not the same way as a pet,
link |
00:50:48.560
maybe in a different way that is even, you know, supplemental in a different way. But,
link |
00:50:54.320
you know, I'm not saying that people won't like do this, be like, Oh, I want to marry my robot,
link |
00:50:59.360
or I want to have like a, you know, sexual relation monogamous relationship with my robot.
link |
00:51:05.360
But I don't think that that's the main use case for them.
link |
00:51:08.400
But you think that there's still a gap between human and pet.
link |
00:51:17.360
So between husband and pet, there's a different relationship. It's an engineering problem,
link |
00:51:24.480
so is that a gap that can be closed through engineering? I think it could be closed someday. But why would
link |
00:51:30.400
we close that? Like, I think it's so boring to think about recreating things that we already
link |
00:51:34.880
have when we could, when we could create something that's different. I know you're thinking about
link |
00:51:42.880
the people who like don't have a husband and like, what could we give them?
link |
00:51:47.840
Yeah, but, but let's, I guess what I'm getting at is maybe not. So like the movie, Her.
link |
00:51:56.320
Yeah. Right. So a better husband.
link |
00:52:00.240
Well, maybe better in some ways. Like it's, I do think that robots are going to continue to be
link |
00:52:06.320
a different type of relationship, even if we get them like very human looking, or when, you know,
link |
00:52:12.560
the voice interactions we have with them feel very like natural and human like, I think
link |
00:52:17.840
there's still going to be differences. And there were in that movie too, like towards the end,
link |
00:52:21.760
it kind of goes off the rails. But it's just a movie. So your intuition is that that,
link |
00:52:27.040
because, because you kind of said two things, right? So one is, why would you want
link |
00:52:34.800
to basically replicate the husband? Yeah. Right. And the other is kind of implying that
link |
00:52:41.920
it's kind of hard to do. So like anytime you try, you might build something very impressive,
link |
00:52:48.480
but it'll be different. I guess my question is about human nature. It's like,
link |
00:52:54.480
how hard is it to satisfy that role of the husband? So setting any of the sexual stuff
link |
00:53:02.640
aside, it's more like the mystery, the tension, the dance of relationships.
link |
00:53:09.600
Do you think with robots that's difficult to build? What's your intuition about it?
link |
00:53:13.440
I think that, well, it also depends on whether we're talking about robots now, in 50 years,
link |
00:53:21.520
or in like an indefinite amount of time. I'm thinking like five or 10 years.
link |
00:53:26.240
Five or 10 years. I think that robots at best will be like,
link |
00:53:31.200
it's more similar to the relationship we have with our pets than the relationship that we have with
link |
00:53:35.040
other people. I got it. So what do you think it takes to build a system that exhibits greater
link |
00:53:42.400
and greater levels of intelligence? Like it impresses us with this intelligence. You know,
link |
00:53:47.200
a Roomba, you talked about anthropomorphization, and for that, I think intelligence is not
link |
00:53:53.360
required. In fact, intelligence probably gets in the way sometimes, like you mentioned.
link |
00:53:57.920
But what do you think it takes to create a system where we sense that it has a human level
link |
00:54:06.400
intelligence? So something that, probably something conversational, human level intelligence.
link |
00:54:12.000
How hard do you think that problem is? It'd be interesting to hear your perspective, not just
link |
00:54:18.080
purely, I talked to a lot of people, how hard are conversational agents? How hard is it
link |
00:54:24.240
to pass a Turing test? But my sense is it's easier than solving the
link |
00:54:33.680
pure natural language processing problem, because I feel like you can cheat.
link |
00:54:37.920
Yeah. So how hard is it to pass a Turing test in your view?
link |
00:54:43.600
Well, I think, again, it's all about expectation management. If you set up people's expectations
link |
00:54:49.520
to think that they're communicating with, what was it, a 13 year old boy from the Ukraine?
link |
00:54:54.240
Yeah, that's right. Then they're not going to expect perfect English. They're not going to
link |
00:54:58.160
expect perfect understanding of concepts or even like being on the same wavelength in terms of
link |
00:55:04.000
like conversation flow. So it's much easier to pass in that case.
link |
00:55:09.840
Do you think, you kind of alluded to this with audio, do you think it needs to have a body?
link |
00:55:18.160
I think that we definitely have, so we treat physical things with more social agency,
link |
00:55:24.800
because we're very physical creatures. I think a body can be useful.
link |
00:55:28.240
Does it get in the way? Are there negative aspects?
link |
00:55:36.960
Yeah, there can be. So if you're trying to create a body that's too similar to something that people
link |
00:55:41.760
are familiar with, like I have this robot cat at home that Hasbro makes. And it's very disturbing
link |
00:55:48.000
to watch because I'm constantly assuming that it's going to move like a real cat and it doesn't,
link |
00:55:53.040
because it's like a $100 piece of technology. So it's very disappointing and it's very hard to
link |
00:56:01.520
treat it like it's alive. So you can get a lot wrong with the body too, but you can also use
link |
00:56:07.040
tricks same as the expectation management of the 13 year old boy from the Ukraine. If you
link |
00:56:12.160
pick an animal that people aren't intimately familiar with, like the baby dinosaur, like the
link |
00:56:16.560
baby seal that people have never actually held in their arms, you can get away with much more
link |
00:56:21.360
because they don't have these preformed expectations. Yeah, I remember you were saying at a TED Talk
link |
00:56:26.320
or something, and it clicked for me that nobody actually knows what a dinosaur looks like.
link |
00:56:32.880
So you can actually get away with a lot more. That was great. So what do you think about
link |
00:56:41.280
consciousness and mortality being displayed in a robot? So not actually having consciousness,
link |
00:56:54.400
but having these kind of human elements that are much more than just the interaction, much more
link |
00:57:00.720
than just, like you mentioned, with a dinosaur moving in kind of interesting ways, but really
link |
00:57:06.800
being worried about its own death and really acting as if it's aware and self aware and identity.
link |
00:57:15.440
Have you seen that done in robotics? What do you think about doing that? Is that a powerful good
link |
00:57:23.440
thing? Well, I think it can be a design tool that you can use for different purposes. So I
link |
00:57:29.280
can't say whether it's inherently good or bad, but I do think it can be a powerful tool. The fact
link |
00:57:35.120
that the Pleo mimics distress when you, quote unquote, hurt it is a really powerful tool to
link |
00:57:46.080
get people to engage with it in a certain way. I had a research partner that I did some of the
link |
00:57:51.440
empathy work with named Palash Nandy, and he had built a robot for himself that had a lifespan
link |
00:57:57.200
and that would stop working after a certain amount of time just because he was interested in whether
link |
00:58:02.160
he himself would treat it differently. And we know from Tamagotchis, those little games that
link |
00:58:09.920
we used to have that were extremely primitive, that people respond to this idea of mortality
link |
00:58:15.600
and you can get people to do a lot with little design tricks like that. Now, whether it's a
link |
00:58:21.280
good thing depends on what you're trying to get them to do. Have a deeper relationship. Have a
link |
00:58:27.120
deeper connection, have a relationship. If it's for their own benefit, that sounds great. Okay.
link |
00:58:33.840
You can do that for a lot of other reasons. I see. So what kind of stuff are you worried about?
link |
00:58:38.720
So is it mostly about manipulation of your emotions for like advertisements and so on,
link |
00:58:43.120
things like that? Yeah, or data collection or, I mean, you could think of governments misusing
link |
00:58:48.160
this to extract information from people. It's, you know, just like any other technological tool,
link |
00:58:56.400
just raises a lot of questions. What's, if you look at Facebook, if you look at Twitter and
link |
00:59:01.920
social networks, there's a lot of concern about data collection now. From a legal perspective, or
link |
00:59:10.000
in general, how do we prevent sort of these companies from crossing a line? It's
link |
00:59:19.120
a gray area, but crossing a line they shouldn't, in terms of manipulating, like we're talking about
link |
00:59:24.400
manipulating our emotion, manipulating our behavior using tactics that are not so savory.
link |
00:59:32.080
Yeah, it's really difficult because we are starting to create technology that relies on data
link |
00:59:39.360
collection to provide functionality. And there's not a lot of incentive, even on the consumer side
link |
00:59:46.000
to curb that because the other problem is that the harms aren't tangible. They're not really
link |
00:59:51.760
apparent to a lot of people because they kind of trickle down on a societal level and then
link |
00:59:56.400
suddenly we're living in 1984, which sounds extreme, but that book was very prescient. And
link |
01:00:05.360
I'm not worried about these systems. I have Amazon's Echo at home and tell Alexa all sorts of stuff
link |
01:00:16.880
and it helps me because Alexa knows what brand of diaper we use and so I can just easily order it
link |
01:00:25.040
again. So I don't have any incentive to ask a lawmaker to curb that. But when I think about
link |
01:00:30.320
that data then being used against low income people to target them for scammy loans or education
link |
01:00:38.160
programs, that's then a societal effect that I think is very severe and legislators should be
link |
01:00:46.160
thinking about. But yeah, the gray area is removing ourselves from explicitly
link |
01:00:56.880
defining objectives and instead saying, well, we want to maximize engagement in our social network.
link |
01:01:03.680
Yeah. And then just because you're not actually doing a bad thing, it makes sense. You want
link |
01:01:10.480
people to keep a conversation going, to have more conversations, to keep coming back again and again
link |
01:01:17.360
to have conversations. And whatever happens after that, you're kind of not exactly directly responsible.
link |
01:01:25.440
You're only indirectly responsible. So I think it's a really hard problem. Are you
link |
01:01:32.400
optimistic about us ever being able to solve it? You mean the problem of capitalism? Because the
link |
01:01:41.040
problem is that the companies are acting in the company's interests and not in people's interest
link |
01:01:46.800
and when those interests are aligned, that's great. But the completely free market doesn't seem to work
link |
01:01:53.200
because of this information asymmetry. But it's hard to know how to... So say you were trying to do
link |
01:01:58.800
the right thing. I guess what I'm trying to say is it's not obvious for these companies what the
link |
01:02:05.680
good thing for society is to do. I don't think they sit there with a glass of wine and a cat,
link |
01:02:14.880
like petting a cat, evil cat. And there's two decisions and one of them is good for society,
link |
01:02:21.120
one is good for the profit and they choose the profit. I think actually there's a lot of money
link |
01:02:27.200
to be made by doing the right thing for society. Because Google, Facebook have so much cash that
link |
01:02:36.720
they actually, especially Facebook, would significantly benefit from making decisions
link |
01:02:40.800
that are good for society. It's good for their brand. But I don't know if they know what's good
link |
01:02:46.640
for society. I don't think we know what's good for society in terms of how we manage the
link |
01:02:56.800
conversation on Twitter or how we design... We're talking about robots. Should we emotionally
link |
01:03:07.200
manipulate you into having a deep connection with Alexa or not? Yeah. Do you have optimism
link |
01:03:15.280
that we'll be able to solve some of these questions? Well, I'm going to say something
link |
01:03:19.920
that's controversial in my circles, which is that I don't think that companies who are reaching out
link |
01:03:26.400
to ethicists and trying to create interdisciplinary ethics boards, I don't think that that's
link |
01:03:30.720
totally just trying to whitewash the problem so that they look like they've done something.
link |
01:03:35.600
I think that a lot of companies actually do, like you say, care about what the right answer is.
link |
01:03:41.440
They don't know what that is, and they're trying to find people to help them find it.
link |
01:03:45.680
Not in every case, but I think it's much too easy to just vilify the companies
link |
01:03:50.400
as, like you said, sitting there with their cat going, one million dollars. That's not what happens.
link |
01:03:57.040
A lot of people are well meaning even within companies. I think that what we do absolutely need
link |
01:04:05.920
is more interdisciplinarity both within companies, but also within the policymaking space because
link |
01:04:14.640
we've hurtled into the world where technological progress is much faster. It seems much faster
link |
01:04:23.200
than it was and things are getting very complex. You need people who understand the technology,
link |
01:04:28.000
but also people who understand what the societal implications are and people who are thinking
link |
01:04:33.520
about this in a more systematic way to be talking to each other. There's no other solution, I think.
link |
01:04:39.280
You've also done work on intellectual property. If you look at the algorithms that these companies
link |
01:04:45.840
are using, like YouTube, Twitter, Facebook, and so on, those are mostly secret.
link |
01:04:54.000
The recommender systems behind these algorithms. Do you think about IP and the transparency
link |
01:05:00.320
about algorithms like this? Is it the responsibility of these companies to open source the algorithms
link |
01:05:07.360
or at least reveal to the public how these algorithms work?
link |
01:05:13.200
I personally don't work on that. There are a lot of people who do though, and there are a lot of
link |
01:05:16.960
people calling for transparency. In fact, Europe's even trying to legislate transparency. Maybe they
link |
01:05:22.480
even have at this point where if an algorithmic system makes some sort of decision that affects
link |
01:05:29.280
someone's life, you need to be able to see how that decision was made. It's a tricky balance
link |
01:05:39.360
because, obviously, companies need to have some sort of competitive advantage and you can't take
link |
01:05:43.840
all of that away or you stifle innovation. For some of the ways that these systems are already
link |
01:05:49.760
being used, I think it is pretty important that people understand how they work.
link |
01:05:54.080
What are your thoughts in general on intellectual property in this weird age of software, AI,
link |
01:06:00.480
robotics? That it's broken. I mean, the system is just broken.
link |
01:06:06.560
Can you describe? Actually, I don't even know what intellectual property is in the space of
link |
01:06:13.120
software. I believe I have a patent on a piece of software from my PhD.
link |
01:06:21.280
You believe? You don't know? No, we went through a whole process. Yeah, I do.
link |
01:06:26.240
You get the spam emails like, we'll frame your patent for you.
link |
01:06:30.080
Yeah, it's much like a thesis. That's useless, right? Or not? Where does IP stand in this age?
link |
01:06:41.280
What's the right way to do it? What's the right way to protect and own ideas when it's just code
link |
01:06:47.520
and this mishmash of something that feels much softer than a piece of machinery or any idea?
link |
01:06:54.880
I mean, it's hard because there are different types of intellectual property and they're
link |
01:06:59.200
kind of these blunt instruments. It's like patent law is like a wrench. It works really well for an
link |
01:07:05.200
industry like the pharmaceutical industry, but when you try and apply it to something else,
link |
01:07:09.040
it's like, I don't know, I'll just hit this thing with a wrench and hope it works.
link |
01:07:12.880
So for software, you have a couple of different options.
link |
01:07:18.240
Software like any code that's written down in some tangible form is automatically copyrighted.
link |
01:07:25.760
So you have that protection, but that doesn't do much because if someone takes the basic idea that
link |
01:07:31.760
the code is executing and just does it in a slightly different way, they can get around
link |
01:07:38.240
the copyright. So that's not a lot of protection. Then you can patent software, but that's kind of,
link |
01:07:43.840
I mean, getting a patent costs, I don't know if you remember what yours cost or was it through
link |
01:07:50.400
an institution? Yeah, it was through a university. It was insane. There were so many lawyers, so many
link |
01:07:56.160
meetings. It made me feel like it must have been hundreds of thousands of dollars. It must have
link |
01:08:01.600
been something crazy. It's insane the cost of getting a patent. And so this idea of protecting
link |
01:08:07.360
the inventor who came up with a great idea in their own garage, that's a thing of the past.
link |
01:08:12.960
It's all just companies trying to protect things and it costs a lot of money. And then with code,
link |
01:08:19.680
it's oftentimes, by the time the patent is issued, which can take like five years,
link |
01:08:25.120
probably your code is obsolete at that point. So it's a very, again, a very blunt instrument
link |
01:08:31.040
that doesn't work well for that industry. And so at this point, we should really
link |
01:08:36.560
have something better, but we don't. Do you like open source? Yeah. Is open source good for
link |
01:08:40.800
society? Do you think all of us should open source code? Well, so at the Media Lab at MIT,
link |
01:08:48.320
we have an open source default because what we've noticed is that people will come in, they'll write
link |
01:08:53.680
some code and they'll be like, how do I protect this? And we're like, that's not your problem
link |
01:08:58.400
right now. Your problem isn't that someone's going to steal your project. Your problem is
link |
01:09:01.440
getting people to use it at all. There's so much stuff out there. We don't even know if
link |
01:09:06.160
you're going to get traction for your work. And so open sourcing can sometimes help get
link |
01:09:11.600
people's work out there, but ensure that they get attribution for the work that they've done.
link |
01:09:16.880
So I'm a fan of it in a lot of contexts. Obviously, it's not like a one size fits all solution.
link |
01:09:23.760
So what I gleaned from your Twitter is you're a mom. I saw a quote, a reference to Babybot.
link |
01:09:32.560
What have you learned about robotics and AI from raising a human baby bot?
link |
01:09:42.640
Well, I think that my child has just made it more apparent to me that the systems we're currently
link |
01:09:48.560
creating aren't like human intelligence. There's not a lot to compare there. He has learned and
link |
01:09:56.240
developed in such a different way than a lot of the AI systems we're creating that that's not really
link |
01:10:02.640
interesting to me to compare. But what is interesting to me is how these systems are going to shape
link |
01:10:10.240
the world that he grows up in. And so I'm even more concerned about the societal effects of
link |
01:10:16.720
developing systems that rely on massive amounts of data collection, for example.
link |
01:10:22.320
So is he going to be allowed to use like Facebook? Facebook is over. Kids don't use that anymore.
link |
01:10:31.440
Snapchat? What do they use, Instagram? I don't know. I just heard that TikTok is over,
link |
01:10:36.720
which I've never even seen. So I don't know. We're old. We don't know.
link |
01:10:42.640
I'm going to start gaming and streaming my gameplay. So what do you see as the future of
link |
01:10:48.960
personal robotics, social robotics, interaction with our robots? Like, what are you excited about
link |
01:10:56.000
if you were to sort of philosophize about what might happen the next five, 10 years?
link |
01:11:00.960
That would be cool to see. Oh, I really hope that we get kind of a home robot that makes it.
link |
01:11:07.200
That's a social robot and not just Alexa. Like, it's, you know, I really love the Anki products.
link |
01:11:14.720
I thought Jibo had some really great aspects. So I'm hoping that a company cracks that.
link |
01:11:21.520
Me too. So, Kate, it was wonderful talking to you today. Likewise. Thank you so much. It was fun.
link |
01:11:29.440
Thanks for listening to this conversation with Kate Darling. And thank you to our sponsors,
link |
01:11:33.840
ExpressVPN and Masterclass. Please consider supporting the podcast by signing up to Masterclass
link |
01:11:40.240
at Masterclass.com slash Lex and getting ExpressVPN at expressvpn.com slash Lex pod.
link |
01:11:48.800
If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts,
link |
01:11:53.920
support on Patreon, or simply connect with me on Twitter at Lex Friedman.
link |
01:11:59.280
And now let me leave you with some tweets from Kate Darling. First tweet is the pandemic has
link |
01:12:06.560
fundamentally changed who I am. I now drink the leftover milk in the bottom of the cereal bowl.
link |
01:12:14.240
Second tweet is I came on here to complain that I had a really bad day and saw that a
link |
01:12:19.680
bunch of you are hurting too. Love to everyone. Thank you for listening and hope to see you next time.