Kate Darling: Social Robotics | Lex Fridman Podcast #98
The following is a conversation with Kate Darling, a researcher at MIT interested in social robotics, robot ethics, and generally how technology intersects with society. She explores the emotional connection between human beings and lifelike machines, which for me is one of the most exciting topics in all of artificial intelligence. As she writes in her bio, she is a caretaker of several domestic robots, including her Pleo dinosaur robots named Yochai, Peter, and Mr. Spaghetti. She is one of the funniest and brightest minds I've ever had the fortune to talk to. This conversation was recorded recently, but before the outbreak of the pandemic. For everyone feeling the burden of this crisis, I'm sending love your way.
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at lexfridman, spelled F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. Quick summary of the ads: two sponsors, Masterclass and ExpressVPN. Please consider supporting the podcast by signing up to Masterclass at masterclass.com/lex and getting ExpressVPN at expressvpn.com/lexpod.

This show is sponsored by Masterclass. Sign up at masterclass.com/lex to get a discount and to support this podcast. When I first heard about Masterclass, I thought it was too good to be true. For $180 a year, you get an all-access pass to watch courses from, to list some of my favorites: Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims (love those games), on game design, Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com/lex to get a discount and to support this podcast.
This show is sponsored by ExpressVPN. Get it at expressvpn.com/lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I love it. It's easy to use: press the big power-on button and your privacy is protected. And, if you like, you can make it look like your location is anywhere else in the world. I might be in Boston now, but I can make it look like I'm in New York, London, Paris, or anywhere else. This has a large number of obvious benefits. Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I use it on Linux (shout out to Ubuntu 20.04), Windows, and Android, but it's available everywhere else too. Once again, get it at expressvpn.com/lexpod to get a discount and to support this podcast.

And now, here's my conversation with Kate Darling.
You co-taught robot ethics at Harvard. What are some ethical issues that arise in the world with robots?
Yeah, that was a reading group that I did when I, like, at the very beginning, first became interested in this topic. So I think if I taught that class today, it would look very, very different. Robot ethics, it sounds very science fictiony, especially it did back then, but I think that some of the issues that people in robot ethics are concerned with are just around the ethical use of robotic technology in general. So, for example, responsibility for harm, automated weapon systems, things like privacy and data security, things like, you know, automation and labor markets. And then personally, I'm really interested in some of the social issues that come out of our social relationships with robots.

One-on-one relationships with robots.
I think most of the stuff we have to talk about is, like, one-on-one social stuff. That's what I love. I think that's what you love as well and are expert in. But at a societal level, there's, like, there's a presidential candidate now, Andrew Yang, running, concerned about automation and robots and AI in general taking away jobs. He has a proposal of UBI, universal basic income, of everybody gets 1,000 bucks as a way to sort of save you if you lose your job from automation, to allow you time to discover what it is that you would like to do or...
Yes. So I lived in Switzerland for 20 years, and universal basic income has been more of a topic there, separate from the whole robots-and-jobs issue. So it's so interesting to me to see kind of these Silicon Valley people latch onto this concept that came from a very kind of left-wing, socialist, kind of a different place in Europe. But on the automation and labor markets topic, I think that it's very, so sometimes in those conversations, I think people overestimate where robotic technology is right now. And we also have this fallacy of constantly comparing robots to humans and thinking of this as a one-to-one replacement of jobs. So even, like, Bill Gates a few years ago said something about, maybe we should have a system that taxes robots for taking people's jobs. And it just, I mean, I'm sure that was taken out of context, he's a really smart guy, but that sounds to me like kind of viewing it as a one-to-one replacement, versus viewing this technology as kind of a supplemental tool that, of course, is going to shake up a lot of stuff. It's going to change the job landscape, but I don't see, you know, robots taking all the jobs in the next 20 years. That's just not how it's going to work.
Right. So maybe drifting into the land of more personal relationships with robots and interaction and so on, I've got to warn you, I may ask some silly philosophical questions.

Okay.

Do you think humans will abuse robots in their interactions? So you've had a lot of, and we'll talk about it, sort of, anthropomorphization and this intricate dance, emotional dance, between human and robot. But there seems to be also a darker side, where people, when they treat the other as servants especially, can be a little bit abusive, or a lot abusive. Do you think about that? Do you worry about that?
Yeah, I do think about that. So, I mean, one of my main interests is the fact that people subconsciously treat robots like living things, even though they know that they're interacting with a machine, and what it means in that context to behave violently. I don't know if you could say abuse, because you're not actually abusing the inner mind of the robot. The robot doesn't have...

As far as you know.
Well, yeah. It also depends on how we define feelings and consciousness. But I think that's another area where people kind of overestimate where we currently are with the technology. Robots are not even as smart as insects right now, and so I'm not worried about abuse in that sense. But it is interesting to think about what people's behavior towards these things means for our own behavior. Is it desensitizing to people to be verbally abusive to a robot, or even physically abusive? And we don't know.
Right. It's a similar connection from, like, if you play violent video games, what connection does that have to desensitization to violence? I haven't read the literature on that. I wonder about that, because from everything I've heard, people don't seem to any longer be so worried about violent video games.
Correct. The research on it is, it's a difficult thing to research, so it's sort of inconclusive. But we seem to have gotten the sense, at least as a society, that people can compartmentalize: when it's something on a screen and you're shooting a bunch of characters or running over people with your car, that doesn't necessarily translate to you doing that in real life. We do, however, have some concerns about children playing violent video games, and so we do restrict it there. I'm not sure that's based on any real evidence either, but it's just the way that we've kind of decided we want to be a little more cautious there. And the reason I think robots are a little bit different is because there is a lot of research showing that we respond differently to something in our physical space than something on a screen. We will treat it much more viscerally, much more like a physical actor. And so it's totally possible that this is not a problem, and it's the same thing as violence in video games: maybe restrict it with kids to be safe, but adults can do what they want. But we just need to ask the question again, because we don't have any evidence at all yet. Maybe there's an intermediate place too.

I did my research on Twitter. By research, I mean scrolling through your Twitter feed. You mentioned that you were going at some point to an animal law conference, so I have to ask: do you think there's something that we can learn from animal rights that guides our thinking about robots?
Oh, I think there is so much to learn from that. I'm actually writing a book on it right now; that's why I'm going to this conference. So I'm writing a book that looks at the history of animal domestication and how we've used animals for work, for weaponry, for companionship. And one of the things the book tries to do is move away from this fallacy that I talked about of comparing robots and humans, because I don't think that's the right analogy. But I do think that on a social level, even on a social level, there's so much that we can learn from looking at that history, because throughout history, we've treated most animals like tools, like products, and then some of them we've treated differently. And we're starting to see people treat robots in really similar ways. So I think it's a really helpful predictor of how we're going to interact with the robots.

Do you think we'll look back at this time, like 100 years from now, and see what we do to animals as similar to the way we view, like, the Holocaust in World War II?
That's a great question. I mean, I hope so. I am not convinced that we will. But I often wonder, you know, what are my grandkids going to view as abhorrent that my generation did, that they would never do? And I'm like, well, what's the big deal? You know, it's a fun question to ask yourself.

It always seems that there are atrocities that we discover later. So the things that at the time people didn't see as... you know, you look at everything from slavery, to any kinds of abuse throughout history, to the kind of insane wars that were happening, to the way war was carried out, and rape and the kind of violence that was happening during war, that we now see as atrocities but at the time perhaps didn't as much. And so now I have this intuition, I have this worry, maybe you're going to criticize me, but I do anthropomorphize robots. I don't see a fundamental philosophical difference between a robot and a human being, in terms of, once the capabilities are matched. So the fact that we're really far away doesn't matter, in terms of capabilities, everything from natural language processing, understanding and generation, to just reasoning and all that stuff. I think once you solve it, I see, though, this is a very gray area, and I don't feel comfortable with the kind of abuse that people throw at robots. Subtle, but I can see it becoming, I can see basically a civil rights movement for robots in the future. Do you think, let me put it in the form of a question, do you think robots
should have some kinds of rights?

Well, it's interesting, because I came at this originally from your perspective. I was like, you know what, there's no fundamental difference between technology and, like, human consciousness. Like, we can probably recreate anything; we just don't know how yet. And so there's no reason not to give machines the same rights that we have once, like you say, they're kind of on an equivalent level. But I realized that that is kind of a far-future question. I still think we should talk about it, because I think it's really interesting. But I realized that we might need to ask the robot rights question even sooner than that, while the machines are still, quote unquote, really dumb and not on our level, because of the way that we perceive them. And I think one of the lessons we learn from looking at the history of animal rights, and one of the reasons we may not get to a place in a hundred years where we view it as wrong to, you know, eat or otherwise use animals for our own purposes, is because historically we've always protected those things that we relate to the most. So one example is whales. No one gave a shit about the whales. Am I allowed to swear?

Yeah, freedom.

No one gave a shit about the whales until someone recorded them singing. And suddenly people were like, oh, this is a beautiful creature, and now we need to save the whales. And that started the whole Save the Whales movement in the '70s. So as much as I, and I think a lot of people, want to believe that we care about consistent biological criteria, that's not historically how we've formed our alliances.

Yeah. So why do we believe that all humans are created
equal? Killing of a human being, no matter who the human being is, that's what I meant by equality, is bad. And then, because I'm connecting that to robots, I'm wondering whether mortality, so the killing act, is what makes something, that's the fundamental first right. So I am currently allowed to take a shotgun and shoot a Roomba. I think, I'm not sure, but I'm pretty sure it's not considered murder, right? Or even shutting them off. So that's where the line appears to be, right? Is mortality a critical thing here?

I think here again, like, the animal analogy is
really useful, because you're also allowed to shoot your dog, but people won't be happy about it. So we do give animals certain protections, like you're not allowed to torture your dog and set it on fire, at least in most states and countries, but you're still allowed to treat it like a piece of property in a lot of other ways. And so we draw these arbitrary lines all the time. And, you know, there's a lot of philosophical thought on why viewing humans as something unique is just speciesism, and not, you know, based on any criteria that would actually justify making a difference between us and other species.

Do you think, in general, most people are good? Or do you think there's evil and good in all of us that's revealed through our circumstances and through our interactions?

I like to view myself as a person who believes that there's no absolute evil and good and that everything is, you know, gray. But I do think it's an interesting question. Like, when I see people being violent towards robotic objects, you said that bothers you because the robots might someday, you know, be smart. Is that why?
Well, it bothers me because it reveals... So I personally believe, so I'm Jewish, I studied the Holocaust and World War II exceptionally well, I personally believe that most of us have evil in us. What bothers me is that the abuse of robots reveals the evil in human beings. And I think it doesn't just bother me; I think it's an opportunity for roboticists to help people find the better sides, the angels of their nature, right? That abuse isn't just a fun side thing. That's you revealing a dark part that should be hidden deep inside.

Yeah. I mean, you laugh, but some of our research does indicate
that maybe people's behavior towards robots reveals something about their tendencies for empathy generally, even using the very simple robots that we have today that clearly don't feel anything. So, you know, Westworld is maybe, you know, not so far off, and it's, like, you know, depicting the bad characters as willing to go around and shoot and rape the robots, and the good characters as not wanting to do that, even without assuming that the robots have consciousness.

So there's an opportunity, it's interesting, there's an opportunity to almost practice empathy. Robots are an opportunity to practice empathy.
I agree with you. Some people would say, why are we practicing empathy on robots instead of, you know, on our fellow humans or on animals that are actually alive and experience the world? And I don't agree with them, because I don't think empathy is a zero-sum game, and I do think that it's a muscle that you can train and that we should be doing that. But some people disagree.

So the interesting thing, you've heard, you know, of raising kids sort of asking them, or telling them, to be nice to the smart speakers, to Alexa and so on, saying please and so on during the requests. I don't know if, I'm a huge fan of that idea, because, yeah, that's towards the idea of practicing empathy. I feel like politeness, I'm always polite to all the systems that we build, especially anything that's speech-interaction based. Like when we talk to the car, I'll always... I have a pretty good detector for please. I feel like there should be room for encouraging empathy in those interactions.

Yeah. Okay. So I agree with you, so I'm going to play devil's advocate.

Sure.
So what is the devil's advocate argument there?

The devil's advocate argument is that if you are the type of person who has abusive tendencies, or needs to get some sort of, like, behavior like that out, needs an outlet for it, it's great to have a robot that you can scream at so that you're not screaming at a person. And we just don't know whether that's true, whether it's an outlet for people, or whether it just kind of, as my friend once said, trains their cruelty muscles and makes them more cruel in other situations.

Oh boy. Yeah. And that expands to other topics, which, I don't know, you know, there's a topic of sex, which is a weird one that I tend to avoid from a robotics perspective, and most of the general public doesn't; they talk about sex robots and so on. Is that an area you've touched at all, research-wise? Because that's what people imagine: sort of, any kind of interaction between human and robot that shows any kind of compassion, they immediately think of, from a product perspective in the near term, as sort of an expansion of what pornography is and all that kind of stuff. Do researchers touch this?

Well, that's kind of you to, like, characterize
it as though there's thinking rationally about product. I feel like sex robots are just such a, like, titillating news hook for people that they become, like, the story. And it's really hard to not get fatigued by it when you're in the space, because you tell someone you do human-robot interaction, and of course the first thing they want to talk about is sex robots. Yeah, it happens a lot. And it's unfortunate that I'm so fatigued by it, because I do think that there are some interesting questions that become salient when you talk about, you know, sex with robots.
See, what I think will happen when people get sex robots, like, if it's some guys, okay, guys get female sex robots, what I think there's an opportunity for is that they'll actually interact. What I'm trying to say is that what's outside of the sex would be the most fulfilling part, the interaction. It's like the folks who, there's movies on this, right, who pay a prostitute and then end up just talking to her the whole time. So I feel like there's an opportunity. It's like most guys, and people in general, joke about the sex act, but really people are just lonely inside and they're looking for connection, many of them. And it'd be unfortunate if that connection is established through the sex industry. I feel like it should go in through the front door of, like, people are lonely and they want a connection.
Well, I also feel like we should kind of, you know, destigmatize the sex industry, because, you know, even prostitution, like, there are prostitutes that specialize in disabled people who don't have the same kinds of opportunities to explore their sexuality. So I feel like we should destigmatize all of that generally. But yeah, that connection and that loneliness is an interesting topic that you bring up, because while people are constantly worried about robots replacing humans, and, oh, if people get sex robots and the sex is really good, then they won't want their, you know, partner or whatever, we rarely talk about robots actually filling a hole where there's nothing, and what benefit that can provide to people.

Yeah, I think that's exciting. There's a whole giant, there's a giant hole that's unfillable by humans. It's asking too much of people, your friends and people you're in a relationship with and your family, to fill that hole. Because, you know, it's exploring the full complexity and richness of who you are. Like, who are you really? People, your family, don't have enough patience to really sit there and listen to who you really are. And I feel like there's an opportunity to really make that connection with robots.

I just feel like we're complex as humans, and we're capable of lots of different types of relationships. So whether that's, you know, with family members, with friends, with our pets, or with robots, I feel like there's space for all of that, and all of that can provide value in a different way.
Yeah, absolutely. So, I'm jumping around. Currently most of my work is in autonomous vehicles, and the most popular topic among the general public is the trolley problem. Most roboticists kind of hate this question, but what do you think of this thought experiment? What do you think we can learn from it, outside of the silliness of the actual application of it to the autonomous vehicle? I think it's still an interesting ethical question, and that in itself, just like much of the interaction with robots, has something to teach us. But from your perspective, do you think there's anything there?
Well, I think you're right that it does have something to teach us. But I think what people are forgetting in all of these conversations is the origins of the trolley problem and what it was meant to show us, which is that there is no right answer, and that sometimes our moral intuition, that comes to us instinctively, is not actually what we should follow if we care about creating systematic rules that apply to everyone. So I think that as a philosophical concept, it could teach us at least that, but that's not how people are using it right now. These are friends of mine, and I love them dearly, and their project adds a lot of value, but that's if we're viewing the Moral Machine project as what we can learn from the trolley problems. The Moral Machine is, I'm sure you're familiar, this website that you can go to, and it gives you different scenarios, like, oh, you're in a car, you can decide to run over these two people or this child. What do you choose? Do you choose the homeless person? Do you choose the person who's jaywalking? And so it pits these moral choices against each other and then tries to crowdsource the quote-unquote correct answer, which is really interesting and, I think, valuable data. But I don't think that's what we should base our rules in autonomous vehicles on, because it is exactly what the trolley problem is trying to show, which is that your first instinct might not be the correct one if you look at rules that then have to apply to everyone and everything.
So how do we encode these ethical choices in interactions with robots? For example, with autonomous vehicles there is a serious ethical question of, do I protect myself? Does my life have higher priority than the life of another human being? Because that changes certain control decisions that you make. So if your life matters more than other human beings', then you'd be more likely to swerve out of your current lane. Currently, automated emergency braking systems just brake; they don't ever swerve, because swerving into oncoming traffic, or no, just into a different lane, can cause significant harm to others, but it's possible that it causes less harm to you. So that's a difficult ethical question. Do you have a hope that, the trolley problem is not supposed to have a right answer, right? Do you hope that when we have robots at the table, we'll be able to discover the right answer for some of these questions?

Well, what's happening right now, I think, is this question that we're facing of
what ethical rules should we be programming into the machines is revealing to us that our ethical rules are much less programmable than we probably thought before. And so that's a really valuable insight, I think: these issues are very complicated, and in a lot of these cases you can't really make that call, not even as a legislator. And so what's going to happen in reality, I think, is that car manufacturers are just going to try and avoid the problem and avoid liability in any way possible, or they're going to always protect the driver, because who's going to buy a car if it's programmed to kill you instead of someone else? So that's what's going to happen in reality. But what did you mean by, like, once we have robots at the table? Like, do you mean when they can help us figure out what to do?
No, I mean when robots are part of the ethical decisions. So, no, no, not that they help us. Well...

Oh, you mean, like, should I run over a robot or a person?

Right, that kind of thing. No, so it's exactly what you said, which is, when you have to encode the ethics into an algorithm, you start to try to really understand what are the fundamentals of the decision-making process you use to make certain decisions. Should you, like capital punishment, should you take a person's life or not to punish them for a certain crime? Sort of, you can develop an algorithm to make that decision, right? And the hope is that the act of making that algorithm, however you make it, so there's a few approaches, will help us actually get to the core of what is right and what is wrong under our current societal standards.
But isn't that what's happening right now? And we're realizing that we don't have a consensus on what's right and wrong.

You mean in politics in general?

Well, like, when we're thinking about these trolley problems and autonomous vehicles, and how to program ethics into machines, and how to, you know, make AI algorithms fair and equitable, we're realizing that this is so complicated, and it's complicated in part because there doesn't seem to be one right answer in any of these cases.

Do you have a hope for... Like, one of the ideas of the Moral Machine is that crowdsourcing, like democracy, can help us converge towards the right answer. Do you have a hope for crowdsourcing?
Well, yes and no. I think that in general, you know, I have a legal background, and policymaking is often about trying to suss out, you know, what rules does this particular society agree on, and then trying to codify that. So the law makes these choices all the time, and then tries to adapt according to changing culture. But in the case of the Moral Machine project, I don't think that people's choices on that website necessarily reflect what laws they would want in place. I think you would have to ask them a series of different questions in order to get at what their consensus is.

I agree, but that has more to do with the artificial nature of, I mean, they're showing some cute icons on a screen. That's almost... So, for example, we do a lot of work in virtual reality, and if you put those same people into virtual reality where they have to make that decision, their decision would be very different, I think.

I agree with that. That's one aspect. And the other aspect is, it's a different question to ask someone, would you run over the homeless person or the doctor in this scene, versus, do you want cars to always run over the homeless people?
I think, yeah. So let's talk about anthropomorphism. To me, anthropomorphism, if I can pronounce it correctly, is one of the most fascinating phenomena from both an engineering perspective and a psychology perspective, a machine learning perspective, and robotics in general. Can you step back and define anthropomorphism, how you see it in general terms in your work?

Sure. So anthropomorphism is this tendency that we have to project human-like traits and behaviors and qualities onto nonhumans. And we often see it with animals, like we'll project emotions onto animals that may or may not actually be there. We're often trying to interpret things according to our own behavior when we get it wrong. But we do it with more than just animals. We do it with objects, you know, teddy bears. We see, you know, faces in the headlights of cars. And we do it with robots very, very extremely.
Do you think that can be engineered? Can it be used to enrich an interaction between an AI system and a human?

Oh, yeah, for sure.

And do you see it being used that way often? Like, I haven't seen, whether it's Alexa or any of the smart speaker systems, often trying to optimize for the anthropomorphization.

You said you haven't seen?

I haven't seen. They keep moving away from that. I think they're afraid of that.

They actually, so I only recently found out, but did you know that Amazon has, like, a whole team of people who are just there to work on Alexa's personality?

So, that depends on what you mean by personality. I didn't know that exact thing, but I do know that how the voice is perceived is worked on a lot, whether it's a pleasant feeling about the voice. But that has more to do with the texture of the sound and the audio and so on. Personality is more, like...
link |
It's like, what's her favorite beer when you ask her? And the personality team is different
link |
for every country too. Like there's a different personality for German Alexa than there is
link |
for American Alexa. That said, I think it's very difficult to, you know, really
link |
harness the anthropomorphism with these voice assistants because the voice interface
link |
is still very primitive. And I think that in order to get people to really suspend their
link |
disbelief and treat a robot like it's alive, less is sometimes more. You want them to project
link |
onto the robot and you want the robot to not disappoint their expectations for how it's
link |
going to answer or behave in order for them to have this kind of illusion. And with Alexa,
link |
I don't think we're there yet, or Siri, that they're just not good at that. But if you
link |
look at some of the more animal like robots, like the baby seal that they use with the
link |
dementia patients, it's a much more simple design. It doesn't try to talk to you. It
link |
can't disappoint you in that way. It just makes little movements and sounds and people
link |
stroke it and it responds to their touch. And that is like a very effective way to harness
link |
people's tendency to kind of treat the robot like a living thing.
link |
Yeah. So you bring up some interesting ideas in your paper chapter, I guess,
link |
Anthropomorphic Framing in Human-Robot Interaction, that I read the last time we scheduled this.
link |
Oh my God, that was a long time ago.
link |
Yeah. What are some good and bad cases of anthropomorphism in your perspective?
link |
Like, which are the good ones and which are bad?
link |
Well, I should start by saying that, you know, while design can really enhance the
link |
anthropomorphism, it doesn't take a lot to get people to treat a robot like it's alive. Like
link |
people will, over 85% of Roombas have a name, which, I don't know the numbers for your
link |
regular type of vacuum cleaner, but they're not that high, right? So people will feel bad for the
link |
Roomba when it gets stuck, they'll send it in for repair and want to get the same one back. And
link |
that's, that one is not even designed to like make you do that. So I think that some of the cases
link |
where it's maybe a little bit concerning that anthropomorphism is happening is when you have
link |
something that's supposed to function like a tool and people are using it in the wrong way.
link |
And one of the concerns is military robots where, so gosh, 2000, like early 2000s, which is a long
link |
time ago, iRobot, the Roomba company, made this robot called the PackBot that was deployed in Iraq
link |
and Afghanistan with the bomb disposal units that were there. And the soldiers became very emotionally
link |
attached to the robots. And that's fine until a soldier risks his life to save a robot, which
link |
you really don't want. But they were treating them like pets. Like they would name them,
link |
they would give them funerals with gun salutes, they would get really upset and traumatized when
link |
the robot got broken. So in situations where you want a robot to be a tool, in particular,
link |
when it's supposed to like do a dangerous job that you don't want a person doing,
link |
it can be hard when people get emotionally attached to it. That's maybe something that
link |
you would want to discourage. Another case for concern is maybe when companies try to
link |
leverage the emotional attachment to exploit people. So if it's something that's not in the
link |
consumer's interest, trying to like sell them products or services or exploit an emotional
link |
connection to keep them paying for a cloud service for a social robot or something like that might be,
link |
I think that's a little bit concerning as well.
link |
Yeah, the emotional manipulation, which probably happens behind the scenes now with some like
link |
social networks and so on, but making it more explicit. What's your favorite robot?
link |
Fictional or real?
link |
No, real. Real robot, which you have felt a connection with or not like, not anthropomorphic
link |
connection, but I mean like you sit back and say, damn, this is an impressive system.
link |
Wow. So two different robots. So the, the PLEO baby dinosaur robot that is no longer sold that
link |
came out in 2007, that one I was very impressed with. It was, but, but from an anthropomorphic
link |
perspective, I was impressed with how much I bonded with it, how much I like wanted to believe
link |
that it had this inner life.
link |
Can you describe PLEO, can you describe what it is? How big is it? What can it actually do?
link |
Yeah. PLEO is about the size of a small cat. It had a lot of like motors that gave it this kind
link |
of lifelike movement. It had things like touch sensors and an infrared camera. So it had all
link |
these like cool little technical features, even though it was a toy. And the thing that really
link |
struck me about it was that it, it could mimic pain and distress really well. So if you held
link |
it up by the tail, it had a tilt sensor that, you know, told it what direction it was facing
link |
and it would start to squirm and cry out. If you hit it too hard, it would start to cry.
link |
So it was very impressive in design.
link |
And what's the second robot that you were, you said there might've been two that you liked.
link |
Yeah. So the Boston Dynamics robots are just impressive feats of engineering.
link |
Have you met them in person?
link |
Yeah. I recently got a chance to go visit and I, you know, I was always one of those people who
link |
watched the videos and was like, this is super cool, but also it's a product video. Like,
link |
I don't know how many times that they had to shoot this to get it right.
link |
But visiting them, well, I was very impressed. Let's put it that way.
link |
Yeah. And in terms of the control, I think that was a transformational moment for me
link |
when I met Spot Mini in person.
link |
Because, okay, maybe this is a psychology experiment, but I anthropomorphized
link |
the crap out of it. So immediately, it was like my best friend, right?
link |
I think it's really hard for anyone to watch Spot move and not feel like it has agency.
link |
Yeah. This movement, especially the arm on Spot Mini really obviously looks like a head.
link |
They say, no, they didn't mean it that way, but it obviously looks exactly like that.
link |
And so it's almost impossible not to think of it as, almost like the baby dinosaur,
link |
but slightly larger. And this movement... Of course, on the intelligence side,
link |
their whole idea is that it's not supposed to be intelligent. It's a platform on which you build
link |
higher intelligence. It's actually really, really dumb. It's just a basic movement platform.
link |
Yeah. But even dumb robots can, like, we can immediately respond to them in this visceral way.
link |
What are your thoughts about Sophia the robot? This kind of mix of some basic natural language
link |
processing and basically an art experiment.
link |
Yeah. An art experiment is a good way to characterize it. I'm much less impressed
link |
with Sophia than I am with Boston Dynamics.
link |
She said she likes you. She said she admires you.
link |
Yeah. She followed me on Twitter at some point. Yeah.
link |
She tweets about how much she likes you.
link |
So what does that mean? I have to be nice or?
link |
No, I don't know. She was emotionally manipulating you. No. How do you think of
link |
that? I think of the whole thing that happened with Sophia is quite a large number of people
link |
kind of immediately had a connection and thought that maybe we're far more advanced with robotics
link |
than we are or actually didn't even think much. I was surprised how little people cared
link |
that they kind of assumed that, well, of course AI can do this.
link |
And then if they assume that, I felt they should be more impressed.
link |
Well, people really overestimate where we are. And so when something, I don't even think Sophia
link |
was very impressive or is very impressive. I think she's kind of a puppet, to be honest. But
link |
yeah, I think people are a little bit influenced by science fiction and pop culture to
link |
think that we should be further along than we are.
link |
So what's your favorite robots in movies and fiction?
link |
WALL-E. What do you like about WALL-E? The humor, the cuteness, the perception control systems
link |
operating on WALL-E that makes it all work? Just in general?
link |
The design of WALL-E the robot, I think that animators figured out, starting in the 1940s,
link |
how to create characters that don't look real, but look like something that's even better than real,
link |
that we really respond to and think is really cute. They figured out how to make them move
link |
and look in the right way. And WALL-E is just such a great example of that.
link |
You think eyes, big eyes, or big something that's kind of eye-ish? So it's always playing on some
link |
aspect of the human face, right?
link |
Often. Yeah. So big eyes. Well, I think one of the first animations to really play with this was
link |
Bambi. And they weren't originally going to do that. They were originally trying to make the
link |
deer look as lifelike as possible. They brought deer into the studio and had a little zoo there
link |
so that the animators could work with them. And then at some point they were like,
link |
if we make really big eyes and a small nose and big cheeks, kind of more like a baby face,
link |
then people like it even better than if it looks real. Do you think the future of things like
link |
Alexa in the home has the possibility to take advantage of that, to build on that, to create
link |
these systems that are better than real, that create a close human connection? I can pretty
link |
much guarantee you without having any knowledge that those companies are going to make these
link |
things. And companies are working on that design behind the scenes. I'm pretty sure.
link |
I totally disagree with you.
link |
So that's what I'm interested in. I'd like to build such a company. I know
link |
a lot of those folks and they're afraid of that because how do you make money off of it?
link |
Well, but even just making Alexa look a little bit more interesting than just a cylinder...
link |
It's an interesting thought, but I don't think people from Amazon's perspective are looking
link |
for that kind of connection. They want you to be addicted to the services provided by Alexa,
link |
not to the device. So with the device itself, it's felt that you can lose a lot, because if you create a
link |
connection, then it creates more opportunity for frustration, for negative stuff, than it does
link |
for positive stuff. That's, I think, the way they think about it.
link |
That's interesting. Like I agree that it's very difficult to get right and you have to get it
link |
exactly right. Otherwise you wind up with Microsoft's Clippy.
link |
Okay, easy now. What's your problem with Clippy?
link |
You like Clippy? Is Clippy your friend?
link |
Yeah, I like Clippy. I just talked to, we just had this argument with
link |
Microsoft's CTO, and he said he's not bringing Clippy back. They're not bringing
link |
Clippy back, and that's very disappointing. I think Clippy was the greatest assistant
link |
we've ever built. It was a horrible attempt, of course, but it's the best we've ever done
link |
because it was a real attempt to have like an actual personality. I mean, obviously the
link |
technology was just not there at the time to be a recommender system for assisting you
link |
in anything, in typing in Word or any other kind of application. But still, it was an attempt
link |
of personality that was legitimate, which I thought was brave.
link |
Yes, yes. Okay. You know, you've convinced me I'll be slightly less hard on Clippy.
link |
And I know I have like an army of people behind me who also miss Clippy.
link |
Really? I want to meet these people. Who are these people?
link |
It's the people who like to hate stuff when it's there and miss it when it's gone.
link |
It's everyone. Exactly. All right. So Anki and Jibo, the two companies,
link |
the two amazing companies, the social robotics companies that have recently been closed down.
link |
Why do you think it's so hard to create a personal robotics company? So making a business
link |
out of essentially something that people would anthropomorphize, have a deep connection with.
link |
Why is it so hard to make it work? Is the business case not there or what is it?
link |
I think it's a number of different things. I don't think it's going to be this way forever.
link |
I think at this current point in time, it takes so much work to build something that only barely
link |
meets people's minimal expectations because of science fiction and pop culture giving people
link |
this idea that we should be further than we already are. Like when people think about a robot
link |
assistant in the home, they think about Rosie from the Jetsons or something like that. And
link |
Anki and Jibo did such a beautiful job with the design and getting that interaction just right.
link |
But I think people just wanted more. They wanted more functionality. I think you're also right that
link |
the business case isn't really there because there hasn't been a killer application that's
link |
useful enough to get people to adopt the technology in great numbers. I think what we did see from the
link |
people who did get Jibo is a lot of them became very emotionally attached to it. But that's not,
link |
I mean, it's kind of like the Palm Pilot back in the day. Most people are like, why do I need this?
link |
Why would I? They don't see how they would benefit from it until they have it or some
link |
other company comes in and makes it a little better. Yeah. Like how far away are we, do you
link |
think? How hard is this problem? It's a good question. And I think it has a lot to do with
link |
people's expectations, and those keep shifting depending on what science fiction is popular.
link |
But also it's two things. It's people's expectation and people's need for an emotional
link |
connection. Yeah. And I believe the need is pretty high. Yes. But I don't think we're aware of it.
link |
That's right. I really think this is like the life as we know it. So we've just kind
link |
of gotten used to it. I hate to be dark, because I have close friends, but we've gotten
link |
used to really never being close to anyone. Right. And we're deeply, I believe, okay, this is
link |
hypothesis. I think we're deeply lonely, all of us, even those in deep fulfilling relationships.
link |
In fact, what makes those relationships fulfilling, I think, is that they at least tap into that deep
link |
loneliness a little bit. But I feel like there's more opportunity to explore that, that
link |
doesn't interfere with the human relationships you have. It expands more on
link |
that, yeah, the rich, deep, unexplored complexity that's all of us, weird apes. Okay.
link |
I think you're right. Do you think it's possible to fall in love with a robot?
link |
Oh yeah, totally. Do you think it's possible to have a long-term committed monogamous relationship
link |
with a robot? Well, yeah, there are lots of different types of long-term committed monogamous
link |
relationships. I think monogamous implies like, you're not going to see other humans sexually or
link |
like you basically on Facebook have to say, I'm in a relationship with this person, this robot.
link |
I just don't like, again, I think this is comparing robots to humans when I would rather
link |
compare them to pets. Like you get a robot, it fulfills this loneliness that you have
link |
in maybe not the same way as a pet, maybe in a different way that is even supplemental in a
link |
different way. But I'm not saying that people won't like do this, be like, oh, I want to marry
link |
my robot or I want to have like a sexual relation, monogamous relationship with my robot. But I don't
link |
think that that's the main use case for them. But you think that there's still a gap between
link |
human and pet. So between a husband and a pet, there's a different relationship.
link |
So is that a gap that can be closed through engineering? I think it could be closed someday, but why
link |
would we close that? Like, I think it's so boring to think about recreating things that we already
link |
have when we could create something that's different. I know you're thinking about the
link |
people who like don't have a husband and like, what could we give them? Yeah. But I guess what
link |
I'm getting at is maybe not. So like the movie Her. Yeah. Right. So a better husband. Well,
link |
maybe better in some ways. Like it's, I do think that robots are going to continue to be a different
link |
type of relationship, even if we get them like very human looking or when, you know, the voice
link |
interactions we have with them feel very like natural and human like, I think there's still
link |
going to be differences. And there were in that movie too, like towards the end, it kind of goes
link |
off the rails. But it's just a movie. So your intuition is that, because you kind of said
link |
two things, right? So one is why would you want to basically replicate the husband? Yeah. Right.
link |
And the other is kind of implying that it's kind of hard to do. So like anytime you try,
link |
you might build something very impressive, but it'll be different. I guess my question is about
link |
human nature. It's like, how hard is it to satisfy that role of the husband? So removing any of
link |
the sexual stuff aside, it's more like the mystery, the tension, the dance of relationships
link |
that you think, with robots, is difficult to build. What's your intuition? I think that, well, it also
link |
depends on are we talking about robots now in 50 years in like indefinite amount of time. I'm
link |
thinking like five or 10 years. Five or 10 years. I think that robots at best will be like, it's
link |
more similar to the relationship we have with our pets than relationship that we have with other
link |
people. I got it. So what do you think it takes to build a system that exhibits greater and greater
link |
levels of intelligence? Like, it impresses us with its intelligence. A Roomba, so you talk about
link |
anthropomorphization. For that, I think intelligence is not required. In fact, intelligence
link |
probably gets in the way sometimes, like you mentioned. But what do you think it takes to
link |
create a system where we sense that it has a human level intelligence? So something that,
link |
probably something conversational, human level intelligence. How hard do you think that problem
link |
is? It'd be interesting to sort of hear your perspective, not just purely, so I talk to a lot
link |
of people: how hard are conversational agents? How hard is it to pass the Turing test? But my
link |
sense is it's easier than just solving, it's easier than solving the pure natural language
link |
processing problem. Because I feel like you can cheat. Yeah. So how hard is it to pass the Turing
link |
test in your view? Well, I think again, it's all about expectation management. If you set up
link |
people's expectations to think that they're communicating with, what was it, a 13 year old
link |
boy from the Ukraine? Yeah, that's right. Then they're not going to expect perfect English,
link |
they're not going to expect perfect, you know, understanding of concepts or even like being on
link |
the same wavelength in terms of like conversation flow. So it's much easier to pass in that case.
link |
Do you think, you kind of alluded this too with audio, do you think it needs to have a body?
link |
I think that we definitely have, so we treat physical things with more social agency,
link |
because we're very physical creatures. I think a body can be useful.
link |
Does it get in the way? Is there a negative aspects like...
link |
Yeah, there can be. So if you're trying to create a body that's too similar to something that people
link |
are familiar with. Like, I have this robot cat at home, the one Hasbro makes.
link |
And it's very disturbing to watch because I'm constantly assuming that it's
link |
going to move like a real cat and it doesn't because it's like a $100 piece of technology.
link |
So it's very like disappointing and it's very hard to treat it like it's alive. So you can get a lot
link |
wrong with the body too, but you can also use tricks, same as, you know, the expectation
link |
management of the 13 year old boy from the Ukraine. If you pick an animal that people
link |
aren't intimately familiar with, like the baby dinosaur, like the baby seal that people have
link |
never actually held in their arms, you can get away with much more because they don't have these
link |
preformed expectations. Yeah, I remember from your TED talk or something, it clicked
link |
for me that nobody actually knows what a dinosaur looks like. So you can actually get away with a
link |
lot more. That was great. So what do you think about consciousness and mortality
link |
being displayed in a robot? So not actually having consciousness, but having these kind
link |
of human elements that are much more than just the interaction, much more than just,
link |
like you mentioned with a dinosaur moving kind of in an interesting ways, but really being worried
link |
about its own death and really acting as if it's aware and self-aware and has an identity. Have you seen
link |
that done in robotics? What do you think about doing that? Is that a powerful good thing?
link |
Well, I think it can be a design tool that you can use for different purposes. So I can't say
link |
whether it's inherently good or bad, but I do think it can be a powerful tool. The fact that the
link |
PLEO mimics distress when you, quote unquote, hurt it is a really powerful tool to get people to
link |
engage with it in a certain way. I had a research partner that I did some of the empathy work with
link |
named Palash Nandy, and he had built a robot for himself that had like a lifespan and that would
link |
stop working after a certain amount of time just because he was interested in whether he himself
link |
would treat it differently. And we know from Tamagotchis, those little games that we used to
link |
have that were extremely primitive, that people respond to this idea of mortality and you can get
link |
people to do a lot with little design tricks like that. Now, whether it's a good thing depends on
link |
what you're trying to get them to do. Have a deeper relationship, have a deeper connection,
link |
sign a relationship. If it's for their own benefit, that sounds great. Okay. You could do that for a
link |
lot of other reasons. I see. So what kind of stuff are you worried about? So is it mostly about
link |
manipulation of your emotions for like advertisement and so on, things like that? Yeah, or data
link |
collection or, I mean, you could think of governments misusing this to extract information
link |
from people. It's, you know, just like any other technological tool, it just raises a lot of
link |
questions. If you look at Facebook, if you look at Twitter and social networks, there's a lot
link |
of concern of data collection now. What's from the legal perspective or in general,
link |
how do we prevent the violation of, sort of, these companies crossing a line? It's a gray area,
link |
but crossing a line they shouldn't, in terms of manipulating, like we're talking about,
link |
manipulating our emotion, manipulating our behavior, using tactics that are not so savory.
link |
Yeah. It's really difficult because we are starting to create technology that relies on
link |
data collection to provide functionality. And there's not a lot of incentive,
link |
even on the consumer side, to curb that because the other problem is that the harms aren't
link |
tangible. They're not really apparent to a lot of people because they kind of trickle down on a
link |
societal level. And then suddenly we're living in like 1984, which, you know, sounds extreme,
link |
but that book was very prescient and I'm not worried about, you know, these systems. I have,
link |
you know, Amazon's Echo at home and tell Alexa all sorts of stuff. And it helps me because,
link |
you know, Alexa knows what brand of diaper we use. And so I can just easily order it again.
link |
So I don't have any incentive to ask a lawmaker to curb that. But when I think about that data
link |
then being used against low income people to target them for scammy loans or education programs,
link |
that's then a societal effect that I think is very severe and, you know,
link |
legislators should be thinking about.
link |
But yeah, the gray area is removing ourselves from consideration of, like,
link |
explicitly defining objectives, and more saying,
link |
well, we want to maximize engagement in our social network.
link |
And then just, because you're not actually doing a bad thing. It makes sense. You want people to
link |
keep a conversation going, to have more conversations, to keep coming back
link |
again and again, to have conversations. And whatever happens after that,
link |
you're kind of not exactly directly responsible. You're only indirectly responsible. So I think
link |
it's a really hard problem. Are you optimistic about us ever being able to solve it?
link |
You mean the problem of capitalism? It's like, because the problem is that the companies
link |
are acting in the company's interests and not in people's interests. And when those interests are
link |
aligned, that's great. But the completely free market doesn't seem to work because of this
link |
information asymmetry.
link |
But it's hard to know how to, so say you were trying to do the right thing. I guess what I'm
link |
trying to say is it's not obvious for these companies what the good thing for society is to
link |
do. Like, I don't think they sit there with, I don't know, with a glass of wine and a cat,
link |
like petting a cat, evil cat. And there's two decisions and one of them is good for society.
link |
One is good for the profit and they choose the profit. I think they actually, there's a lot of
link |
money to be made by doing the right thing for society. Because Google, Facebook have so much cash
link |
that they actually, especially Facebook, would significantly benefit from making decisions that
link |
are good for society. It's good for their brand. But I don't know if they know what's good for
link |
society. I don't think we know what's good for society in terms of how we manage the
link |
conversation on Twitter or how we design, we're talking about robots. Like, should we
link |
emotionally manipulate you into having a deep connection with Alexa or not?
link |
Yeah. Yeah. Do you have optimism that we'll be able to solve some of these questions?
link |
Well, I'm going to say something that's controversial, like in my circles,
link |
which is that I don't think that companies who are reaching out to ethicists and trying to create
link |
interdisciplinary ethics boards are totally just trying to whitewash
link |
the problem so that they look like they've done something. I think that a lot of companies
link |
actually do, like you say, care about what the right answer is. They don't know what that is,
link |
and they're trying to find people to help them find it. Not in every case, but I think
link |
it's much too easy to just vilify the companies as, like you say, sitting there with their cat
link |
going, heh, heh, heh, $1 million. That's not what happens. A lot of people are well meaning, even
link |
within companies. I think that what we do absolutely need is more interdisciplinarity,
link |
both within companies, but also within the policymaking space because we've hurtled into
link |
the world where technological progress is much faster, it seems much faster than it was, and
link |
things are getting very complex. And you need people who understand the technology, but also
link |
people who understand what the societal implications are, and people who are thinking
link |
about this in a more systematic way to be talking to each other. There's no other solution, I think.
link |
You've also done work on intellectual property, so if you look at the algorithms that these
link |
companies are using, like YouTube, Twitter, Facebook, and so on. I mean, those are mostly
link |
secretive, the recommender systems behind these algorithms. Do you think
link |
about IP and the transparency of algorithms like this? Like, what is the responsibility of
link |
these companies to open source the algorithms or at least reveal to the public how these
link |
algorithms work? So I personally don't work on that. There are a lot of people who do though,
link |
and there are a lot of people calling for transparency. In fact, Europe's even trying
link |
to legislate transparency, maybe they even have at this point, where like if an algorithmic system
link |
makes some sort of decision that affects someone's life, that you need to be able to see how that
link |
decision was made. It's a tricky balance because obviously companies need to have some sort of
link |
competitive advantage and you can't take all of that away or you stifle innovation. But yeah,
link |
for some of the ways that these systems are already being used, I think it is pretty important that
link |
people understand how they work. What are your thoughts in general on intellectual property in
link |
this weird age of software, AI, robotics? Oh, that it's broken. I mean, the system is just broken. So
link |
can you describe, I actually, I don't even know what intellectual property is in the space of
link |
software, what it means to, I mean, so I believe I have a patent on a piece of software from my PhD.
link |
You believe? You don't know? No, we went through a whole process. Yeah, I do. You get the spam
link |
emails like, we'll frame your patent for you. Yeah, it's much like a thesis. But that's useless,
link |
right? Or not? Where does IP stand in this age? What's the right way to do it? What's the right
link |
way to protect and own ideas when it's just code and this mishmash of something that feels much
link |
softer than a piece of machinery? Yeah. I mean, it's hard because there are different types of
link |
intellectual property and they're kind of these blunt instruments. It's like patent law is like
link |
a wrench. It works really well for an industry like the pharmaceutical industry. But when you
link |
try and apply it to something else, it's like, I don't know, I'll just hit this thing with a wrench
link |
and hope it works. So software, you have a couple of different options. Any code that's written down
link |
in some tangible form is automatically copyrighted. So you have that protection, but that doesn't do
link |
much because if someone takes the basic idea that the code is executing and just does it in a
link |
slightly different way, they can get around the copyright. So that's not a lot of protection.
link |
Then you can patent software, but that's kind of, I mean, getting a patent costs,
link |
I don't know if you remember what yours cost or like, was it through an institution?
link |
Yeah, it was through a university. It was insane. There were so many lawyers, so many meetings.
link |
It made me feel like it must've been hundreds of thousands of dollars. It must've been something
link |
crazy. Oh yeah. It's insane the cost of getting a patent. And so this idea of protecting the
link |
inventor in their own garage who came up with a great idea is kind of, that's the thing of the
link |
past. It's all just companies trying to protect things and it costs a lot of money. And then
link |
with code, it's oftentimes by the time the patent is issued, which can take like five years,
link |
probably your code is obsolete at that point. So it's a very, again, a very blunt instrument that
link |
doesn't work well for that industry. And so at this point we should really have something better,
link |
but we don't. Do you like open source? Yeah. Is open source good for society?
link |
You think all of us should open source code? Well, so at the Media Lab at MIT, we have an
link |
open source default because what we've noticed is that people will come in, they'll write some code
link |
and they'll be like, how do I protect this? And we're like, that's not your problem right now.
link |
Your problem isn't that someone's going to steal your project. Your problem is getting people to
link |
use it at all. There's so much stuff out there. We don't even know if you're going to get traction
link |
for your work. And so open sourcing can sometimes help, you know, get people's work out there,
link |
but ensure that they get attribution for it, for the work that they've done. So like,
link |
I'm a fan of it in a lot of contexts. Obviously it's not like a one size fits all solution.
link |
So what I gleaned from your Twitter is, you're a mom. I saw a quote, a reference to baby bot.
link |
What have you learned about robotics and AI from raising a human baby bot?
link |
Well, I think that my child has made it more apparent to me that the systems we're currently
link |
creating aren't like human intelligence. Like there's not a lot to compare there.
link |
It's just, he has learned and developed in such a different way than a lot of the AI systems
link |
we're creating that that's not really interesting to me to compare. But what is interesting to me
link |
is how these systems are going to shape the world that he grows up in. And so I'm like even more
link |
concerned about kind of the societal effects of developing systems that, you know, rely on
link |
massive amounts of data collection, for example. So is he going to be allowed to use like Facebook or
link |
Facebook? Facebook is over. Kids don't use that anymore. Snapchat. What do they use? Instagram?
link |
Snapchat's over too. I don't know. I just heard that TikTok is over, which I've never even seen.
link |
So I don't know. No. We're old. We don't know. I'm going to start gaming and streaming
my gameplay. So what do you see as the future of personal robotics, social robotics, interaction
link |
with other robots? Like what are you excited about if you were to sort of philosophize about what
link |
might happen in the next five, 10 years that would be cool to see? Oh, I really hope that we get kind
link |
of a home robot that makes it, that's a social robot and not just Alexa. Like it's, you know,
link |
I really love the Anki products. I thought Jibo was, had some really great aspects. So I'm hoping
link |
that a company cracks that. Me too. So Kate, it was wonderful talking to you today. Likewise.
link |
Thank you so much. It was fun. Thanks for listening to this conversation with Kate Darling.
link |
And thank you to our sponsors, ExpressVPN and Masterclass. Please consider supporting the
link |
podcast by signing up to Masterclass at masterclass.com slash Lex and getting ExpressVPN at
link |
expressvpn.com slash LexPod. If you enjoy this podcast, subscribe on YouTube, review it with
link |
five stars on Apple Podcast, support it on Patreon, or simply connect with me on Twitter
link |
at Lex Fridman. And now let me leave you with some tweets from Kate Darling. First tweet is
link |
the pandemic has fundamentally changed who I am. I now drink the leftover milk in the bottom of
link |
the cereal bowl. Second tweet is I came on here to complain that I had a really bad day and saw that
link |
a bunch of you are hurting too. Love to everyone. Thank you for listening. I hope to see you next