Kate Darling: Social Robotics | Lex Fridman Podcast #98
The following is a conversation with Kate Darling, a researcher at MIT interested in social robotics, robot ethics, and generally how technology intersects with society. She explores the emotional connection between human beings and lifelike machines, which for me is one of the most exciting topics in all of artificial intelligence. As she writes in her bio, she is a caretaker of several domestic robots, including her Pleo dinosaur robots named Yochai, Peter, and Mr. Spaghetti. She is one of the funniest and brightest minds I've ever had the fortune to talk to. This conversation was recorded recently, but before the outbreak of the pandemic. For everyone feeling the burden of this crisis, I'm sending love your way.
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N. As usual, I'll do a few minutes of ads now and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience.

Quick summary of the ads: two sponsors, Masterclass and ExpressVPN. Please consider supporting the podcast by signing up to Masterclass at masterclass.com/lex and getting ExpressVPN at expressvpn.com/lexpod.

This show is sponsored by Masterclass. Sign up at masterclass.com/lex to get a discount and to support this podcast.
When I first heard about Masterclass, I thought it was too good to be true. For $180 a year, you get an all-access pass to watch courses from, to list some of my favorites: Chris Hadfield on space exploration, Neil deGrasse Tyson on scientific thinking and communication, Will Wright, creator of SimCity and The Sims (love those games), on game design, Carlos Santana on guitar, Garry Kasparov on chess, Daniel Negreanu on poker, and many more. Chris Hadfield explaining how rockets work and the experience of being launched into space alone is worth the money. By the way, you can watch it on basically any device. Once again, sign up at masterclass.com/lex to get a discount and to support this podcast.
This show is sponsored by ExpressVPN. Get it at expressvpn.com/lexpod to get a discount and to support this podcast. I've been using ExpressVPN for many years. I love it. It's easy to use: press the big power-on button and your privacy is protected. And if you like, you can make it look like your location is anywhere else in the world. I might be in Boston now, but it can make it look like I'm in New York, London, Paris, or anywhere else. This has a large number of obvious benefits. Certainly, it allows you to access international versions of streaming websites like the Japanese Netflix or the UK Hulu. ExpressVPN works on any device you can imagine. I use it on Linux (shout out to Ubuntu 20.04), Windows, and Android, but it's available everywhere else too. Once again, get it at expressvpn.com/lexpod to get a discount and to support this podcast.

And now, here's my conversation with Kate Darling.
You co-taught robot ethics at Harvard. What are some ethical issues that arise in the world with robots?

Yeah, that was a reading group that I did at the very beginning, when I first became interested in this topic. So I think if I taught that class today, it would look very, very different. Robot ethics sounds very science-fictiony, and especially did back then. But I think that some of the issues that people in robot ethics are concerned with are just around the ethical use of robotic technology in general. So, for example, responsibility for harm, automated weapon systems, things like privacy and data security, things like automation and labor markets. And then personally, I'm really interested in some of the social issues that come out of our social relationships with robots.

One-on-one relationships with robots.

Yeah.

I think most of the stuff we have to talk about is the one-on-one social stuff. That's what I love. I think that's what you love as well and are an expert in.
But at a societal level, there's a presidential candidate now, Andrew Yang, running, concerned about automation and robots and AI in general taking away jobs. He has a proposal of UBI, universal basic income, where everybody gets a thousand bucks, as a way to sort of save you if you lose your job from automation, to allow you time to discover what it is that you would like to do, or even love to do.

Yes. So I lived in Switzerland for 20 years, and universal basic income has been more of a topic there, separate from the whole robots-and-jobs issue. So it's so interesting to me to see these Silicon Valley people latch on to this concept that came from a very left-wing, socialist, kind of a different place in Europe.
But on the automation and labor markets topic, I think that sometimes in those conversations people overestimate where robotic technology is right now. And we also have this fallacy of constantly comparing robots to humans and thinking of this as a one-to-one replacement of jobs. So even Bill Gates a few years ago said something about, you know, maybe we should have a system that taxes robots for taking people's jobs. I'm sure that was taken out of context, he's a really smart guy, but that sounds to me like viewing it as a one-to-one replacement, versus viewing this technology as a supplemental tool that of course is going to shake up a lot of stuff. It's going to change the job landscape, but I don't see robots taking all the jobs in the next 20 years. That's just not how it's going to work.

Right. So maybe drifting into the land of more personal relationships with robots and interaction and so on: I've got to warn you, I may ask some silly philosophical questions. I apologize.

Oh, please do.

Okay. Do you think humans will abuse robots in their interactions? We'll talk about anthropomorphization and your work on this intricate emotional dance between human and robot, but there seems to also be a darker side, where people, especially when they treat the other as a servant, can be a little bit abusive, or a lot abusive. Do you think about that? Do you worry about that?
Yeah, I do think about that. So one of my main interests is the fact that people subconsciously treat robots like living things, even though they know that they're interacting with a machine. And what it means in that context to behave violently, I don't know if you could say abuse, because you're not actually abusing the inner mind of the robot; the robot doesn't have any feelings.

As far as you know.

Well, yeah, it also depends on how we define feelings and consciousness, but I think that's another area where people kind of overestimate where we currently are with the technology. The robots are not even as smart as insects right now. And so I'm not worried about abuse in that sense, but it is interesting to think about what people's behavior towards these things means for our own behavior. Is it desensitizing to people to be verbally abusive to a robot, or even physically abusive? And we don't know.
Right. It's a similar connection to, if you play violent video games, what connection does that have to desensitization to violence? I haven't read the literature on that. I wonder about that, because everything I've heard, people don't seem to any longer be so worried about violent video games.

Correct. The research on it is, it's a difficult thing to research, so it's sort of inconclusive, but we seem to have gotten the sense, at least as a society, that people can compartmentalize. When it's something on a screen and you're shooting a bunch of characters or running over people with your car, that doesn't necessarily translate to you doing that in real life. We do, however, have some concerns about children playing violent video games, and so we do restrict it there. I'm not sure that's based on any real evidence either, but it's just the way that we've kind of decided we want to be a little more cautious there. And the reason I think robots are a little bit different is because there is a lot of research showing that we respond differently to something in our physical space than something on a screen. We will treat it much more viscerally, much more like a physical actor. And so it's totally possible that this is not a problem, and it's the same thing as violent video games: maybe restrict it with kids to be safe, but adults can do what they want. But we just need to ask the question again, because we don't have any evidence at all yet.
Maybe there's an intermediate place too. I did my research on Twitter. By research, I mean scrolling through your Twitter feed. You mentioned that you were going at some point to an animal law conference. So I have to ask: do you think there's something that we can learn from animal rights that guides our thinking about robots?
Oh, I think there is so much to learn from that. I'm actually writing a book on it right now; that's why I'm going to this conference. So I'm writing a book that looks at the history of animal domestication and how we've used animals for work, for weaponry, for companionship. And one of the things the book tries to do is move away from this fallacy that I talked about of comparing robots and humans, because I don't think that's the right analogy. But I do think that on a social level, there's so much that we can learn from looking at that history, because throughout history, we've treated most animals like tools, like products, and then some of them we've treated differently. And we're starting to see people treat robots in really similar ways. So I think it's a really helpful predictor of how we're going to interact with the robots.

Do you think we'll look back at this time, like 100 years from now, and see what we do to animals as similar to the way we view the Holocaust in World War II?
That's a great question. I mean, I hope so. I am not convinced that we will. But I often wonder, what are my grandkids going to view as abhorrent that my generation did, that they would never do? It's a fun question to ask yourself.

It always seems that there are atrocities that we discover later, things that at the time people didn't see as such. You look at everything from slavery, to all kinds of abuse throughout history, to the insane wars that were happening, to the way war was carried out, and rape, and the kind of violence that was happening during war, that we now see as atrocities, but at the time, perhaps, didn't as much. And so now I have this intuition, this worry, and maybe you're going to criticize me, but I do anthropomorphize robots. I don't see a fundamental philosophical difference between a robot and a human being once the capabilities are matched. The fact that we're really far away in capabilities, from natural language processing, understanding and generation, to just reasoning and all that stuff, doesn't change that. I think once you solve it, this becomes a very gray area, and I don't feel comfortable with the kind of abuse, subtle as it is, that people throw at robots. I can see basically a civil rights movement for robots in the future. Let me put it in the form of a question: do you think robots should have some kinds of rights?

Well, it's interesting, because I came at this originally from your perspective. I was like, you know what, there's no fundamental difference between technology and human consciousness. We can probably recreate anything; we just don't know how yet. And so there's no reason not to give machines the same rights that we have once, like you say, they're kind of on an equivalent level. But I realized that that is kind of a far-future question. I still think we should talk about it, because I think it's really interesting. But I realized that we might need to ask the robot rights question even sooner than that, while the machines are still, quote unquote, really dumb and not on our level, because of the way that we perceive them. And I think one of the lessons we learn from looking at the history of animal rights, and one of the reasons we may not get to a place in 100 years where we view it as wrong to eat or otherwise use animals for our own purposes, is that historically, we've always protected those things that we relate to the most. So one example is whales. No one gave a shit about the whales. Am I allowed to swear?

Yeah, you can swear as much as you want. Freedom.

Yeah, no one gave a shit about the whales until someone recorded them singing. And suddenly people were like, oh, this is a beautiful creature, and now we need to save the whales. And that started the whole Save the Whales movement in the 70s. So as much as I, and I think a lot of people, want to believe that we care about consistent biological criteria, that's not historically how we've formed our alliances.

Yeah. So why do we believe that all humans are created equal?
The killing of a human being, no matter who the human being is, that's what I meant by equality, is bad. And because I'm connecting that to robots, I'm wondering whether mortality, the killing act, is what grounds that fundamental first right. I am currently allowed to take a shotgun and shoot a Roomba. I'm not sure, but I'm pretty sure it's not considered murder, right? Or even shutting it off. So that's where the line appears to be, right? Is mortality a critical thing here?

I think here again the animal analogy is really useful, because you're also allowed to shoot your dog, but people won't be happy about it. So we do give animals certain protections, like you're not allowed to torture your dog and set it on fire, at least in most states and countries. But you're still allowed to treat it like a piece of property in a lot of other ways. And so we draw these arbitrary lines all the time. And there's a lot of philosophical thought on why viewing humans as something unique is just speciesism, and not based on any criteria that would actually justify making a difference between us and other species.
Do you think most people are good, in general? Or do you think there's evil and good in all of us that's revealed through our circumstances and through our interactions?

I like to view myself as a person who believes that there's no absolute evil and good, and that everything is gray. But I do think it's an interesting question. When I see people being violent towards robotic objects, you said that bothers you because the robots might someday be smart. Is that what it is?

Well, it bothers me because it reveals, so I personally believe, because I've studied it way too much, I'm Jewish, I studied the Holocaust and World War II exceptionally well, I personally believe that most of us have evil in us, and that the abuse of robots reveals the evil in human beings. And it doesn't just bother me. I think it's an opportunity for roboticists to help people find the better sides, the better angels of their nature. That abuse isn't just a fun side thing; it's you revealing a dark part that should be hidden deep inside.

Yeah, I mean, you laugh, but some of our research does indicate that maybe people's behavior towards robots reveals something about their tendencies for empathy generally, even using the very simple robots that we have today that clearly don't feel anything. So, you know, Westworld is maybe not so far off, depicting the bad characters as willing to go around and shoot and rape the robots, and the good characters as not wanting to do that, even without assuming that the robots have consciousness.
So it's interesting: there's an opportunity to almost practice empathy. Robots are an opportunity to practice empathy.

I agree with you. Some people would say, why are we practicing empathy on robots instead of on our fellow humans, or on animals that are actually alive and experience the world? And I don't agree with them, because I don't think empathy is a zero-sum game, and I do think that it's a muscle that you can train and that we should be doing that. But some people disagree.

So the interesting thing: you've heard of people raising kids, sort of asking them or telling them to be nice to the smart speakers, to Alexa and so on, saying please and so on during the request. I don't know, I'm a huge fan of that idea, because it's towards the idea of practicing empathy. I'm always polite to all the systems that we build, especially anything that's speech-interaction based. When we talk to the car, I always have a pretty good detector for "please." I feel like there should be room for encouraging empathy in those interactions.
Yeah. Okay, so I agree with you, but I'm going to play devil's advocate.

Sure. What is the devil's advocate argument there?

The devil's advocate argument is that if you are the type of person who has abusive tendencies, or needs an outlet for that kind of behavior, it's great to have a robot that you can scream at so that you're not screaming at a person. And we just don't know whether that's true, whether it's an outlet for people, or whether it just, as my friend once said, trains their cruelty muscles and makes them more cruel in other situations.
Oh boy, yeah. And that expands to other topics, which, I don't know. There's the topic of sex, which is a weird one that I tend to avoid from a robotics perspective, but the general public mostly doesn't. They talk about sex robots and so on. Is that an area you've touched at all, research-wise? That's what people imagine: any kind of interaction between human and robot that shows any kind of compassion, they immediately think of, from a product perspective in the near term, as sort of an expansion of what pornography is and all that kind of stuff.

Yeah.

Do researchers touch this?

That's kind of you to characterize it as though they're thinking rationally about product. I feel like sex robots are just such a titillating news hook for people that they become the story. And it's really hard not to get fatigued by it when you're in the space, because you tell someone you do human-robot interaction, and of course the first thing they want to talk about is sex robots.

Really?

Yeah, it happens a lot. And it's unfortunate that I'm so fatigued by it, because I do think that there are some interesting questions that become salient when you talk about sex with robots.

See, here's what I think would happen when people get sex robots. Let's talk guys, okay, guys get female sex robots. What I think there's an opportunity for is actual interaction. What I'm trying to say is that the interaction outside of the sex would be the most fulfilling part. It's like the folks who, there are movies on this, right, pay a prostitute and then end up just talking to her the whole time. So I feel like there's an opportunity there. Most guys, and people in general, joke about the sex act, but really many people are just lonely inside and looking for connection. And it'd be unfortunate if that connection is established through the sex industry. I feel like it should go in through the front door of: people are lonely and they want a connection.

Well, I also feel like we should destigmatize the sex industry, because even with prostitution, there are prostitutes who specialize in disabled people who don't have the same kinds of opportunities to explore their sexuality. So I feel like we should destigmatize all of that generally. But yeah, that connection and that loneliness is an interesting topic that you bring up, because while people are constantly worried about robots replacing humans, and oh, if people get sex robots and the sex is really good, they won't want their partner or whatever, we rarely talk about robots actually filling a hole where there's nothing, and what benefit that can provide to people.
Yeah, I think that's exciting. There's a giant hole that's unfillable by humans. It's asking too much of your friends, the people you're in a relationship with, and your family to fill that hole, because it means exploring the full complexity and richness of who you are. Who are you, really? Your family doesn't have enough patience to really sit there and listen to who you really are. And I feel like there's an opportunity to really make that connection with robots.

I just feel like we're complex as humans, and we're capable of lots of different types of relationships. So whether that's with family members, with friends, with our pets, or with robots, I feel like there's space for all of that, and all of that can provide value in a different way.

Yeah, absolutely. So I'm jumping around. Currently, most of my work is in autonomous vehicles, and the most popular topic among the general public is the trolley problem. Most roboticists kind of hate this question, but what do you think of this thought experiment? What do you think we can learn from it, outside of the silliness of the actual application of it to the autonomous vehicle?
I think it's still an interesting ethical question, and that in itself, just like much of the interaction with robots, has something to teach us. But from your perspective, do you think there's anything there?

Well, I think you're right that it does have something to teach us. But I think what people are forgetting in all of these conversations is the origins of the trolley problem and what it was meant to show us, which is that there is no right answer, and that sometimes our moral intuition that comes to us instinctively is not actually what we should follow if we care about creating systematic rules that apply to everyone. So I think that as a philosophical concept, it could teach us at least that. But that's not how people are using it right now. These are friends of mine, and I love them dearly, and their project adds a lot of value, but take the Moral Machine project as an example of how we're viewing what we can learn from the trolley problem. The Moral Machine is, I'm sure you're familiar, this website that you can go to, and it gives you different scenarios: oh, you're in a car, you can decide to run over these two people or this child. What do you choose? Do you choose the homeless person? Do you choose the person who's jaywalking? And so it pits these moral choices against each other and then tries to crowdsource the quote unquote correct answer, which is really interesting, and I think valuable data, but I don't think that's what we should base our rules for autonomous vehicles on, because it is exactly what the trolley problem is trying to show: your first instinct might not be the correct one if you look at rules that then have to apply to everyone and everything.
So how do we encode these ethical choices in interactions with robots? For example, with autonomous vehicles, there is a serious ethical question: do I protect myself? Does my life have a higher priority than the life of another human being? That changes certain control decisions that you make. If your life matters more than other human beings', then you'd be more likely to swerve out of your current lane. Currently, automated emergency braking systems just brake; they don't ever swerve. Swerving, into oncoming traffic or even just into a different lane, can cause significant harm to others, but it's possible that it causes less harm to you. So that's a difficult ethical question. The trolley problem is not supposed to have a right answer, right? But do you have a hope that when we have robots at the table, we'll be able to discover the right answer for some of these questions?

Well, what's happening right now, I think, is that this question we're facing, of what ethical rules we should be programming into the machines, is revealing to us that our ethical rules are much less programmable than we probably thought before. And so that's a really valuable insight, I think: these issues are very complicated, and in a lot of these cases you can't really make that call, not even as a legislator. And so what's going to happen in reality, I think, is that car manufacturers are just going to try to avoid the problem and avoid liability in any way possible, or they're going to always protect the driver, because who's going to buy a car if it's programmed to kill you instead of someone else? So that's what's going to happen in reality. But what did you mean by once we have robots at the table? Do you mean when they can help us figure out what to do?

No, I mean when robots are part of the ethical decisions. So no, not when they help us.

Oh, you mean when it's like, should I run over a robot or a person?
Right, that kind of thing. It's exactly what you said: when you have to encode the ethics into an algorithm, you start to try to really understand the fundamentals of the decision-making process that makes certain decisions. Take capital punishment: should you take a person's life or not, to punish them for a certain crime? You can develop an algorithm to make that decision, right? And the hope is that the act of making that algorithm, however you make it, and there are a few approaches, will help us actually get to the core of what is right and what is wrong under our current societal standards.

But isn't that what's happening right now? And we're realizing that we don't have a consensus on what's right and wrong.

You mean in politics in general?
Well, like when we're thinking about these trolley problems and autonomous vehicles, and how to program ethics into machines, and how to make AI algorithms fair and equitable, we're realizing that this is so complicated. And it's complicated in part because there doesn't seem to be one right answer in any of these cases.

Do you have a hope for, well, one of the ideas of the Moral Machine is that crowdsourcing, like democracy, can help us converge towards the right answer. Do you have a hope for crowdsourcing?

Well, yes and no. I have a legal background, and policymaking is often about trying to suss out what rules this particular society agrees on, and then trying to codify that. So the law makes these choices all the time and then tries to adapt according to changing culture. But in the case of the Moral Machine project, I don't think that people's choices on that website necessarily reflect what laws they would want in place. I think you would have to ask them a series of different questions in order to get at what their consensus is.

I agree, but that has to do more with the artificial nature of it. I mean, they're showing some cute icons on a screen. We do a lot of work in virtual reality, and if you put those same people into virtual reality where they have to make that decision, their decision would be very different, I think.

I agree with that. That's one aspect. And the other aspect is that it's a different question to ask someone, would you run over the homeless person or the doctor in this scene, versus, do you want cars to always run over the homeless people?

I think so, yeah. So let's talk about anthropomorphism.
To me, anthropomorphism, if I can pronounce it correctly, is one of the most fascinating phenomena, from an engineering perspective, a psychology perspective, a machine learning perspective, and robotics in general. Can you step back and define anthropomorphism, how you see it in general terms in your work?

Sure. So anthropomorphism is this tendency that we have to project human-like traits and behaviors and qualities onto non-humans. We often see it with animals: we'll project emotions onto animals that may or may not actually be there. We're often trying to interpret things according to our own behavior, and we get it wrong. But we do it with more than just animals. We do it with objects: teddy bears, the faces we see in the headlights of cars. And we do it with robots to a very extreme degree.

Do you think that can be engineered? Can that be used to enrich an interaction between an AI system and a human?

Oh yeah, for sure.

And do you see it being used that way often? I haven't seen, whether it's Alexa or any of the smart speaker systems, much attempt to optimize for anthropomorphization.

You said you haven't seen?

I haven't seen. They keep moving away from that. I think they're afraid of it.
They, they actually, so I only recently found out, but did you know that Amazon has like a whole
link |
team of people who are just there to work on Alexa's personality?
link |
So, I know that depends on what you mean by personality. I didn't know that
link |
exact thing. But I do know that how the voice is perceived is worked on a lot, whether
link |
it's a pleasant feeling about the voice. But that has to do more with the texture of the
link |
sound than with personality, which is more like, what's her favorite beer
link |
when you ask her. And the personality team is different for every country too. Like there's
link |
a different personality for German Alexa than there is for American Alexa. That said, I think
link |
it's very difficult to, you know, use the really, really harness the anthropomorphism
link |
with these voice assistants because the voice interface is still very primitive. And I think that
link |
in order to get people to really suspend their disbelief and treat a robot like it's alive,
link |
less is sometimes more. You, you want them to project onto the robot and you want the robot to
link |
not disappoint their expectations for how it's going to answer or behave in order for them to
link |
have this kind of illusion. And with Alexa, I don't think we're there yet or Siri that just,
link |
they're just not good at that. But if you look at some of the more animal like robots, like the baby
link |
seal that they use with the dementia patients, so much more simple design doesn't try to talk to you.
link |
It can't disappoint you in that way. It just makes little movements and sounds and
link |
people stroke it and it responds to their touch. And that is like a very effective way to harness
link |
people's tendency to kind of treat the robot like a living thing.
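A toy sketch of that kind of touch-driven response loop might look like this. This is purely illustrative; the class name, thresholds, and responses are my own assumptions, not the actual software of the baby seal robot:

```python
class TouchCompanion:
    """Toy model of a touch-responsive companion robot
    (hypothetical; not based on any real robot's firmware)."""

    def __init__(self):
        self.comfort = 0.0  # rises as the robot is gently stroked

    def sense_touch(self, pressure):
        """pressure in [0, 1]: gentle strokes soothe, rough handling upsets."""
        if 0.0 < pressure < 0.5:  # gentle stroke
            self.comfort = min(1.0, self.comfort + 0.2)
            return self._respond("purr", "turn head toward touch")
        if pressure >= 0.5:       # too rough
            self.comfort = max(0.0, self.comfort - 0.3)
            return self._respond("squeak", "flinch away")
        return self._respond("silence", "idle blink")  # no touch

    def _respond(self, sound, movement):
        return {"sound": sound, "movement": movement,
                "comfort": round(self.comfort, 2)}


robot = TouchCompanion()
print(robot.sense_touch(0.3))  # gentle stroke
```

The point of the design, as described in the conversation, is that a robot this simple never over-promises: it only moves and makes sounds in response to touch, so it can't break the illusion the way a talking assistant can.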
link |
Yeah. So you bring up some interesting ideas in your paper chapter, I guess,
link |
anthropomorphic framing human robot interaction that I read the last time we scheduled this.
link |
Oh my God, that was a long time ago.
link |
What are some good and bad cases of anthropomorphism in your perspective?
link |
Like when is it good? When is it bad? Well, I should start by saying that while design can
link |
really enhance the anthropomorphism, it doesn't take a lot to get people to treat a robot like
link |
it's alive. Over 85% of Roombas have a name, which I don't know the numbers for your regular
link |
type of vacuum cleaner, but they're not that high, right? So people will feel bad for the Roomba
link |
when it gets stuck. They'll send it in for repair and want to get the same one back. And that one
link |
is not even designed to make you do that. So I think that some of the cases where it's maybe
link |
a little bit concerning that anthropomorphism is happening is when you have something that's
link |
supposed to function like a tool and people are using it in the wrong way. And one of the concerns
link |
is military robots. Early 2000s, which is a long time ago, iRobot, the Roomba company,
link |
made this robot called the PackBot that was deployed in Iraq and Afghanistan with the
link |
bomb disposal units that were there. And the soldiers became very emotionally attached to
link |
the robots. And that's fine until a soldier risks his life to save a robot, which you
link |
really don't want. But they were treating them like pets, like they would name them,
link |
they would give them funerals with gun salutes, they would get really upset and traumatized
link |
when the robot got broken. So in situations where you want a robot to be a tool, in particular,
link |
when it's supposed to do a dangerous job that you don't want a person doing,
link |
it can be hard when people get emotionally attached to it. That's maybe something that
link |
you would want to discourage. Another case for concern is maybe when companies try to
link |
leverage the emotional attachment to exploit people. So if it's something that's not in the
link |
consumer's interest, trying to sell them products or services or exploit an emotional connection
link |
to keep them paying for a cloud service for a social robot or something like that,
link |
might be, I think that's a little bit concerning as well.
link |
Yeah, the emotional manipulation, which probably happens behind the scenes now
link |
with some social networks and so on, but making it more explicit. What's your favorite robot?
link |
Fictional or real?
link |
No, real. A real robot which you have felt a connection with, or not an anthropomorphic
link |
connection, but I mean, you sit back and say, damn, this is an impressive system.
link |
Wow, so two different robots. So the Pleo baby dinosaur robot that is no longer sold that came
link |
out in 2007, that one I was very impressed with. But from an anthropomorphic perspective,
link |
I was impressed with how much I bonded with it, how much I wanted to believe that it had this
link |
inner life. Can you describe Pleo? Can you describe what it is? How big is it? What can it actually
link |
do? Yeah, Pleo is about the size of a small cat. It had a lot of motors that gave it this kind of
link |
lifelike movement. It had things like touch sensors and an infrared camera. So it had all
link |
these cool little technical features, even though it was a toy. And the thing that really
link |
struck me about it was that it could mimic pain and distress really well. So if you held it up
link |
by the tail, it had a tilt sensor that told it what direction it was facing and it would start to
link |
squirm and cry out. If you hit it too hard, it would start to cry. So it was very impressive
link |
in design. And what's the second robot that you said there might have been two that you liked?
link |
Yeah, so the Boston Dynamics robots are just impressive feats of engineering.
link |
Have you met them in person? Yeah, I recently got a chance to go visit. And I was always one of
link |
those people who watched the videos and was like, this is super cool, but also it's a product video.
link |
Like, I don't know how many times that they had to shoot this to get it right. But visiting them,
link |
you know, I was very impressed. Let's put it that way.
link |
Yeah. And in terms of the control, I think that was a transformational moment for me
link |
when I met Spotmini in person. Because, okay, maybe this is a psychology experiment,
link |
but I anthropomorphized the crap out of it. So I immediately, it was like my best friend.
link |
Right? I think it's really hard for anyone to watch Spotmove and not feel like it has agency.
link |
Yeah, this movement, especially the arm on Spotmini, really obviously looks like a head.
link |
Yeah. And they say, no, we didn't mean it that way. But obviously, it looks exactly like that.
link |
And so it's almost impossible to not think of it as almost like the baby dinosaur, but slightly
link |
larger. And this movement, and of course the intelligence, their whole idea is that
link |
it's not supposed to be intelligent. It's a platform on which you build
link |
higher intelligence. It's actually really, really dumb. It's just a basic movement platform.
link |
Yeah. But even dumb robots can, like we can immediately respond to them in this visceral way.
link |
What are your thoughts about Sophia, the robot, this kind of mix of some basic natural language
link |
processing and basically an art experiment? Yeah. An art experiment is a good way to characterize it.
link |
I'm much less impressed with Sophia than I am with Boston Dynamics.
link |
She said she likes you. She said she admires you.
link |
Yeah, she followed me on Twitter at some point. Yeah.
link |
Yeah. And she tweets about how much she likes you. So.
link |
So what does that mean? I have to be nice or?
link |
No, I don't know. See, I was emotionally manipulating you.
link |
And no, what do you think of the whole thing that happened with Sophia? Quite a large
link |
number of people kind of immediately had a connection and thought that maybe we're far
link |
more advanced with robotics than we are or actually didn't even think much. I was surprised
link |
how little people cared that they kind of assumed that, well, of course, AI can do this.
link |
Yeah. And then if they assumed that, I felt they should be more impressed.
link |
Well, you know what I mean? People really overestimate where we are. And so when something,
link |
I don't even think Sophia was very impressive or is very impressive. I think she's kind of a puppet,
link |
to be honest. But yeah, I think people are a little bit influenced by science fiction and
link |
pop culture to think that we should be further along than we are.
link |
So what are your favorite robots in movies and fiction?
link |
WALL-E. WALL-E. What do you like about WALL-E? The humor, the cuteness,
link |
the perception control systems operating on WALL-E that make it all work out.
link |
Just in general. The design of WALL-E the robot, I think that animators figured out,
link |
you know, starting in like the 1940s how to create characters that don't look real but look
link |
like something that's even better than real that we really respond to and think is really cute.
link |
They figured out how to make them move and look in the right way.
link |
And WALL-E is just such a great example of that.
link |
You think eyes, big eyes or big something that's kind of eyish. So it's always playing on some
link |
aspect of the human face, right? Often, yeah. So big eyes. Well, I think one of the
link |
one of the first like animations to really play with this was Bambi. And they weren't originally
link |
going to do that. They were originally trying to make the deer look as lifelike as possible.
link |
Like they brought deer into the studio and had a little zoo there so that the animators could
link |
work with them. And then at some point they're like, hmm, if we make really big eyes and like a
link |
small nose and like big cheeks, kind of more like a baby face, then people like it even
link |
better than if it looks real. Do you think the future of things like Alexa in the home
link |
is possibly to take advantage of that, to build on that, to create these systems that are better
link |
than real that create a close human connection? I can pretty much guarantee you without having any
link |
knowledge that those companies are working on that, on that design behind the scenes.
link |
Like, I'm pretty sure. I totally disagree with you. Really? So that's what I'm interested in.
link |
I'd like to build such a company. I know a lot of those folks and they're afraid of that
link |
because you don't, well, how do you make money off of it? Well, but even just like
link |
making Alexa look a little bit more interesting than just like a cylinder would do so much.
link |
It's an interesting thought, but I don't think people from Amazon perspective are looking for
link |
that kind of connection. They want you to be addicted to the services provided by Alexa,
link |
not to the device. With the device itself, it's felt that you can lose a lot, because if you create a
link |
connection, it creates more opportunity for frustration, for negative stuff, than it does
link |
for positive stuff. That's, I think, the way they think about it. That's interesting. Like,
link |
I agree that it's very difficult to get right and you have to get it exactly right. Otherwise,
link |
you wind up with Microsoft's Clippy. Okay, easy now. What's your problem with Clippy?
link |
You like Clippy? Is Clippy your friend? Yeah, I just talked to,
link |
we just had this argument with the Microsoft CTO, and he said he's not bringing
link |
Clippy back. They're not bringing Clippy back, and that's very disappointing. I think
link |
Clippy was the greatest assistant we've ever built. It was a horrible attempt, of course,
link |
but it's the best we've ever done because it was a real attempt to have an actual personality.
link |
I mean, it was obviously technology was way not there at the time of being able to be a
link |
recommender system for assisting you in anything and typing in Word or any kind of other application,
link |
but it still was a legitimate attempt at personality. That's true. And I thought it was brave.
link |
Yes. Okay. You know, you've convinced me I'll be slightly less hard on Clippy.
link |
And I know I have like an army of people behind me who also miss Clippy, so.
link |
Really? I want to meet these people. Who are these people?
link |
It's the people who like to hate stuff when it's there and miss it when it's gone.
link |
So everyone. Exactly. All right. So Anki and Jibo, the two companies,
link |
two amazing companies, social robotics companies that have recently been closed down.
link |
Why do you think it's so hard to create a personal robotics company? So making a business
link |
out of essentially something that people would anthropomorphize, have a deep connection with,
link |
why is it so hard to make it work? Is the business case not there or what is it?
link |
I think it's a number of different things. I don't think it's going to be this way forever.
link |
I think at this current point in time, it takes so much work to build something that only barely
link |
meets people's minimal expectations because of science fiction and pop culture giving people
link |
this idea that we should be further than we already are. When people think about a robot
link |
assistant in the home, they think about Rosie from the Jetsons or something like that. And
link |
Anki and Jibo did such a beautiful job with the design and getting that interaction just right.
link |
But I think people just wanted more. They wanted more functionality. I think you're also right
link |
that the business case isn't really there because there hasn't been a killer application
link |
that's useful enough to get people to adopt the technology in great numbers. I think what we did
link |
see from the people who did get Jibo is a lot of them became very emotionally attached to it.
link |
But that's not... I mean, it's kind of like the Palm Pilot back in the day. Most people are like,
link |
why do I need this? Why would I? They don't see how they would benefit from it until
link |
they have it or some other company comes in and makes it a little better.
link |
Yeah. How far away are we? Do you think? How hard is this problem?
link |
It's a good question. And I think it has a lot to do with people's expectations.
link |
And those keep shifting depending on what science fiction is popular.
link |
But also, it's two things. It's people's expectation and people's need for an emotional
link |
connection. And I believe the need is pretty high. Yes. But I don't think we're aware of it.
link |
That's right. I really think this is like the life as we know it. So we've just kind of gotten used
link |
to it. I hate to be dark, because I do have close friends, but we've gotten used to never
link |
really being close to anyone. And we're deeply, I believe, okay, this is a hypothesis,
link |
I think we're deeply lonely, all of us, even those in deep fulfilling relationships.
link |
In fact, what makes those relationships fulfilling, I think, is that they at least
link |
tap into that deep loneliness a little bit. But I feel like there's more opportunity
link |
to explore that, that doesn't interfere with the human relationships you have.
link |
It expands more on the, yeah, the rich, deep, unexplored complexity that's all of us,
link |
weird apes. Okay. I think you're right. Do you think it's possible to fall in love with a robot?
link |
Oh, yeah, totally. Do you think it's possible to have a long term committed
link |
monogamous relationship with a robot? Well, yeah, there are lots of different types of
link |
long term committed monogamous relationships. I think monogamous implies, like,
link |
you're not going to see other humans sexually or like you basically on Facebook have to say,
link |
I'm in a relationship with this person, this robot. I just don't, like, again, I think this
link |
is comparing robots to humans, when I would rather compare them to pets. Like, you get a robot,
link |
but it fulfills, you know, this loneliness that you have in a, maybe not the same way as a pet,
link |
maybe in a different way that is even, you know, supplemental in a different way. But,
link |
you know, I'm not saying that people won't like do this, be like, Oh, I want to marry my robot,
link |
or I want to have like a, you know, sexual relation monogamous relationship with my robot.
link |
But I don't think that that's the main use case for them.
link |
But you think that there's still a gap between human and pet.
link |
So between husband and pet, there's a different relationship. It's an engineering problem,
link |
so that's a gap that can be closed through engineering. I think it could be closed someday. But why would
link |
we close that? Like, I think it's so boring to think about recreating things that we already
link |
have when we could, when we could create something that's different. I know you're thinking about
link |
the people who like don't have a husband and like, what could we give them?
link |
Yeah, but, but let's, I guess what I'm getting at is maybe not. So like the movie, Her.
link |
Yeah. Right. So a better husband.
link |
Well, maybe better in some ways. Like it's, I do think that robots are going to continue to be
link |
a different type of relationship, even if we get them like very human looking, or when, you know,
link |
the voice interactions we have with them feel very like natural and human like, I think
link |
there's still going to be differences. And there were in that movie too, like towards the end,
link |
it kind of goes off the rails. But it's just a movie. So your intuition is that that,
link |
because, because you kind of said two things, right? So one is, why would you want
link |
to basically replicate the husband? Yeah. Right. And the other is kind of implying that
link |
it's kind of hard to do. So like anytime you try, you might build something very impressive,
link |
but it'll be different. I guess my question is about human nature. It's like,
link |
how hard is it to satisfy that role of the husband? So removing any of the sexual stuff
link |
aside is the, it's more like the mystery, the tension, the dance of relationships.
link |
Do you think with robots that's difficult to build? What's your intuition about it?
link |
I think that, well, it also depends on whether we're talking about robots now, in 50 years,
link |
or in like an indefinite amount of time. I'm thinking like five or 10 years.
link |
Five or 10 years. I think that robots at best will be like,
link |
something more similar to the relationship we have with our pets than the relationship that we have with
link |
other people. I got it. So what do you think it takes to build a system that exhibits greater
link |
and greater levels of intelligence? Like it impresses us with this intelligence. You know,
link |
a Roomba. So you talked about anthropomorphization; for that, I think intelligence is not
link |
required. In fact, intelligence probably gets in the way sometimes, like you mentioned.
link |
But what do you think it takes to create a system where we sense that it has a human level
link |
intelligence? So something that, probably something conversational, human level intelligence.
link |
How hard do you think that problem is? It'd be interesting to hear your perspective, not just
link |
purely, I talked to a lot of people, how hard is the conversational agent problem? How hard is it
link |
to pass a Turing test? But my sense is it's easier than solving the
link |
pure natural language processing problem, because I feel like you can cheat.
link |
Yeah. So how hard is it to pass a Turing test in your view?
link |
Well, I think, again, it's all about expectation management. If you set up people's expectations
link |
to think that they're communicating with, what was it, a 13 year old boy from the Ukraine?
link |
Yeah, that's right. Then they're not going to expect perfect English. They're not going to
link |
expect perfect understanding of concepts or even like being on the same wavelength in terms of
link |
like conversation flow. So it's much easier to pass in that case.
link |
Do you think, you kind of alluded this to with audio, do you think it needs to have a body?
link |
I think that we definitely have, so we treat physical things with more social agency,
link |
because we're very physical creatures. I think a body can be useful.
link |
Does it get in the way? Is there a negative aspects like?
link |
Yeah, there can be. So if you're trying to create a body that's too similar to something that people
link |
are familiar with, like I have this robot cat at home that Hasbro makes. And it's very disturbing
link |
to watch because I'm constantly assuming that it's going to move like a real cat and it doesn't,
link |
because it's like a $100 piece of technology. So it's very disappointing and it's very hard to
link |
treat it like it's alive. So you can get a lot wrong with the body too, but you can also use
link |
tricks same as the expectation management of the 13 year old boy from the Ukraine. If you
link |
pick an animal that people aren't intimately familiar with, like the baby dinosaur, like the
link |
baby seal that people have never actually held in their arms, you can get away with much more
link |
because they don't have these preformed expectations. Yeah, I remember a point from your TED Talk
link |
or something that clicked for me: nobody actually knows what a dinosaur looks like.
link |
So you can actually get away with a lot more. That was great. So what do you think about
link |
consciousness and mortality being displayed in a robot? So not actually having consciousness,
link |
but having these kind of human elements that are much more than just the interaction, much more
link |
than just, like you mentioned, with a dinosaur moving kind of interesting ways, but really
link |
being worried about its own death and really acting as if it's aware and self aware and identity.
link |
Have you seen that done in robotics? What do you think about doing that? Is that a powerful good
link |
thing? Well, I think it can be a design tool that you can use for different purposes. So I
link |
can't say whether it's inherently good or bad, but I do think it can be a powerful tool. The fact
link |
that the Pleo mimics distress when you, quote unquote, hurt it is a really powerful tool to
link |
get people to engage with it in a certain way. I had a research partner that I did some of the
link |
empathy work with, named Palash Nandy, and he had built a robot for himself that had a lifespan
link |
and that would stop working after a certain amount of time just because he was interested in whether
link |
he himself would treat it differently. And we know from Tamagotchis, those little games that
link |
we used to have that were extremely primitive, that people respond to this idea of mortality
link |
and you can get people to do a lot with little design tricks like that. Now, whether it's a
link |
good thing depends on what you're trying to get them to do. Have a deeper relationship. Have a
link |
deeper connection, have a relationship. If it's for their own benefit, that sounds great. Okay.
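The lifespan trick described above, a robot that simply stops working after a set amount of time, is easy to sketch. This is an illustrative toy under my own assumptions, not the actual research robot; injecting the clock is just a way to make the "death" observable without waiting:

```python
import time


class MortalRobot:
    """Toy robot with a built-in lifespan, after which it goes inert
    (hypothetical sketch; not the actual research robot described)."""

    def __init__(self, lifespan_seconds, clock=time.monotonic):
        self._clock = clock
        self._born = clock()
        self._lifespan = lifespan_seconds

    def alive(self):
        # The robot "lives" only while its elapsed age is under the lifespan.
        return self._clock() - self._born < self._lifespan

    def act(self):
        if not self.alive():
            return "inert"   # the robot has "died" and stops responding
        return "wiggle"


# With an artificial clock we can fast-forward time:
t = [0.0]
robot = MortalRobot(lifespan_seconds=10.0, clock=lambda: t[0])
print(robot.act())   # "wiggle"
t[0] = 11.0
print(robot.act())   # "inert"
```

As the conversation notes, even a trick this primitive, familiar from Tamagotchis, can change how people treat the device, which is exactly why the design question is about what you use it for.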
link |
You can do that for a lot of other reasons. I see. So what kind of stuff are you worried about?
link |
So is it mostly about manipulation of your emotions for like advertisements and so on,
link |
things like that? Yeah, or data collection or, I mean, you could think of governments misusing
link |
this to extract information from people. It's, you know, just like any other technological tool,
link |
just raises a lot of questions. What's, if you look at Facebook, if you look at Twitter and
link |
social networks, there's a lot of concern about data collection now. From a legal perspective or
link |
in general, how do we prevent these companies from crossing a line? It's
link |
a gray area, but crossing a line they shouldn't, in terms of manipulating, like we're talking about
link |
manipulating our emotion, manipulating our behavior using tactics that are not so savory.
link |
Yeah, it's really difficult because we are starting to create technology that relies on data
link |
collection to provide functionality. And there's not a lot of incentive, even on the consumer side
link |
to curb that because the other problem is that the harms aren't tangible. They're not really
link |
apparent to a lot of people because they kind of trickle down on a societal level and then
link |
suddenly we're living in 1984, which sounds extreme, but that book was very prescient. And
link |
I'm not worried about these systems. I have Amazon's Echo at home and tell Alexa all sorts of stuff
link |
and it helps me because Alexa knows what brand of diaper we use and so I can just easily order it
link |
again. So I don't have any incentive to ask a lawmaker to curb that. But when I think about
link |
that data then being used against low income people to target them for scammy loans or education
link |
programs, that's then a societal effect that I think is very severe and legislators should be
link |
thinking about. But yeah, the gray area is the removing ourselves from consideration of explicitly
link |
defining objectives and more saying, well, we want to maximize engagement in our social network.
link |
Yeah. And then just because you're not actually doing a bad thing, it makes sense. You want
link |
people to keep a conversation going, to have more conversations, to keep coming back again and again
link |
to have conversations. And whatever happens after that, you're kind of not exactly directly responsible.
link |
You're only indirectly responsible. So I think it's a really hard problem. Are you
link |
optimistic about us ever being able to solve it? You mean the problem of capitalism? Because the
link |
problem is that the companies are acting in the company's interests and not in people's interest
link |
and when those interests are aligned, that's great. But the completely free market doesn't seem to work
link |
because of this information asymmetry. But it's hard to know how to... So say you were trying to do
link |
the right thing. I guess what I'm trying to say is it's not obvious for these companies what the
link |
good thing for society is to do. I don't think they sit there with a glass of wine and a cat,
link |
like petting a cat, evil cat. And there's two decisions and one of them is good for society,
link |
one is good for the profit and they choose the profit. I think actually there's a lot of money
link |
to be made by doing the right thing for society. Because Google, Facebook have so much cash that
link |
they actually, especially Facebook, would significantly benefit from making decisions
link |
that are good for society. It's good for their brand. But I don't know if they know what's good
link |
for society. I don't think we know what's good for society in terms of how we manage the
link |
conversation on Twitter or how we design... We're talking about robots. Should we emotionally
link |
manipulate you into having a deep connection with Alexa or not? Yeah. Do you have optimism
link |
that we'll be able to solve some of these questions? Well, I'm going to say something
link |
that's controversial in my circles, which is that I don't think that companies who are reaching out
link |
to ethicists and trying to create interdisciplinary ethics boards are
link |
totally just trying to whitewash the problem so that they look like they've done something.
link |
I think that a lot of companies actually do, like you say, care about what the right answer is.
link |
They don't know what that is, and they're trying to find people to help them find it.
link |
Not in every case, but I think it's much too easy to just vilify the companies
link |
as, like you said, sitting there with their cat going, one million dollars. That's not what happens.
link |
A lot of people are well meaning even within companies. I think that what we do absolutely need
link |
is more interdisciplinarity both within companies, but also within the policymaking space because
link |
we've hurtled into the world where technological progress is much faster. It seems much faster
link |
than it was and things are getting very complex. You need people who understand the technology,
link |
but also people who understand what the societal implications are and people who are thinking
link |
about this in a more systematic way to be talking to each other. There's no other solution, I think.
link |
We've also done work on intellectual property. If you look at the algorithms that these companies
link |
are using, like YouTube, Twitter, Facebook, so on, those are mostly secretive.
link |
The recommender systems behind these algorithms. Do you think about IP and the transparency
link |
about algorithms like this? Is the responsibility of these companies to open source the algorithms
link |
or at least reveal to the public how these algorithms work?
link |
I personally don't work on that. There are a lot of people who do though, and there are a lot of
link |
people calling for transparency. In fact, Europe's even trying to legislate transparency. Maybe they
link |
even have at this point where if an algorithmic system makes some sort of decision that affects
link |
someone's life, that you need to be able to see how that decision was made, it's a tricky balance
link |
because, obviously, companies need to have some sort of competitive advantage and you can't take
link |
all of that away or you stifle innovation. For some of the ways that these systems are already
link |
being used, I think it is pretty important that people understand how they work.
link |
What are your thoughts in general on intellectual property in this weird age of software, AI,
link |
robotics? That it's broken. I mean, the system is just broken.
link |
Can you describe? Actually, I don't even know what intellectual property is in the space of
link |
software. I believe I have a patent on a piece of software from my PhD.
link |
You believe? You don't know? No, we went through a whole process. Yeah, I do.
link |
You get the spam emails like, we'll frame your patent for you.
link |
Yeah, it's much like a thesis. That's useless, right? Or not? Where does IP stand in this age?
link |
What's the right way to do it? What's the right way to protect and own ideas when it's just code
link |
and this mishmash of something that feels much softer than a piece of machinery or any idea?
link |
I mean, it's hard because there are different types of intellectual property and they're
link |
kind of these blunt instruments. It's like patent law is like a wrench. It works really well for an
link |
industry like the pharmaceutical industry, but when you try and apply it to something else,
link |
it's like, I don't know, I'll just hit this thing with a wrench and hope it works.
link |
So with software, you have a couple of different options.
link |
Software, like any code that's written down in some tangible form, is automatically copyrighted.
link |
So you have that protection, but that doesn't do much because if someone takes the basic idea that
link |
the code is executing and just does it in a slightly different way, they can get around
link |
the copyright. So that's not a lot of protection. Then you can patent software, but that's kind of,
link |
I mean, getting a patent costs, I don't know if you remember what yours cost or was it through
link |
an institution? Yeah, it was through a university. It was insane. There were so many lawyers, so many
link |
meetings. It made me feel like it must have been hundreds of thousands of dollars. It must have
link |
been something crazy. It's insane, the cost of getting a patent. And so this idea of protecting
link |
the inventor who came up with a great idea in their own garage, that's a thing of the past.
link |
It's all just companies trying to protect things and it costs a lot of money. And then with code,
link |
it's oftentimes, by the time the patent is issued, which can take like five years,
link |
probably your code is obsolete at that point. So it's a very, again, a very blunt instrument
link |
that doesn't work well for that industry. And so at this point, we should really
link |
have something better, but we don't. Do you like open source? Yeah. Is open source good for
link |
society. You think all of us should open source code? Well, so at the Media Lab at MIT,
link |
we have an open source default because what we've noticed is that people will come in, they'll write
link |
some code and they'll be like, how do I protect this? And we're like, that's not your problem
link |
right now. Your problem isn't that someone's going to steal your project. Your problem is
link |
getting people to use it at all. There's so much stuff out there. We don't even know if
link |
you're going to get traction for your work. And so open sourcing can sometimes help get
link |
people's work out there, but ensure that they get attribution for the work that they've done.
link |
So I'm a fan of it in a lot of contexts. Obviously, it's not like a one size fits all solution.
link |
So what I gleaned from your Twitter is you're a mom. I saw a quote, a reference to Babybot.
link |
What have you learned about robotics and AI from raising a human Babybot?
link |
Well, I think that my child has just made it more apparent to me that the systems we're currently
link |
creating aren't like human intelligence. There's not a lot to compare there. He has learned and
link |
developed in such a different way than a lot of the AI systems we're creating that that's not really
link |
interesting to me to compare. But what is interesting to me is how these systems are going to shape
link |
the world that he grows up in. And so I'm even more concerned about the societal effects of
link |
developing systems that rely on massive amounts of data collection, for example.
link |
So is he going to be allowed to use like Facebook? Facebook is over. Kids don't use that anymore.
link |
Snapchat? What do they use, Instagram? I don't know. I just heard that TikTok is over,
link |
which I've never even seen. So I don't know. We're old. We don't know.
link |
I'm going to start gaming and streaming my gameplay. So what do you see as the future of
link |
personal robotics, social robotics, interaction with our robots? Like, what are you excited about
link |
if you were to sort of philosophize about what might happen the next five, 10 years?
link |
That would be cool to see. Oh, I really hope that we get kind of a home robot that makes it.
link |
That's a social robot and not just Alexa. You know, I really love the Anki products.
link |
I thought Jibo had some really great aspects. So I'm hoping that a company cracks that.
link |
Me too. So, Kate, it was wonderful talking to you today. Likewise. Thank you so much. It was fun.
link |
Thanks for listening to this conversation with Kate Darling. And thank you to our sponsors,
link |
ExpressVPN and Masterclass. Please consider supporting the podcast by signing up to Masterclass
link |
at masterclass.com slash lex and getting ExpressVPN at expressvpn.com slash lex pod.
link |
If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcasts,
link |
support on Patreon, or simply connect with me on Twitter at Lex Fridman.
link |
And now let me leave you with some tweets from Kate Darling. First tweet is the pandemic has
link |
fundamentally changed who I am. I now drink the leftover milk in the bottom of the cereal bowl.
link |
Second tweet is I came on here to complain that I had a really bad day and saw that a
link |
bunch of you are hurting too. Love to everyone. Thank you for listening and hope to see you next time.