Ayanna Howard: Human-Robot Interaction & Ethics of Safety-Critical Systems | Lex Fridman Podcast #66


link |
00:00:00.000
The following is a conversation with Ayanna Howard.
link |
00:00:03.400
She's a roboticist, a professor at Georgia Tech,
link |
00:00:06.200
and Director of the Human Automation Systems Lab.
link |
00:00:09.840
With research interests in human robot interaction,
link |
00:00:12.800
assistive robots in the home, therapy gaming apps,
link |
00:00:16.000
and remote robotic exploration of extreme environments.
link |
00:00:20.280
Like me, in her work, she cares a lot
link |
00:00:23.440
about both robots and human beings.
link |
00:00:26.360
And so I really enjoyed this conversation.
link |
00:00:29.560
This is the Artificial Intelligence Podcast.
link |
00:00:32.600
If you enjoy it, subscribe on YouTube,
link |
00:00:34.960
give it five stars on Apple Podcasts,
link |
00:00:36.960
follow on Spotify, support it on Patreon,
link |
00:00:39.600
or simply connect with me on Twitter.
link |
00:00:41.720
Lex Fridman, spelled F R I D M A N.
link |
00:00:45.640
I recently started doing ads at the end of the introduction.
link |
00:00:48.680
I'll do one or two minutes after introducing the episode
link |
00:00:51.640
and never any ads in the middle
link |
00:00:53.200
that can break the flow of the conversation.
link |
00:00:55.480
I hope that works for you
link |
00:00:56.880
and doesn't hurt the listening experience.
link |
00:01:00.160
This show is presented by Cash App,
link |
00:01:02.280
the number one finance app in the App Store.
link |
00:01:04.760
I personally use Cash App to send money to friends,
link |
00:01:07.520
but you can also use it to buy, sell,
link |
00:01:09.320
and deposit Bitcoin in just seconds.
link |
00:01:11.720
Cash App also has a new investing feature.
link |
00:01:14.600
You can buy fractions of a stock, say $1 worth,
link |
00:01:17.520
no matter what the stock price is.
link |
00:01:19.640
Broker services are provided by Cash App Investing,
link |
00:01:22.560
a subsidiary of Square and member SIPC.
link |
00:01:25.840
I'm excited to be working with Cash App
link |
00:01:28.160
to support one of my favorite organizations called FIRST,
link |
00:01:31.560
best known for their FIRST Robotics and LEGO competitions.
link |
00:01:35.080
They educate and inspire hundreds of thousands of students
link |
00:01:38.360
in over 110 countries
link |
00:01:40.200
and have a perfect rating on Charity Navigator,
link |
00:01:42.840
which means the donated money
link |
00:01:44.120
is used to maximum effectiveness.
link |
00:01:46.880
When you get Cash App from the App Store, Google Play,
link |
00:01:49.600
and use code LEXPODCAST, you'll get $10,
link |
00:01:53.520
and Cash App will also donate $10 to FIRST,
link |
00:01:56.440
which again, is an organization
link |
00:01:58.280
that I've personally seen inspire girls and boys
link |
00:02:01.040
to dream of engineering a better world.
link |
00:02:04.280
And now, here's my conversation with Ayanna Howard.
link |
00:02:09.440
What or who is the most amazing robot you've ever met,
link |
00:02:13.640
or perhaps had the biggest impact on your career?
link |
00:02:16.720
I haven't met her, but I grew up with her,
link |
00:02:21.080
but of course, Rosie.
link |
00:02:22.720
So, and I think it's because also...
link |
00:02:25.200
Who's Rosie?
link |
00:02:26.080
Rosie from the Jetsons.
link |
00:02:27.760
She is all things to all people, right?
link |
00:02:30.920
Think about it, like anything you wanted,
link |
00:02:32.840
it was like magic, it happened.
link |
00:02:35.040
So people not only anthropomorphize,
link |
00:02:37.840
but project whatever they wish for the robot to be onto.
link |
00:02:41.880
Onto Rosie.
link |
00:02:42.880
But also, I mean, think about it,
link |
00:02:44.560
she was socially engaging.
link |
00:02:46.760
She, every so often had an attitude, right?
link |
00:02:50.000
She kept us honest.
link |
00:02:51.920
She would push back sometimes
link |
00:02:53.720
when George was doing some weird stuff.
link |
00:02:57.000
But she cared about people, especially the kids.
link |
00:03:01.200
She was like the perfect robot.
link |
00:03:03.960
And you've said that people don't want
link |
00:03:06.480
the robots to be perfect.
link |
00:03:09.760
Can you elaborate that?
link |
00:03:11.120
What do you think that is?
link |
00:03:12.000
Just like you said,
link |
00:03:13.480
Rosie pushed back a little bit every once in a while.
link |
00:03:15.720
Yeah, so I think it's that.
link |
00:03:18.240
So if you think about robotics in general,
link |
00:03:19.840
we want them because they enhance our quality of life.
link |
00:03:23.880
And usually that's linked to something that's functional,
link |
00:03:26.360
right?
link |
00:03:27.200
Even if you think of self driving cars,
link |
00:03:28.600
why is there a fascination?
link |
00:03:29.960
Because people really do hate to drive.
link |
00:03:31.480
Like there's the, like Saturday driving
link |
00:03:34.120
where I can just speed,
link |
00:03:35.240
but then there was the,
link |
00:03:36.080
I have to go to work every day
link |
00:03:37.480
and I'm in traffic for an hour.
link |
00:03:38.960
I mean, people really hate that.
link |
00:03:40.360
And so robots are designed to basically enhance
link |
00:03:45.360
our ability to increase our quality of life.
link |
00:03:49.280
And so the perfection comes from this aspect of interaction.
link |
00:03:54.840
If I think about how we drive,
link |
00:03:57.680
if we drove perfectly, we would never get anywhere, right?
link |
00:04:01.560
So think about how many times you had to run past the light
link |
00:04:06.560
because you see the car behind you
link |
00:04:08.360
is about to crash into you.
link |
00:04:09.800
Or that little kid kind of runs into the street
link |
00:04:14.800
and so you have to cross on the other side
link |
00:04:16.720
because there's no cars, right?
link |
00:04:17.920
Like if you think about it,
link |
00:04:19.040
we are not perfect drivers.
link |
00:04:20.680
Some of it is because it's our world.
link |
00:04:23.080
And so if you have a robot that is perfect
link |
00:04:26.240
in that sense of the word,
link |
00:04:28.240
they wouldn't really be able to function with us.
link |
00:04:30.720
Can you linger a little bit on the word perfection?
link |
00:04:34.000
So from the robotics perspective,
link |
00:04:36.880
what does that word mean?
link |
00:04:38.920
And how is sort of the optimal behaviors
link |
00:04:42.480
you're describing different than what we think is perfection?
link |
00:04:46.160
Yeah, so perfection, if you think about it
link |
00:04:49.000
in the more theoretical point of view,
link |
00:04:51.480
it's really tied to accuracy, right?
link |
00:04:53.560
So if I have a function,
link |
00:04:55.120
can I complete it at 100% accuracy with zero errors?
link |
00:05:00.200
And so that's kind of if you think about perfection
link |
00:05:03.680
in the sense of the word.
link |
00:05:04.760
And in a self driving car realm,
link |
00:05:07.080
do you think from a robotics perspective,
link |
00:05:10.000
we kind of think that perfection means
link |
00:05:13.440
following the rules perfectly,
link |
00:05:15.120
sort of defining, staying in the lane, changing lanes.
link |
00:05:19.120
When there's a green light, you go,
link |
00:05:20.400
when there's a red light, you stop.
link |
00:05:21.840
And that's the,
link |
00:05:24.320
and be able to perfectly see all the entities in the scene.
link |
00:05:28.680
That's the limit of what we think of as perfection.
link |
00:05:31.520
And I think that's where the problem comes,
link |
00:05:33.280
is that when people think about perfection for robotics,
link |
00:05:37.880
the ones that are the most successful
link |
00:05:40.360
are the ones that are, quote unquote, perfect.
link |
00:05:42.800
Like I said, Rosie is perfect,
link |
00:05:44.200
but she actually wasn't perfect in terms of accuracy,
link |
00:05:46.880
but she was perfect in terms of how she interacted
link |
00:05:49.880
and how she adapted.
link |
00:05:51.160
And I think that's some of the disconnect,
link |
00:05:52.800
is that we really want perfection
link |
00:05:56.000
with respect to its ability to adapt to us.
link |
00:05:59.520
We don't really want perfection with respect to 100% accuracy
link |
00:06:03.040
with respect to the rules that we just made up anyway, right?
link |
00:06:06.320
And so I think there's this disconnect sometimes
link |
00:06:09.160
between what we really want and what happens.
link |
00:06:12.920
And we see this all the time, like in my research, right?
link |
00:06:15.560
Like the optimal, quote unquote, optimal interactions
link |
00:06:20.000
are when the robot is adapting based on the person,
link |
00:06:23.960
not 100% following what's optimal based on the rules.
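A minimal sketch of the tradeoff being described, assuming the command is something numeric like a speed setpoint; the function name, the weighting, and the example numbers are all hypothetical, not anything from a real lab or vehicle.

def choose_action(rule_optimal, person_preferred, adaptivity=0.5):
    # adaptivity = 0.0 -> follow the rule book exactly ("100% accurate")
    # adaptivity = 1.0 -> mirror the person's preference entirely
    return (1.0 - adaptivity) * rule_optimal + adaptivity * person_preferred

# Example: the rule says 25 mph through this stretch, surrounding traffic flows at 32 mph.
blended_speed = choose_action(25.0, 32.0, adaptivity=0.4)
print(blended_speed)  # ~27.8, neither purely rule-optimal nor purely human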
link |
00:06:29.200
Just to linger on autonomous vehicles for a second.
link |
00:06:32.240
Just your thoughts, maybe off the top of your head,
link |
00:06:35.200
is how hard is that problem,
link |
00:06:37.480
do you think based on what we just talked about?
link |
00:06:39.840
You know, there's a lot of folks
link |
00:06:41.600
in the automotive industry, they're very confident
link |
00:06:43.840
from Elon Musk to Waymo to all these companies.
link |
00:06:47.600
How hard is it to solve that last piece?
link |
00:06:50.440
The last mile.
link |
00:06:51.280
The gap between the perfection
link |
00:06:54.600
and the human definition
link |
00:06:57.480
of how you actually function in this world.
link |
00:06:59.440
Yeah, so this is a moving target.
link |
00:07:00.560
So I remember when all the big companies
link |
00:07:04.440
started to heavily invest in this.
link |
00:07:06.800
And there was a number of even roboticists
link |
00:07:09.880
as well as folks who were putting in money, the VCs
link |
00:07:13.200
and corporations, Elon Musk being one of them,
link |
00:07:15.360
that said, self driving cars on the road with people
link |
00:07:19.480
within five years, that was a little while ago.
link |
00:07:24.560
And now people are saying five years, 10 years, 20 years,
link |
00:07:29.800
some are saying never, right?
link |
00:07:31.520
I think if you look at some of the things
link |
00:07:33.760
that are being successful is these
link |
00:07:39.480
basically fixed environments
link |
00:07:41.200
where you still have some anomalies, right?
link |
00:07:44.000
You still have people walking, you still have stores,
link |
00:07:46.520
but you don't have other drivers, right?
link |
00:07:50.120
Like other human drivers, or it's a dedicated space
link |
00:07:53.560
for the cars.
link |
00:07:55.640
Because if you think about robotics in general,
link |
00:07:57.200
where has it always been successful?
link |
00:07:59.040
I mean, you can say manufacturing,
link |
00:08:00.600
like way back in the day, right?
link |
00:08:02.320
It was a fixed environment, humans were not part
link |
00:08:04.320
of the equation, we're a lot better than that.
link |
00:08:07.160
But like when we can carve out scenarios
link |
00:08:10.920
that are closer to that space,
link |
00:08:13.760
then I think that's where we are.
link |
00:08:16.640
So a closed campus where you don't have human-driven cars
link |
00:08:20.520
and maybe some protection so that the students
link |
00:08:23.760
don't jet in front just because they wanna see what happens.
link |
00:08:27.200
Like having a little bit, I think that's where
link |
00:08:29.920
we're gonna see the most success in the near future.
link |
00:08:32.240
And be slow moving.
link |
00:08:33.640
Right, not 55, 60, 70 miles an hour,
link |
00:08:37.840
but the speed of a golf cart, right?
link |
00:08:42.040
So that said, the most successful
link |
00:08:45.160
in the automotive industry robots operating today
link |
00:08:47.840
in the hands of real people are ones
link |
00:08:50.720
that are traveling over 55 miles an hour.
link |
00:08:53.880
And in an unconstrained environment,
link |
00:08:55.520
which is Tesla vehicles, so the Tesla autopilot.
link |
00:08:58.840
So I would love to hear sort of your,
link |
00:09:01.680
just thoughts of two things.
link |
00:09:04.240
So one, I don't know if you've gotten to see it,
link |
00:09:07.000
or heard about something called Smart Summon.
link |
00:09:10.240
It's the Tesla Autopilot system
link |
00:09:13.480
where the car drives zero occupancy, no driver
link |
00:09:17.120
in the parking lot slowly sort of tries to navigate
link |
00:09:19.960
the parking lot to find its way to you.
link |
00:09:22.680
And there's some incredible amounts of videos
link |
00:09:25.840
and just hilarity that happens.
link |
00:09:27.640
as it awkwardly tries to navigate this environment.
link |
00:09:30.880
But it's a beautiful nonverbal communication
link |
00:09:33.520
between machine and human that I think is a from,
link |
00:09:37.360
it's like, it's some of the work that you do
link |
00:09:39.320
in this kind of interesting human robot interaction space.
link |
00:09:42.040
So what are your thoughts in general about it?
link |
00:09:43.760
So I do have that feature.
link |
00:09:46.960
Do you drive a Tesla?
link |
00:09:47.800
I do, mainly because I'm a gadget freak, right?
link |
00:09:52.120
So I say it's a gadget that happens to have some wheels.
link |
00:09:55.640
And yeah, I've seen some of the videos.
link |
00:09:58.200
But what's your experience like?
link |
00:09:59.400
I mean, you're a human robot interaction roboticist,
link |
00:10:02.680
you're a legit sort of expert in the field.
link |
00:10:05.560
So what does it feel for a machine to come to you?
link |
00:10:08.040
It's one of these very fascinating things,
link |
00:10:11.880
but also I am hyper, hyper alert, right?
link |
00:10:16.080
Like I'm hyper alert, like my, but my thumb is like,
link |
00:10:20.520
oh, okay, I'm ready to take over.
link |
00:10:23.200
Even when I'm in my car or I'm doing things
link |
00:10:26.160
like automated backing into,
link |
00:10:28.720
so there's like a feature where you can do this
link |
00:10:30.560
automated backing into a parking space
link |
00:10:33.120
or bring the car out of your garage
link |
00:10:35.640
or even, you know, pseudo autopilot on the freeway, right?
link |
00:10:40.240
I am hyper sensitive.
link |
00:10:42.200
I can feel like as I'm navigating, I'm like,
link |
00:10:44.840
yeah, that's an error right there.
link |
00:10:46.880
Like I am very aware of it,
link |
00:10:50.120
but I'm also fascinated by it and it does get better.
link |
00:10:54.280
Like I look and see it's learning
link |
00:10:57.400
from all of these people who are cutting it on.
link |
00:11:00.360
Like every time I come on, it's getting better, right?
link |
00:11:04.120
And so I think that's what's amazing about it is that.
link |
00:11:07.120
This nice dance of you're still hyper vigilant.
link |
00:11:10.320
So you're still not trusting it at all.
link |
00:11:12.720
Yeah.
link |
00:11:13.560
And yet you're using it on the highway if I were to,
link |
00:11:16.400
like what, as a roboticist,
link |
00:11:18.600
we'll talk about trust a little bit.
link |
00:11:22.640
How do you explain that?
link |
00:11:23.640
You still use it.
link |
00:11:25.040
Is it the gadget freak part?
link |
00:11:26.480
Like where you just enjoy exploring technology
link |
00:11:30.720
or is that the right actually balance
link |
00:11:33.680
between robotics and humans is where you use it,
link |
00:11:36.880
but don't trust it.
link |
00:11:38.320
And somehow there's this dance
link |
00:11:40.080
that ultimately is a positive.
link |
00:11:42.080
Yeah. So I think I'm,
link |
00:11:44.600
I just don't necessarily trust technology,
link |
00:11:48.080
but I'm an early adopter, right?
link |
00:11:50.120
So when it first comes out, I will use everything,
link |
00:11:54.280
but I will be very, very cautious of how I use it.
link |
00:11:57.440
Do you read about it or do you explore it, but just try it?
link |
00:12:01.040
Do you like crudely, to put it crudely,
link |
00:12:05.000
do you read the manual or do you learn through exploration?
link |
00:12:07.960
I'm an explorer.
link |
00:12:08.800
If I have to read the manual, then, since I do design,
link |
00:12:12.320
it's a bad user interface, it's a failure.
link |
00:12:16.480
Elon Musk is very confident that you can kind of take it
link |
00:12:19.560
from where it is now to full autonomy.
link |
00:12:21.800
So from this human robot interaction
link |
00:12:24.520
where you don't really trust and then you try
link |
00:12:26.720
and then you catch it when it fails to,
link |
00:12:29.200
it's going to incrementally improve itself
link |
00:12:32.320
into full, full autonomy where you don't need to participate.
link |
00:12:36.560
What's your sense of that trajectory?
link |
00:12:39.880
Is it feasible?
link |
00:12:41.080
So the promise there is by the end of next year,
link |
00:12:44.600
by the end of 2020 is the current promise.
link |
00:12:47.240
What's your sense about that journey that Tesla's on?
link |
00:12:53.600
So there's kind of three things going on though.
link |
00:12:56.600
I think in terms of will people go,
link |
00:13:01.960
like as a user, as an adopter,
link |
00:13:04.800
will you trust going to that point?
link |
00:13:08.440
I think so, right?
link |
00:13:10.080
Like there are some users and it's because what happens
link |
00:13:12.480
is when you're hypersensitive at the beginning
link |
00:13:16.680
and then the technology tends to work,
link |
00:13:19.280
your apprehension slowly goes away.
link |
00:13:23.800
And as people, we tend to swing to the other extreme, right?
link |
00:13:28.240
Because like, oh, I was like hyper, hyper fearful
link |
00:13:30.880
or hypersensitive and it was awesome.
link |
00:13:33.920
And we just tend to swing.
link |
00:13:35.560
That's just human nature.
link |
00:13:37.320
And so you will have, I mean,
link |
00:13:38.840
That's a scary notion because most people
link |
00:13:41.480
are now extremely untrusting of autopilot.
link |
00:13:44.960
They use it, but they don't trust it.
link |
00:13:46.440
And it's a scary notion that there's a certain point
link |
00:13:48.840
where you allow yourself to look at the smartphone
link |
00:13:51.320
for like 20 seconds.
link |
00:13:53.040
And then there'll be this phase shift
link |
00:13:55.360
where it'll be like 20 seconds, 30 seconds,
link |
00:13:57.520
one minute, two minutes.
link |
00:13:59.920
It's a scary proposition.
link |
00:14:01.960
But that's people, right?
link |
00:14:03.440
That's just humans.
link |
00:14:05.520
I mean, I think of even our use of,
link |
00:14:09.920
I mean, just everything on the internet, right?
link |
00:14:12.320
Like think about how reliant we are on certain apps
link |
00:14:16.800
and certain engines, right?
link |
00:14:20.160
20 years ago, people would have been like,
link |
00:14:21.640
oh yeah, that's stupid.
link |
00:14:22.600
Like that makes no sense.
link |
00:14:23.880
Like of course that's false.
link |
00:14:25.800
Like now it's just like, oh, of course I've been using it.
link |
00:14:29.000
It's been correct all this time.
link |
00:14:30.640
Of course, aliens, I didn't think they existed,
link |
00:14:34.280
but now it says they do, obviously.
link |
00:14:37.440
100% Earth is flat.
link |
00:14:39.400
So, okay, but you said three things.
link |
00:14:43.800
So one is the human.
link |
00:14:44.640
Okay, so one is the human.
link |
00:14:45.800
And I think there will be a group of individuals
link |
00:14:47.840
that will swing, right?
link |
00:14:49.560
I just...
link |
00:14:50.400
Teenagers.
link |
00:14:51.240
Teenage, I mean, it'll be adults.
link |
00:14:54.400
There's actually an age demographic
link |
00:14:56.400
that's optimal for technology adoption.
link |
00:15:00.760
And you can actually find them.
link |
00:15:02.280
And they're actually pretty easy to find.
link |
00:15:03.880
Just based on their habits, based on...
link |
00:15:07.000
So someone like me who wasn't a roboticist
link |
00:15:10.400
would probably be the optimal kind of person, right?
link |
00:15:13.520
Early adopter, okay with technology,
link |
00:15:15.600
very comfortable and not hypersensitive, right?
link |
00:15:20.000
I'm just hypersensitive because I designed this stuff.
link |
00:15:23.520
So there is a target demographic that will swing.
link |
00:15:25.880
The other one though is you still have these humans
link |
00:15:29.800
that are on the road.
link |
00:15:31.320
That one is a harder thing to do.
link |
00:15:35.560
And as long as we have people that are on the same streets,
link |
00:15:40.320
that's gonna be the big issue.
link |
00:15:42.480
And it's just because you can't possibly,
link |
00:15:45.240
you can't possibly map some of the silliness
link |
00:15:49.480
of human drivers, right?
link |
00:15:51.400
Like as an example, when you're next to that car
link |
00:15:56.240
that has that big sticker called student driver, right?
link |
00:15:59.760
Like you are like, oh, either I am going to like go around.
link |
00:16:04.600
Like we are, we know that that person
link |
00:16:06.760
is just gonna make mistakes that make no sense, right?
link |
00:16:09.280
How do you map that information?
link |
00:16:11.880
Or if I am in a car and I look over
link |
00:16:14.320
and I see two fairly young looking individuals
link |
00:16:19.240
and there's no student driver bumper
link |
00:16:21.160
and I see them chatting to each other,
link |
00:16:22.880
I'm like, oh, that's an issue, right?
link |
00:16:26.160
So how do you get that kind of information
link |
00:16:28.520
and that experience into basically an autopilot?
link |
00:16:35.280
Yeah, and there's millions of cases like that
link |
00:16:37.280
where we take little hints to establish context.
link |
00:16:41.240
I mean, you said kind of beautifully poetic human things,
link |
00:16:44.400
but there's probably subtle things about the environment,
link |
00:16:47.160
about it being maybe time for commuters
link |
00:16:52.920
to start going home from work.
link |
00:16:55.320
And therefore you can make some kind of judgment
link |
00:16:57.160
about the group behavior of pedestrians,
link |
00:16:59.400
blah, blah, blah, so on and so on.
link |
00:17:01.200
Or even cities, right?
link |
00:17:02.680
Like if you're in Boston, how people cross the street,
link |
00:17:07.120
like lights are not an issue versus other places
link |
00:17:10.680
where people will actually wait for the crosswalk.
link |
00:17:15.600
Seattle or somewhere peaceful.
link |
00:17:18.680
And but what I've also seen, so just even in Boston
link |
00:17:22.560
that intersection to intersection is different.
link |
00:17:25.520
So every intersection has a personality of its own.
link |
00:17:28.920
So certain neighborhoods of Boston are different.
link |
00:17:30.840
So we kind of end based on different timing of day
link |
00:17:35.200
at night, it's all, there's a dynamic to human behavior
link |
00:17:40.320
that we kind of figure out ourselves.
link |
00:17:42.440
We're not able to introspect and figure it out,
link |
00:17:46.080
but somehow our brain learns it.
link |
00:17:49.320
We do.
link |
00:17:50.360
And so you're saying, is there a shortcut?
link |
00:17:54.800
Is there a shortcut though for a robot?
link |
00:17:56.400
Is there something that could be done?
link |
00:17:58.080
You think that, you know, that's what we humans do.
link |
00:18:02.640
It's just like bird flight, right?
link |
00:18:04.640
This example they give for flight.
link |
00:18:06.480
Do you necessarily need to build a bird that flies
link |
00:18:09.280
or can you do an airplane?
link |
00:18:11.560
So is there a shortcut to it?
link |
00:18:13.040
So I think that the shortcut is,
link |
00:18:15.360
and I kind of, I talk about it as a fixed space.
link |
00:18:19.360
Where, so imagine that there's a neighborhood
link |
00:18:23.280
that's a new smart city or a new neighborhood that says,
link |
00:18:26.760
you know what, we are going to design this new city
link |
00:18:31.440
based on supporting self driving cars.
link |
00:18:33.800
And then doing things, knowing that there's anomalies,
link |
00:18:37.640
knowing that people are like this, right?
link |
00:18:39.600
And designing it based on that assumption
link |
00:18:42.080
that like we're gonna have this,
link |
00:18:44.000
that would be an example of a shortcut.
link |
00:18:45.520
So you still have people, but you do very specific things
link |
00:18:49.240
to try to minimize the noise a little bit.
link |
00:18:51.840
As an example.
link |
00:18:53.840
And the people themselves become accepting
link |
00:18:55.520
of the notion that there's autonomous cars, right?
link |
00:18:57.760
Right, like they move into,
link |
00:18:59.720
so right now you have like a,
link |
00:19:01.480
you will have a self selection bias, right?
link |
00:19:03.600
Like individuals will move into this neighborhood
link |
00:19:06.240
knowing like this is part of like the real estate pitch, right?
link |
00:19:10.640
And so I think that's a way to do a shortcut.
link |
00:19:14.160
When it allows you to deploy,
link |
00:19:17.600
it allows you to then collect data with these variances
link |
00:19:21.960
and anomalies, cause people are still people,
link |
00:19:24.040
but it's a safer space and is more of an accepting space.
link |
00:19:28.840
I.e., when something in that space might happen
link |
00:19:31.960
because things do, because you already have
link |
00:19:35.080
the self selection, like people would be,
link |
00:19:37.200
I think a little more forgiving than other places.
link |
00:19:40.760
And you said three things, did we cover all of them?
link |
00:19:43.120
The third is legal law, liability,
link |
00:19:46.360
which I don't really want to touch,
link |
00:19:47.840
but it's still of concern.
link |
00:19:50.920
And the mishmash with like, with policy as well,
link |
00:19:53.280
sort of government, all that, that whole.
link |
00:19:55.760
That big ball of stuff.
link |
00:19:57.760
Yeah, got you.
link |
00:19:59.120
So that's, so we're out of time now.
link |
00:20:03.600
Do you think from a robotics perspective,
link |
00:20:07.200
you know, if you're kind of honest with what cars do,
link |
00:20:09.800
they kind of threaten each other's life all the time.
link |
00:20:14.800
So cars are very, I mean, in order to navigate intersections,
link |
00:20:19.240
there's an assertiveness, there's a risk taking,
link |
00:20:22.240
and if you were to reduce it to an objective function,
link |
00:20:25.200
there's a probability of murder in that function,
link |
00:20:28.720
meaning you killing another human being,
link |
00:20:31.840
and you're using that.
link |
00:20:33.520
First of all, it has to be low enough
link |
00:20:36.880
to be acceptable to you on an ethical level,
link |
00:20:39.640
as an individual human being,
link |
00:20:41.240
but it has to be high enough for people to respect you,
link |
00:20:45.240
to not sort of take advantage of you completely,
link |
00:20:47.480
and jaywalk in front of you, and so on.
link |
00:20:49.560
So, I mean, I don't think there's a right answer here,
link |
00:20:53.080
but how do we solve that?
link |
00:20:56.040
How do we solve that from a robotics perspective
link |
00:20:57.960
when danger and human life is at stake?
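A toy version of the kind of objective function described above, where estimated collision risk is traded off against progress; the weights, the risk ceiling, and the example trajectories are made-up illustration values, not anything from a deployed planner.

def trajectory_cost(progress_m, collision_prob,
                    progress_weight=1.0, risk_weight=1e6,
                    max_acceptable_risk=1e-6):
    # Reject outright anything above the hard risk ceiling.
    if collision_prob > max_acceptable_risk:
        return float("inf")
    # Otherwise: lower cost for more progress, higher cost for more risk.
    # The ratio of the two weights is the "assertiveness" knob: too timid
    # and the car never merges, too bold and the risk term dominates.
    return -progress_weight * progress_m + risk_weight * collision_prob

timid = trajectory_cost(progress_m=2.0, collision_prob=1e-9)       # -2.0 + 0.001
assertive = trajectory_cost(progress_m=10.0, collision_prob=5e-7)  # -10.0 + 0.5
print(min(timid, assertive))  # here the more assertive merge wins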
link |
00:21:00.160
Yeah, as they say, cars don't kill people,
link |
00:21:01.960
people kill people.
link |
00:21:02.960
Kill people, kill people.
link |
00:21:05.080
Right, so I think.
link |
00:21:08.600
And now robotic algorithms would be killing people.
link |
00:21:10.760
Right, so it will be robotics algorithms that are,
link |
00:21:14.400
no, it will be robotic algorithms don't kill people,
link |
00:21:17.000
developers of robotic algorithms kill people, right?
link |
00:21:19.760
I mean, one of the things is people are still in the loop,
link |
00:21:22.960
and at least in the near and midterm,
link |
00:21:26.560
I think people will still be in the loop.
link |
00:21:28.800
At some point, even if it's the developer,
link |
00:21:30.320
like we're not necessarily at the stage
link |
00:21:31.880
where robots are programming autonomous robots
link |
00:21:36.760
with different behaviors quite yet.
link |
00:21:40.160
That's a scary notion, sorry, to interrupt,
link |
00:21:42.280
that a developer has some responsibility
link |
00:21:47.400
in the death of a human being.
link |
00:21:49.680
That's a heavy burden.
link |
00:21:50.560
I mean, I think that's why the whole aspect of ethics
link |
00:21:55.440
in our community is so, so important, right?
link |
00:21:58.480
Like, because it's true, if you think about it,
link |
00:22:03.080
you can basically say,
link |
00:22:04.840
I'm not going to work on weaponized AI, right?
link |
00:22:07.440
Like, people can say, that's not what I'm gonna do.
link |
00:22:09.840
But yet, you are programming algorithms
link |
00:22:12.720
that might be used in healthcare algorithms
link |
00:22:15.600
that might decide whether this person
link |
00:22:17.240
should get this medication or not,
link |
00:22:18.960
and they don't, and they die.
link |
00:22:21.400
Okay, so that is your responsibility, right?
link |
00:22:25.080
And if you're not conscious and aware
link |
00:22:27.320
that you do have that power when you're coding
link |
00:22:30.000
and things like that,
link |
00:22:31.680
I think that's just not a good thing.
link |
00:22:35.000
Like, we need to think about this responsibility
link |
00:22:38.040
as we program robots and computing devices
link |
00:22:41.840
much more than we are.
link |
00:22:44.320
Yeah, so it's not an option to not think about ethics.
link |
00:22:46.960
I think it's a majority, I would say, of computer science.
link |
00:22:51.360
Sort of, it's kind of a hot topic now,
link |
00:22:53.840
I think about bias and so on,
link |
00:22:55.680
but it's, and we'll talk about it,
link |
00:22:57.720
but usually it's kind of,
link |
00:23:00.400
it's like a very particular group of people
link |
00:23:02.680
that work on that.
link |
00:23:04.280
And then, people who do robotics are like,
link |
00:23:06.920
well, I don't have to think about that.
link |
00:23:09.320
There's other smart people thinking about it.
link |
00:23:11.120
It seems that everybody has to think about it.
link |
00:23:14.560
It's not, you can't escape the ethics,
link |
00:23:17.000
whether it's bias or just every aspect of ethics
link |
00:23:21.120
that has to do with human beings.
link |
00:23:22.680
Everyone.
link |
00:23:23.520
So think about, I'm gonna age myself,
link |
00:23:25.680
but I remember when we didn't have like testers, right?
link |
00:23:30.080
And so what did you do?
link |
00:23:31.040
As a developer, you had to test your own code, right?
link |
00:23:33.560
Like you had to go through all the cases
link |
00:23:35.200
and figure it out and, you know,
link |
00:23:36.600
and then they realized that, you know,
link |
00:23:38.560
like we probably need to have testing
link |
00:23:40.560
because we're not getting all the things.
link |
00:23:42.360
And so from there, what happens is like most developers,
link |
00:23:45.480
they do, you know, a little bit of testing,
link |
00:23:47.240
but it's usually like, okay, did my compiler bug out?
link |
00:23:49.720
Let me look at the warnings.
link |
00:23:51.080
Okay, is that acceptable or not?
link |
00:23:52.840
Right?
link |
00:23:53.680
Like that's how you typically think about it as a developer
link |
00:23:55.760
and you just assume that it's going to go
link |
00:23:58.120
to another process and they're gonna test it out.
link |
00:24:01.000
But I think we need to go back to those early days
link |
00:24:04.280
when, you know, you're a developer, you're developing.
link |
00:24:07.520
There should be like this a, you know, okay,
link |
00:24:09.720
let me look at the ethical outcomes of this
link |
00:24:12.120
because there isn't a second like testing ethical testers,
link |
00:24:16.000
right? It's you.
link |
00:24:18.000
We did it back in the early coding days.
link |
00:24:21.120
I think that's where we are with respect to ethics.
link |
00:24:23.240
Like let's go back to what was good practices
link |
00:24:26.240
if only because we were just developing the field.
link |
00:24:30.000
Yeah, and it's a really heavy burden.
link |
00:24:34.000
I've had to feel it recently in the last few months,
link |
00:24:37.520
but I think it's a good one to feel like I've gotten
link |
00:24:39.880
a message more than one from people, you know,
link |
00:24:43.720
I've unfortunately gotten some attention recently
link |
00:24:47.440
and I've gotten messages that say that I have blood
link |
00:24:51.040
on my hands because of working on semi autonomous vehicles.
link |
00:24:56.280
So the idea that you have semi autonomy means people
link |
00:24:59.560
would become, would lose vigilance and so on.
link |
00:25:01.960
That's actually us being humans, as we described.
link |
00:25:05.120
And because of that, because of this idea
link |
00:25:08.080
that we're creating automation,
link |
00:25:10.000
there will be people hurt because of it.
link |
00:25:12.760
And I think that's a beautiful thing.
link |
00:25:14.520
I mean, it's, you know, there's many nights
link |
00:25:16.160
where I wasn't able to sleep because of this notion.
link |
00:25:18.800
You know, you really do think about people that might die
link |
00:25:22.360
because of this technology.
link |
00:25:23.800
Of course, you can then start rationalizing and saying,
link |
00:25:26.520
well, you know what, 40,000 people die
link |
00:25:28.280
in the United States every year
link |
00:25:29.640
and we're trying to ultimately try to save lives.
link |
00:25:32.400
But the reality is your code you've written
link |
00:25:35.800
might kill somebody and that's an important burden
link |
00:25:37.920
to carry with you as you design the code.
link |
00:25:41.200
I don't even think of it as a burden
link |
00:25:43.800
if we train this concept correctly from the beginning.
link |
00:25:47.560
And I use, and not to say that coding is like
link |
00:25:50.320
being a medical doctor, but think about it.
link |
00:25:52.400
Medical doctors, if they've been in situations
link |
00:25:56.080
where their patient didn't survive, right?
link |
00:25:58.320
Do they give up and go away?
link |
00:26:00.800
No, every time they come in,
link |
00:26:02.480
they know that there might be a possibility
link |
00:26:05.440
that this patient might not survive.
link |
00:26:07.240
And so when they approach every decision,
link |
00:26:10.080
like that's in the back of their head.
link |
00:26:11.920
And so why is it that we aren't teaching,
link |
00:26:15.840
and those are tools though, right?
link |
00:26:17.200
They are given some of the tools to address that
link |
00:26:19.680
so that they don't go crazy.
link |
00:26:21.440
But we don't give those tools
link |
00:26:24.200
so that it does feel like a burden
link |
00:26:26.160
versus something of I have a great gift
link |
00:26:28.680
and I can do great, awesome good,
link |
00:26:31.080
but with it comes great responsibility.
link |
00:26:33.320
I mean, that's what we teach in terms of,
link |
00:26:35.840
you think about the medical schools, right?
link |
00:26:37.400
Great gift, great responsibility.
link |
00:26:39.520
I think if we just change the messaging a little,
link |
00:26:42.120
great gift being a developer, great responsibility.
link |
00:26:45.560
And this is how you combine those.
link |
00:26:48.360
But do you think, I mean, this is really interesting.
link |
00:26:51.160
It's outside, I actually have no friends
link |
00:26:54.320
who are sort of surgeons or doctors.
link |
00:26:58.280
I mean, what does it feel like
link |
00:27:00.000
to make a mistake in a surgery and somebody to die
link |
00:27:03.760
because of that?
link |
00:27:04.800
Like is that something you could be taught
link |
00:27:07.000
in medical school sort of how to be accepting of that risk?
link |
00:27:10.600
So because I do a lot of work with healthcare robotics,
link |
00:27:14.960
I have not lost a patient, for example.
link |
00:27:18.480
The first one's always the hardest, right?
link |
00:27:20.880
But they really teach the value, right?
link |
00:27:27.320
So they teach responsibility,
link |
00:27:28.760
but they also teach the value.
link |
00:27:30.800
Like you're saving 40,000,
link |
00:27:34.800
but in order to really feel good about that,
link |
00:27:38.280
when you come to a decision,
link |
00:27:40.120
you have to be able to say at the end,
link |
00:27:42.280
I did all that I could possibly do, right?
link |
00:27:45.320
Versus a, well, I just picked the first widget, right?
link |
00:27:49.160
Like, so every decision is actually thought through.
link |
00:27:52.240
It's not a habit, it's not a,
link |
00:27:53.800
let me just take the best algorithm
link |
00:27:55.320
that my friend gave me, right?
link |
00:27:57.080
It's a, is this it?
link |
00:27:58.640
Is this the best?
link |
00:27:59.520
Have I done my best to do good, right?
link |
00:28:03.120
And so...
link |
00:28:03.960
And I think burden is the wrong word.
link |
00:28:06.400
It's a gift, but you have to treat it extremely seriously.
link |
00:28:10.760
Correct.
link |
00:28:13.280
So on a slightly related note, in a recent paper,
link |
00:28:16.440
The Ugly Truth About Ourselves and Our Robot Creations,
link |
00:28:20.160
you discuss, you highlight some biases
link |
00:28:24.320
that may affect the function of various robotic systems.
link |
00:28:27.120
Can you talk through, if you remember, examples of some?
link |
00:28:30.120
There's a lot of examples.
link |
00:28:31.360
I usually...
link |
00:28:32.200
What is bias, first of all?
link |
00:28:33.040
Yeah, so bias is this,
link |
00:28:37.080
and so bias, which is different than prejudice.
link |
00:28:38.840
So bias is that we all have these preconceived notions
link |
00:28:41.880
about particular, everything from particular groups
link |
00:28:45.960
to habits, to identity, right?
link |
00:28:49.720
So we have these predispositions.
link |
00:28:51.400
And so when we address a problem,
link |
00:28:54.080
we look at a problem and make a decision,
link |
00:28:56.040
those preconceived notions might affect our outputs,
link |
00:29:01.320
our outcomes.
link |
00:29:02.240
So there, the bias can be positive and negative,
link |
00:29:04.680
and then it's prejudice, the negative kind of bias?
link |
00:29:07.680
Prejudice is the negative, right?
link |
00:29:09.160
So prejudice is that not only are you aware of your bias,
link |
00:29:13.520
but you then take it and have a negative outcome,
link |
00:29:18.800
even though you are aware, like...
link |
00:29:20.640
And there could be gray areas too.
link |
00:29:22.920
There's always gray areas.
link |
00:29:24.600
That's the challenging aspect of all ethical questions.
link |
00:29:27.520
So I always like, so there's a funny one.
link |
00:29:29.960
And in fact, I think it might be in the paper
link |
00:29:31.720
because I think I talk about self driving cars.
link |
00:29:34.120
But think about this.
link |
00:29:35.440
We, for teenagers, right?
link |
00:29:39.480
Typically, insurance companies charge quite a bit of money
link |
00:29:44.520
if you have a teenage driver.
link |
00:29:46.760
So you could say that's an age bias, right?
link |
00:29:50.840
But no one will, I mean, parents will be grumpy,
link |
00:29:54.040
but no one really says that that's not fair.
link |
00:29:58.640
That's interesting.
link |
00:29:59.480
We don't, that's right.
link |
00:30:00.960
That's right.
link |
00:30:01.800
It's everybody in human factors and safety research almost,
link |
00:30:06.800
I mean, it's quite ruthlessly critical of teenagers.
link |
00:30:11.480
And we don't question, is that okay?
link |
00:30:13.680
Is that okay to be agist in this kind of way?
link |
00:30:15.960
And it is age, right?
link |
00:30:17.280
It's definitely age, there's no question about it.
link |
00:30:19.560
And so this is the gray area, right?
link |
00:30:23.560
Because you know that teenagers are more likely
link |
00:30:28.560
to be in accidents and so there's actually some data to it.
link |
00:30:31.760
But then if you take that same example and you say,
link |
00:30:34.560
well, I'm going to make the insurance higher
link |
00:30:38.160
for an area of Boston because there's a lot of accidents.
link |
00:30:43.560
And then they find out that that's correlated
link |
00:30:47.000
with socioeconomics, well, then it becomes a problem, right?
link |
00:30:51.000
Like that is not acceptable,
link |
00:30:53.640
but yet the teenager one, which is age, it's ageist, right?
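One quick way to check whether an apparently neutral pricing feature is acting as a proxy for socioeconomics is simply to correlate the two. A minimal sketch; the zone-level accident rates and incomes below are synthetic illustration, not real insurance data.

import statistics

def pearson(xs, ys):
    # Plain Pearson correlation between two equal-length lists.
    mx, my = statistics.fmean(xs), statistics.fmean(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# Per-neighborhood accident rate used to set premiums vs. median income (in $1000s).
accident_rate = [0.9, 0.7, 0.4, 0.3, 0.2]
median_income = [31, 38, 55, 64, 72]
print(pearson(accident_rate, median_income))  # strongly negative here, i.e. the
# "neutral" feature is effectively pricing by income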
link |
00:31:00.160
And the way we figure that out as society
link |
00:31:02.600
by having conversations, by having discourse,
link |
00:31:04.840
I mean, throughout history, the definition
link |
00:31:07.040
of what is ethical or not has changed
link |
00:31:09.960
and hopefully always for the better.
link |
00:31:12.880
Correct, correct.
link |
00:31:14.120
So in terms of bias or prejudice in robotic,
link |
00:31:19.120
in algorithms, what examples do you sometimes think about?
link |
00:31:24.120
So I think about quite a bit the medical domain
link |
00:31:27.600
just because historically, right?
link |
00:31:29.960
The health care domain has had these biases,
link |
00:31:33.360
typically based on gender and ethnicity primarily,
link |
00:31:38.760
a little on age, but not so much.
link |
00:31:42.440
Historically, if you think about FDA and drug trials,
link |
00:31:47.960
it's harder to find women that aren't childbearing,
link |
00:31:53.320
and so you may not test drugs on them at the same level.
link |
00:31:55.680
Right, so there's these things.
link |
00:31:57.760
And so if you think about robotics, right?
link |
00:32:01.680
Something as simple as, I like to design an exoskeleton, right?
link |
00:32:06.680
What should the material be?
link |
00:32:07.960
What should the weight be?
link |
00:32:08.880
What should the form factor be?
link |
00:32:12.480
Are you, who are you gonna design it around?
link |
00:32:15.680
I will say that in the US,
link |
00:32:17.920
women's average height and weight is slightly different
link |
00:32:21.360
than guys, so who are you gonna choose?
link |
00:32:24.560
Like, if you're not thinking about it from the beginning
link |
00:32:28.240
as, okay, when I design this and I look at the algorithms
link |
00:32:32.840
and I design the control system and the forces
link |
00:32:35.000
and the torques, if you're not thinking about,
link |
00:32:37.400
well, you have different types of body structure,
link |
00:32:40.800
you're gonna design to what you're used to.
link |
00:32:43.640
Oh, this fits in all the folks in my lab, right?
link |
00:32:47.400
So thinking about it from the very beginning is important.
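A tiny sketch of what thinking about it from the beginning could look like for the torque side of an exoskeleton controller: derive the assist limit from the wearer's own body rather than hard-coding one tuned to whoever happens to be in the lab. All constants are placeholder assumptions for illustration, not values from any real device.

def assist_torque_limit_nm(body_mass_kg, lever_arm_m=0.4, assist_fraction=0.3, g=9.81):
    # Cap the assist at a fraction of the gravitational moment this wearer's
    # own body produces about the joint, instead of one fixed number.
    return assist_fraction * body_mass_kg * g * lever_arm_m

print(assist_torque_limit_nm(62.0))  # lighter wearer -> lower ceiling (~73 N*m)
print(assist_torque_limit_nm(88.0))  # heavier wearer -> higher ceiling (~104 N*m)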
link |
00:32:50.600
What about sort of algorithms that train on data?
link |
00:32:53.680
Data kind of thing.
link |
00:32:56.080
Sadly, our society already has a lot of negative bias.
link |
00:33:01.320
And so if we collect a lot of data,
link |
00:33:04.760
even if it's in a balanced way,
link |
00:33:06.280
it's going to contain the same bias
link |
00:33:07.800
that a society contains.
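One concrete way that inherited bias shows up, as a sketch: group the historical outcomes by demographic and compare rates. The group labels and counts below are entirely made up for illustration.

from collections import defaultdict

def favorable_rate_by_group(records):
    # records: (group_label, got_favorable_outcome) pairs
    totals, favorable = defaultdict(int), defaultdict(int)
    for group, outcome in records:
        totals[group] += 1
        favorable[group] += int(outcome)
    return {g: favorable[g] / totals[g] for g in totals}

# "Balanced" in row counts (100 each), yet the historical outcomes differ sharply.
data = ([("group_a", True)] * 80 + [("group_a", False)] * 20
        + [("group_b", True)] * 55 + [("group_b", False)] * 45)
print(favorable_rate_by_group(data))  # {'group_a': 0.8, 'group_b': 0.55}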
link |
00:33:09.000
And so, yeah, are there things there that bother you?
link |
00:33:13.720
Yeah, so you actually said something.
link |
00:33:15.600
You had said how we have biases,
link |
00:33:19.920
but hopefully we learn from them and we become better, right?
link |
00:33:23.120
And so that's where we are now, right?
link |
00:33:25.120
So the data that we're collecting is historic.
link |
00:33:28.200
It's, so it's based on these things.
link |
00:33:30.120
When we knew it was bad to discriminate,
link |
00:33:32.600
but that's the data we have
link |
00:33:33.920
and we're trying to fix it now,
link |
00:33:36.080
but we're fixing it based on the data
link |
00:33:37.840
that was used in the first place.
link |
00:33:39.480
Fix it in post.
link |
00:33:40.640
Right, and so the decisions,
link |
00:33:43.720
and you can look at everything from the whole aspect
link |
00:33:46.880
of predictive policing, criminal recidivism.
link |
00:33:51.400
There was a recent paper that had the healthcare algorithms,
link |
00:33:54.320
which had kind of a sensational title.
link |
00:33:58.280
I'm not pro sensationalism in titles,
link |
00:34:01.200
but again, you read it, right?
link |
00:34:03.720
So it makes you read it,
link |
00:34:05.760
but I'm like really like, ah, you could have...
link |
00:34:08.960
What's the topic of the sensationalism?
link |
00:34:10.840
I mean, what's underneath it?
link |
00:34:13.320
What's, if you could sort of educate me
link |
00:34:16.320
on what kind of bias creeps into the healthcare space.
link |
00:34:19.160
Yeah, so... I mean, you already kind of mentioned...
link |
00:34:21.600
Yeah, so this one was,
link |
00:34:23.120
the headline was racist AI algorithms.
link |
00:34:27.520
Okay, like, okay, that's totally a clickbait title.
link |
00:34:30.880
And so you looked at it,
link |
00:34:32.160
and so there was data that these researchers had collected.
link |
00:34:36.680
I believe I want to say it was either science or nature.
link |
00:34:39.440
It just was just published,
link |
00:34:40.680
but they didn't have a sensational title.
link |
00:34:42.640
It was like the media.
link |
00:34:44.920
And so they had looked at demographics,
link |
00:34:47.520
I believe, between black and white women, right?
link |
00:34:52.200
And they showed that there was a discrepancy
link |
00:34:56.880
in the outcomes, right?
link |
00:34:59.240
And so, and it was tied to ethnicity, tied to race.
link |
00:35:02.440
The piece that the researchers did
link |
00:35:04.840
actually went through the whole analysis, but of course...
link |
00:35:08.840
I mean, the journalists with AI are problematic
link |
00:35:12.120
across the board, let's say.
link |
00:35:14.360
And so this is a problem, right?
link |
00:35:16.200
And so there's this thing about,
link |
00:35:18.280
oh, AI, it has all these problems,
link |
00:35:20.600
we're doing it on historical data,
link |
00:35:22.920
and the outcomes are biased based on gender
link |
00:35:26.080
or ethnicity or age.
link |
00:35:28.120
But I'm always saying, it's like, yes,
link |
00:35:30.840
we need to do better, right?
link |
00:35:32.560
We need to do better.
link |
00:35:33.680
It is our duty to do better,
link |
00:35:36.840
but the worst AI is still better than us.
link |
00:35:39.880
Like, you take the best of us,
link |
00:35:42.000
and we're still worse than the worst AI,
link |
00:35:44.200
at least in terms of these things.
link |
00:35:45.680
And that's actually not discussed, right?
link |
00:35:48.040
And so I think, and that's why the sensational title, right?
link |
00:35:52.000
And so it's like, so then you can have individuals go like,
link |
00:35:54.360
oh, we don't need to use this AI.
link |
00:35:55.600
I'm like, oh, no, no, no, no.
link |
00:35:56.840
I want the AI instead of the doctors
link |
00:36:01.000
that provided that data,
link |
00:36:02.080
because it's still better than that, right?
link |
00:36:04.240
I think that's really important to linger on.
link |
00:36:06.840
Is the idea that this AI is racist,
link |
00:36:09.640
it's like, well, compared to what?
link |
00:36:14.920
Sort of the, I think we set, unfortunately,
link |
00:36:20.120
way too high of a bar for AI algorithms.
link |
00:36:23.240
And in the ethical space,
link |
00:36:24.760
where perfect is, I would argue, probably impossible.
link |
00:36:28.920
Then if we set the bar of perfection, essentially,
link |
00:36:33.040
of it has to be perfectly fair, whatever that means,
link |
00:36:36.440
is it means we're setting it up for failure.
link |
00:36:39.640
But that's really important to say what you just said,
link |
00:36:42.000
which is, well, it's still better than it is.
link |
00:36:44.960
And one of the things I think
link |
00:36:46.920
that we don't get enough credit for just in terms of,
link |
00:36:51.240
as developers, is that you can now poke at it, right?
link |
00:36:55.880
So it's harder to say, is this hospital,
link |
00:36:58.880
is this city doing something, right?
link |
00:37:01.080
Until someone brings in a civil case, right?
link |
00:37:04.400
Well, with AI, it can process
link |
00:37:05.920
through all this data and say, hey, yes,
link |
00:37:09.720
there's an issue here, but here it is, we've identified it.
link |
00:37:14.520
And then the next step is to fix it.
link |
00:37:16.200
I mean, that's a nice feedback loop,
link |
00:37:18.120
versus like waiting for someone to sue someone else
link |
00:37:21.360
before it's fixed, right?
link |
00:37:22.800
And so I think that power,
link |
00:37:25.120
we need to capitalize on a little bit more, right?
link |
00:37:27.640
Instead of having the sensational titles, have the,
link |
00:37:31.520
okay, this is a problem, and this is how we're fixing it.
link |
00:37:34.600
And people are putting money to fix it
link |
00:37:36.560
because we can make it better.
link |
00:37:38.640
I look at like facial recognition, how Joy,
link |
00:37:43.000
she basically called out a couple of companies and said,
link |
00:37:45.840
hey, and most of them were like, oh, embarrassment.
link |
00:37:50.520
And the next time it had been fixed, right?
link |
00:37:53.360
It had been fixed better, right?
link |
00:37:54.920
And then it was like, oh, here's some more issues.
link |
00:37:56.840
And I think that conversation then moves that needle
link |
00:38:01.840
to having much more of fair and unbiased and ethical aspects.
link |
00:38:07.640
As long as both sides, the developers are willing to say,
link |
00:38:10.640
okay, I hear you, yes, we are going to improve.
link |
00:38:14.120
And you have other developers who are like,
link |
00:38:16.120
hey, AI, it's wrong, but I love it, right?
link |
00:38:19.720
Yes.
link |
00:38:20.600
So speaking of this really nice notion that AI is maybe flawed,
link |
00:38:25.480
but better than humans.
link |
00:38:27.080
So just made me think of it,
link |
00:38:29.200
one example of flawed humans is our political system.
link |
00:38:34.120
Do you think, or you said judicial as well,
link |
00:38:38.720
do you have a hope for AI sort of being elected
link |
00:38:46.160
for president or running our Congress
link |
00:38:49.800
or being able to be a powerful representative of the people?
link |
00:38:54.000
So I mentioned, and I truly believe that
link |
00:38:58.000
this whole world of AI is in partnerships with people.
link |
00:39:01.440
And so what does that mean?
link |
00:39:02.600
I don't believe or maybe I just don't,
link |
00:39:07.800
I don't believe that we should have an AI for president,
link |
00:39:11.600
but I do believe that a president should use AI
link |
00:39:14.640
as an advisor, right?
link |
00:39:16.080
Like if you think about it,
link |
00:39:17.560
every president has a cabinet of individuals
link |
00:39:22.080
that have different expertise
link |
00:39:23.840
that they should listen to, right?
link |
00:39:26.200
Like that's kind of what we do.
link |
00:39:28.160
And you put smart people with smart expertise
link |
00:39:31.280
around certain issues and you listen.
link |
00:39:33.600
I don't see why AI can't function
link |
00:39:35.840
as one of those smart individuals giving input.
link |
00:39:39.400
So maybe there's an AI on healthcare,
link |
00:39:41.200
maybe there's an AI on education and right?
link |
00:39:44.000
Like all of these things that a human is processing, right?
link |
00:39:48.920
Because at the end of the day there's people that are human
link |
00:39:53.720
that are going to be at the end of the decision.
link |
00:39:55.680
And I don't think as a world, as a culture, as a society
link |
00:39:59.480
that we would totally, and this is us,
link |
00:40:03.200
like this is some fallacy about us,
link |
00:40:05.480
but we need to see that leader, that person as human.
link |
00:40:12.000
And most people don't realize that like leaders
link |
00:40:15.600
have a whole lot of advice, right?
link |
00:40:17.160
Like when they say something,
link |
00:40:18.360
it's not that they woke up,
link |
00:40:19.760
well usually they don't wake up in the morning
link |
00:40:22.000
and be like, I have a brilliant idea, right?
link |
00:40:24.560
It's usually a, okay, let me listen.
link |
00:40:26.840
I have a brilliant idea
link |
00:40:27.680
but let me get a little bit of feedback on this, like, okay.
link |
00:40:31.160
And then it's a, yeah, that was an awesome idea
link |
00:40:33.240
or it's like, yeah, let me go back.
link |
00:40:36.000
We already talked to a bunch of them,
link |
00:40:37.520
but are there some possible solutions
link |
00:40:41.560
to the biases present in our algorithms
link |
00:40:45.320
beyond what we just talked about?
link |
00:40:46.760
So I think there's two paths.
link |
00:40:49.400
One is to figure out how to systematically
link |
00:40:53.840
do the feedback and correction.
link |
00:40:56.600
So right now it's ad hoc, right?
link |
00:40:58.240
It's a researcher identify some outcomes
link |
00:41:02.520
that are not, don't seem to be fair, right?
link |
00:41:05.480
They publish it, they write about it
link |
00:41:08.000
and the, either the developer
link |
00:41:10.600
or the companies that have adopted the algorithms
link |
00:41:13.200
may try to fix it, right?
link |
00:41:14.320
And so it's really ad hoc and it's not systematic.
link |
00:41:18.920
There's, it's just, it's kind of like, I'm a researcher,
link |
00:41:22.520
that seems like an interesting problem,
link |
00:41:24.720
which means that there's a whole lot out there
link |
00:41:26.560
that's not being looked at, right?
link |
00:41:29.160
Because it's kind of researcher driven.
link |
00:41:32.960
And I don't necessarily have a solution,
link |
00:41:35.680
but that process, I think could be done a little bit better.
link |
00:41:41.240
One way is I'm going to poke a little bit
link |
00:41:45.040
at some of the corporations, right?
link |
00:41:48.280
Like maybe the corporations, when they think about a product,
link |
00:41:51.720
they should, instead of,
link |
00:41:53.920
in addition to hiring these, bug, they give these...
link |
00:41:59.880
Oh yeah, yeah, yeah.
link |
00:42:01.600
Like awards when you find a bug.
link |
00:42:02.960
Yeah, security bug, you know, let's put it like,
link |
00:42:07.200
we will give the, whatever the award is
link |
00:42:09.760
that we give for the people who find these security holes,
link |
00:42:12.600
find an ethics hole, right?
link |
00:42:14.000
Like find an unfairness hole
link |
00:42:15.400
and we will pay you X for each one you find.
link |
00:42:17.840
I mean, why can't they do that?
link |
00:42:19.800
One is a win win, they show that they're concerned about it,
link |
00:42:23.080
that this is important,
link |
00:42:24.320
and they don't have to necessarily dedicate
link |
00:42:26.360
their own internal resources.
link |
00:42:28.920
And it also means that everyone who has their own bias lens,
link |
00:42:32.800
like I'm interested in age,
link |
00:42:34.600
and so I'll find the ones based on age,
link |
00:42:36.560
and I'm interested in gender, right?
link |
00:42:38.400
Which means that you get all of these different perspectives.
link |
00:42:41.560
But you think of it in a data driven way.
link |
00:42:43.360
So like, sort of, if we look at a company like Twitter,
link |
00:42:48.360
it's under a lot of fire for discriminating
link |
00:42:53.040
against certain political beliefs.
link |
00:42:54.840
Correct.
link |
00:42:55.920
And sort of, there's a lot of people,
link |
00:42:58.120
this is the sad thing,
link |
00:42:59.280
because I know how hard the problem is,
link |
00:43:00.760
and I know the Twitter folks are working really hard at it,
link |
00:43:03.120
even Facebook, that everyone seems to hate
link |
00:43:05.000
are working really hard at this.
link |
00:43:06.920
You know, the kind of evidence that people bring
link |
00:43:09.360
is basically anecdotal evidence.
link |
00:43:11.280
Well, me or my friend, all we said is X,
link |
00:43:15.040
and for that we got banned.
link |
00:43:17.160
And that's kind of a discussion of saying,
link |
00:43:20.920
well, look, that's usually, first of all,
link |
00:43:23.240
the whole thing is taken out of context.
link |
00:43:25.480
So they present sort of anecdotal evidence.
link |
00:43:28.640
And how are you supposed to, as a company,
link |
00:43:31.120
in a healthy way have a discourse
link |
00:43:33.040
about what is and isn't ethical?
link |
00:43:35.720
What, how do we make algorithms ethical
link |
00:43:38.040
when people are just blowing everything out of proportion?
link |
00:43:40.760
Like, they're outraged about a particular
link |
00:43:45.120
anecdotal piece of evidence that's very difficult
link |
00:43:48.200
to sort of contextualize in a big data driven way.
link |
00:43:52.640
Do you have a hope for companies like Twitter and Facebook?
link |
00:43:55.760
Yeah, so I think there's a couple of things going on, right?
link |
00:43:59.800
First off, the, remember this whole aspect
link |
00:44:04.840
of we are becoming reliant on technology,
link |
00:44:09.400
we're also becoming reliant on a lot of these,
link |
00:44:14.360
the apps and the resources that are provided, right?
link |
00:44:18.000
So some of it is kind of anger, like, I need you, right?
link |
00:44:21.640
And you're not working for me, right?
link |
00:44:23.120
Yeah, you're not working for me, they're right.
link |
00:44:24.640
But I think, and so some of it,
link |
00:44:27.280
and I wish that there was a little bit
link |
00:44:31.400
of change of rethinking.
link |
00:44:32.840
So some of it is like, oh, we'll fix it in house.
link |
00:44:35.560
No, that's like, okay, I'm a fox
link |
00:44:39.000
and I'm going to watch these hens
link |
00:44:40.960
because I think it's a problem that foxes eat hens.
link |
00:44:44.080
No, right?
link |
00:44:45.160
Like use, like be good citizens and say, look,
link |
00:44:48.840
we have a problem and we are willing to open ourselves up
link |
00:44:54.800
for others to come in and look at it
link |
00:44:57.040
and not try to fix it in house.
link |
00:44:58.720
Because if you fix it in house, there's conflict of interest.
link |
00:45:01.960
If I find something, I'm probably going to want to fix it
link |
00:45:04.440
and hopefully the media won't pick it up, right?
link |
00:45:07.320
And that then caused this distrust
link |
00:45:09.320
because someone inside is going to be mad at you
link |
00:45:11.880
and go out and talk about how,
link |
00:45:13.600
yeah, they canned the resume screening system
link |
00:45:16.440
because it wasn't really picking the best people.
link |
00:45:19.320
Like just say, look, we had this issue.
link |
00:45:22.760
Community, help us fix it.
link |
00:45:24.440
And we will give you like, you know,
link |
00:45:25.800
the bug finder fee if you do.
link |
00:45:28.120
So do you ever hope that the community,
link |
00:45:31.280
us as a human civilization on the whole is good
link |
00:45:35.360
and can be trusted to guide the future
link |
00:45:38.520
of our civilization into positive direction?
link |
00:45:40.960
I think so.
link |
00:45:41.880
So I'm an optimist, right?
link |
00:45:44.120
And, you know, there were some dark times in history always.
link |
00:45:50.000
I think now we're in one of those dark times.
link |
00:45:52.920
I truly do.
link |
00:45:53.760
In which aspect?
link |
00:45:54.600
The polarization.
link |
00:45:56.240
And it's not just US, right?
link |
00:45:57.560
So if it was just US, I'd be like, yeah, it's a US thing.
link |
00:46:00.040
But we're seeing it like worldwide this polarization.
link |
00:46:04.360
And so I worry about that.
link |
00:46:06.560
But I do fundamentally believe that at the end of the day,
link |
00:46:12.000
people are good, right?
link |
00:46:13.440
And why do I say that?
link |
00:46:14.760
Because anytime there's a scenario
link |
00:46:17.680
where people are in danger, and I will use,
link |
00:46:20.800
so Atlanta, we had a snowmageddon
link |
00:46:24.240
and people can laugh about that.
link |
00:46:26.600
People at the time, so the city closed for, you know,
link |
00:46:30.440
little snow, but it was ice and the city closed down.
link |
00:46:33.440
But you had people opening up their homes and saying,
link |
00:46:35.680
hey, you have nowhere to go.
link |
00:46:37.760
Come to my house, right?
link |
00:46:39.000
Hotels were just saying like, sleep on the floor.
link |
00:46:41.760
Like places like, you know, the grocery stores were like,
link |
00:46:44.360
hey, here's food.
link |
00:46:45.880
There was no like, oh, how much are you gonna pay me?
link |
00:46:47.880
It was like this, such a community.
link |
00:46:50.440
And like people who didn't know each other,
link |
00:46:52.080
strangers were just like, can I give you a ride home?
link |
00:46:55.440
And that was a point I was like, you know what?
link |
00:46:57.760
Like.
link |
00:46:59.600
That reveals that the deeper thing
link |
00:47:02.000
is there's a compassion or love that we all have within us.
link |
00:47:06.840
It's just that when all of that is taken care of
link |
00:47:09.400
and we get bored, we love drama.
link |
00:47:11.120
Yes.
link |
00:47:12.440
And that's, I think almost like the division is a sign
link |
00:47:15.240
of the times being good, is that it's just entertaining
link |
00:47:18.960
on some unpleasant mammalian level to watch,
link |
00:47:24.120
to disagree with others.
link |
00:47:26.040
And Twitter and Facebook are actually taking advantage
link |
00:47:30.160
of that in a sense because it brings you back
link |
00:47:33.120
to the platform and they're advertiser driven
link |
00:47:36.040
so they make a lot of money.
link |
00:47:37.520
So you go back and you flick.
link |
00:47:39.160
Love doesn't sell quite as well in terms of advertisement.
link |
00:47:43.560
It doesn't.
link |
00:47:44.800
So you've started your career
link |
00:47:46.840
at NASA Jet Propulsion Laboratory.
link |
00:47:49.000
But before I ask a few questions there,
link |
00:47:51.880
have you happened to have ever seen Space Odyssey,
link |
00:47:54.320
2001 Space Odyssey?
link |
00:47:57.080
Yes.
link |
00:47:57.840
Okay, do you think HAL 9000?
link |
00:48:01.400
So we're talking about ethics.
link |
00:48:03.360
Do you think HAL did the right thing
link |
00:48:06.640
by taking the priority of the mission
link |
00:48:08.520
over the lives of the astronauts?
link |
00:48:10.200
Do you think HAL is good or evil?
link |
00:48:15.880
Easy questions.
link |
00:48:16.880
Yeah.
link |
00:48:19.360
HAL was misguided.
link |
00:48:21.360
You're one of the people that would be in charge
link |
00:48:24.040
of an algorithm like HAL.
link |
00:48:26.280
So how would you do better?
link |
00:48:28.240
If you think about what happened was
link |
00:48:32.200
there was no fail safe, right?
link |
00:48:35.280
So we aim for perfection, right?
link |
00:48:37.680
Like what is that?
link |
00:48:38.520
I'm gonna make something that I think is perfect.
link |
00:48:40.760
But if my assumptions are wrong,
link |
00:48:44.520
it'll be perfect based on the wrong assumptions, right?
link |
00:48:47.480
That's something that you don't know
link |
00:48:50.400
until you deploy and then you're like, oh yeah, messed up.
link |
00:48:53.720
But what that means is that when we design software
link |
00:48:57.440
such as in Space Odyssey,
link |
00:48:59.640
when we put things out that there has to be a fail safe.
link |
00:49:03.160
There has to be the ability that once it's out there,
link |
00:49:06.560
we can grade it as an F and it fails
link |
00:49:10.280
and it doesn't continue, right?
link |
00:49:12.120
There's some way that it can be brought in
link |
00:49:15.120
and removed, and that's the key aspect.
link |
00:49:18.600
Because that's what happened with HAL.
link |
00:49:20.120
It was like assumptions were wrong.
link |
00:49:22.600
It was perfectly correct based on those assumptions
link |
00:49:27.040
and there was no way to change it,
link |
00:49:30.280
change the assumptions at all.
link |
00:49:33.240
And to change it, the fallback would be to humans.
link |
00:49:36.240
You ultimately think like humans should be,
link |
00:49:41.040
it's not turtles or AI all the way down.
link |
00:49:44.840
It's at some point, there's a human
link |
00:49:46.440
that actually makes this change.
link |
00:49:47.280
I still think that, and again,
link |
00:49:49.040
because I do human robot interaction,
link |
00:49:50.680
I still think the human needs to be part
link |
00:49:53.480
of the equation at some point.
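A minimal sketch of the fail-safe idea described here: an autonomous loop keeps running only while its assumptions hold, and otherwise it stops and hands control back to a human. The hook names (check_assumptions, autonomous_step, request_human_takeover) are hypothetical placeholders, not any real system's API.

```python
# Fail-safe loop: grade the system "F" and fall back to a human
# the moment its underlying assumptions stop holding.
def run_with_failsafe(check_assumptions, autonomous_step, request_human_takeover,
                      max_steps=1000):
    for step in range(max_steps):
        if not check_assumptions():
            # Assumptions violated: halt autonomy, hand control to a person.
            request_human_takeover(reason=f"assumption check failed at step {step}")
            return "halted"
        autonomous_step()
    return "completed"

# Usage sketch: real checks would cover sensor health, confidence bounds,
# geofences, and so on. Placeholders here just keep the example runnable.
status = run_with_failsafe(
    check_assumptions=lambda: True,            # placeholder: always healthy
    autonomous_step=lambda: None,              # placeholder: one control step
    request_human_takeover=lambda reason: print("Human takeover:", reason),
    max_steps=3,
)
print(status)  # "completed"
```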
link |
00:49:55.880
So what, just looking back,
link |
00:49:57.880
what are some fascinating things in robotic space
link |
00:50:01.280
that NASA was working at the time?
link |
00:50:02.880
Or just in general, what have you gotten to play with
link |
00:50:07.080
and what are your memories from working at NASA?
link |
00:50:09.480
Yeah, so one of my first memories was,
link |
00:50:14.000
they were working on a surgical robot system
link |
00:50:18.280
that could do eye surgery, right?
link |
00:50:21.840
And this was back in, oh my gosh,
link |
00:50:23.920
it must have been, oh, maybe 92, 93, 94.
link |
00:50:30.560
So it's like almost like a remote operation.
link |
00:50:32.840
Yeah, it was remote operation.
link |
00:50:34.520
And in fact, you can even find some old tech reports on it.
link |
00:50:38.360
So think of it, like now we have DaVinci, right?
link |
00:50:41.600
Like think of it, but these were like the late 90s, right?
link |
00:50:45.840
And I remember going into the lab one day
link |
00:50:48.200
and I was like, what's that, right?
link |
00:50:50.960
And of course it wasn't pretty, right?
link |
00:50:53.880
Cause of the technology, but it was like functional.
link |
00:50:56.600
And you had this individual that could use
link |
00:50:59.160
the version of haptics to actually do the surgery.
link |
00:51:01.880
And they had this mockup of a human face and like the eyeballs
link |
00:51:05.520
and you can see this little drill.
link |
00:51:08.360
And I was like, oh, that is so cool.
link |
00:51:11.640
That one I vividly remember
link |
00:51:13.640
because it was so outside of my like possible thoughts
link |
00:51:18.560
of what could be done.
link |
00:51:19.960
It's the kind of precision.
link |
00:51:21.280
And I mean, what's the most amazing part of a thing like that?
link |
00:51:26.040
I think it was the precision.
link |
00:51:28.160
It was the kind of first time
link |
00:51:31.880
that I had physically seen this robot machine,
link |
00:51:37.440
human interface, right?
link |
00:51:39.560
Versus, cause manufacturing had been,
link |
00:51:42.320
you saw those kind of big robots, right?
link |
00:51:44.480
But this was like, oh, this is in a person.
link |
00:51:48.000
There's a person and a robot like in the same space.
link |
00:51:51.360
Meeting them in person.
link |
00:51:52.960
Like for me, it was a magical moment that I can't,
link |
00:51:55.960
describe, it was life transforming, when I recently met
link |
00:51:58.880
Spotmini from Boston Dynamics.
link |
00:52:00.600
Oh, see.
link |
00:52:01.440
I don't know why, but on the human robot interaction,
link |
00:52:04.640
for some reason I realized how easy it is
link |
00:52:07.720
to anthropomorphize.
link |
00:52:09.760
And it was, I don't know, it was almost like falling in love
link |
00:52:13.440
with this feeling of meeting.
link |
00:52:14.720
And I've obviously seen these robots a lot in video and so on,
link |
00:52:18.200
but meeting in person, just having that one on one time
link |
00:52:20.920
is different.
link |
00:52:21.760
It's different.
link |
00:52:22.600
So have you had a robot like that in your life
link |
00:52:25.080
that made you maybe fall in love with robotics?
link |
00:52:28.320
Sort of like meeting in person?
link |
00:52:32.160
I mean, I loved robotics.
link |
00:52:35.040
From the beginning.
link |
00:52:35.880
Yeah, so that was as a 12 year old,
link |
00:52:37.920
like I'm gonna be a roboticist.
link |
00:52:39.520
Actually was, I called it cybernetics,
link |
00:52:41.240
but so my motivation was Bionic Woman.
link |
00:52:44.760
I don't know if you know that.
link |
00:52:46.320
And so, I mean, that was like a seminal moment,
link |
00:52:49.560
but I didn't meet like that was TV, right?
link |
00:52:52.400
Like it wasn't like I was in the same space and I met,
link |
00:52:54.600
I was like, oh my gosh, you're like real.
link |
00:52:56.600
Just looking at Bionic Woman, which by the way,
link |
00:52:58.880
because I read that about you, I watched a bit of it
link |
00:53:03.280
and it's just so, no offense, terrible.
link |
00:53:05.640
It's cheesy, look at it now.
link |
00:53:08.400
It's cheesy.
link |
00:53:09.240
I've seen a couple of reruns lately.
link |
00:53:11.560
But of course at the time, it probably
link |
00:53:15.320
captured your imagination.
link |
00:53:16.640
But this is out of fix.
link |
00:53:17.480
I shouldn't.
link |
00:53:20.080
Especially when you're younger, just capture you.
link |
00:53:23.120
But which aspect, did you think of it,
link |
00:53:24.720
you mentioned cybernetics, did you think of it as robotics
link |
00:53:27.720
or did you think of it as almost constructing
link |
00:53:30.120
artificial beings?
link |
00:53:31.640
Like is it the intelligent part
link |
00:53:34.160
that captured your fascination
link |
00:53:37.000
or was it the whole thing?
link |
00:53:38.040
Like even just the limbs and just the.
link |
00:53:39.800
So for me, it would have, in another world,
link |
00:53:42.880
I probably would have been more of a biomedical engineer
link |
00:53:46.800
because what fascinated me was the parts,
link |
00:53:50.000
like the bionic parts, the limbs, those aspects of it.
link |
00:53:55.040
Are you especially drawn to humanoid
link |
00:53:57.120
or human like robots?
link |
00:53:59.600
I would say human like, not humanoid, right?
link |
00:54:03.040
And when I say human like, I think it's this aspect
link |
00:54:05.880
of that interaction, whether it's social
link |
00:54:09.160
and it's like a dog, right?
link |
00:54:10.680
Like that's human like, because it understands us,
link |
00:54:14.120
it interacts with us at that very social level
link |
00:54:18.480
to, you know, humanoid is a part of that,
link |
00:54:21.880
but only if they interact with us as if we are human.
link |
00:54:27.880
But just to linger on NASA for a little bit,
link |
00:54:30.920
what do you think maybe if you have other memories,
link |
00:54:34.080
but also what do you think is the future
link |
00:54:35.920
of robots in space?
link |
00:54:38.560
We mentioned HAL, but there's incredible robots
link |
00:54:41.880
that NASA is working on in general,
link |
00:54:43.400
thinking about in our, as we venture out,
link |
00:54:48.160
human civilization ventures out into space.
link |
00:54:50.440
What do you think the future of robots is there?
link |
00:54:52.240
Yeah, so I mean, there's the near term.
link |
00:54:53.680
For example, they just announced the rover
link |
00:54:57.280
that's going to the moon, which, you know,
link |
00:55:00.760
that's kind of exciting, but that's like near term.
link |
00:55:06.040
You know, my favorite, favorite, favorite series
link |
00:55:11.120
is Star Trek, right?
link |
00:55:13.280
You know, I really hope and even Star Trek,
link |
00:55:17.160
like if I calculate the years, I wouldn't be alive,
link |
00:55:20.080
but I would really, really love to be in that world.
link |
00:55:26.680
Like even if it's just at the beginning,
link |
00:55:28.440
like, you know, like voyage, like adventure one.
link |
00:55:33.160
So basically living in space.
link |
00:55:35.720
Yeah.
link |
00:55:36.560
With what robots, what are robots?
link |
00:55:39.720
Data.
link |
00:55:40.560
What role?
link |
00:55:41.400
Data would have to be, even though that wasn't,
link |
00:55:42.840
you know, that was like later, but.
link |
00:55:44.760
So Data is a robot that has human like qualities.
link |
00:55:49.160
Right, without the emotion ship, yeah.
link |
00:55:51.080
You don't like emotion in your robots.
link |
00:55:52.240
Well, so Data with the emotion chip was kind of a mess, right?
link |
00:55:58.560
It took a while for that, for him to adapt.
link |
00:56:04.640
But, and so why was that an issue?
link |
00:56:08.600
The issue is, is that emotions make us irrational agents.
link |
00:56:14.240
That's the problem.
link |
00:56:16.280
And yet he could think through things,
link |
00:56:20.040
even if it was based on an emotional scenario, right?
link |
00:56:23.440
Based on pros and cons.
link |
00:56:25.080
But as soon as you made him emotional,
link |
00:56:28.520
one of the metrics he used for evaluation
link |
00:56:31.160
was his own emotions.
link |
00:56:33.280
Not people around him, right?
link |
00:56:35.480
Like, and so.
link |
00:56:37.280
We do that as children, right?
link |
00:56:39.040
So we're very egocentric when we're young.
link |
00:56:40.880
We are very egocentric.
link |
00:56:42.320
And so, isn't that just an early version
link |
00:56:44.920
of the emotion chip then?
link |
00:56:46.400
I haven't watched much Star Trek.
link |
00:56:48.280
Except I have also met adults, right?
link |
00:56:52.480
And so that is a developmental process.
link |
00:56:54.640
And I'm sure there's a bunch of psychologists
link |
00:56:57.640
that can go through, like you can have a 60 year old adult
link |
00:57:00.680
who has the emotional maturity of a 10 year old, right?
link |
00:57:04.680
And so there's various phases
link |
00:57:07.000
that people should go through in order to evolve.
link |
00:57:10.000
And sometimes you don't.
link |
00:57:11.480
So how much psychology do you think
link |
00:57:14.880
a topic that's rarely mentioned in robotics,
link |
00:57:17.640
but how much does psychology come to play
link |
00:57:19.720
when you're talking about HRI, human robot interaction?
link |
00:57:23.600
When you have to have robots
link |
00:57:25.000
that actually interact with you?
link |
00:57:26.160
Tons.
link |
00:57:27.000
So we, like my group, as well as I read a lot
link |
00:57:31.360
in the cognitive science literature
link |
00:57:33.280
as well as the psychology literature.
link |
00:57:36.160
Because they understand a lot about human human relations
link |
00:57:42.720
and developmental milestones and things like that.
link |
00:57:45.960
And so we tend to look to see what's been done out there.
link |
00:57:53.120
Sometimes what we'll do is we'll try to match that to see
link |
00:57:56.520
is that human human relationship the same as human robot?
link |
00:58:01.000
Sometimes it is and sometimes it's different.
link |
00:58:03.080
And then when it's different, we have to,
link |
00:58:04.760
we try to figure out, okay,
link |
00:58:06.440
why is it different in this scenario?
link |
00:58:09.040
But it's the same in the other scenario, right?
link |
00:58:11.920
And so we try to do that quite a bit.
link |
00:58:15.320
Would you say that's,
link |
00:58:16.360
if we're looking at the future of human robot interaction,
link |
00:58:19.120
would you say the psychology piece is the hardest?
link |
00:58:22.040
Like if, I mean, it's a funny notion for you as,
link |
00:58:25.440
I don't know if you consider, yeah.
link |
00:58:27.360
I mean, one way to ask it,
link |
00:58:28.400
do you consider yourself a roboticist or a psychologist?
link |
00:58:32.000
Oh, I consider myself a roboticist
link |
00:58:33.600
that plays the part of a psychologist.
link |
00:58:36.200
But if you were to look at yourself sort of,
link |
00:58:39.760
you know, 20, 30 years from now,
link |
00:58:42.360
do you see yourself more and more wearing the psychology hat?
link |
00:58:47.200
Sort of another way to put it is,
link |
00:58:49.000
are the hard problems in human robot interactions,
link |
00:58:51.600
fundamentally psychology,
link |
00:58:53.600
or is it still robotics,
link |
00:58:55.800
the perception manipulation,
link |
00:58:57.320
planning all that kind of stuff?
link |
00:58:59.480
It's actually neither.
link |
00:59:01.680
The hardest part is the adaptation and the interaction.
link |
00:59:06.120
So it's the interface, it's the learning.
link |
00:59:08.880
And so if I think of,
link |
00:59:11.600
like I've become much more of a roboticist slash AI person
link |
00:59:17.200
than when I, like originally, again,
link |
00:59:19.040
I was about the bionics.
link |
00:59:20.160
I was electrical engineer, I was control theory, right?
link |
00:59:23.400
Like, and then I started realizing
link |
00:59:25.560
that my algorithms needed like human data, right?
link |
00:59:30.600
And so then I was like, okay, what is this human thing?
link |
00:59:32.520
Right, how do I incorporate human data?
link |
00:59:34.360
And then I realized that human perception had,
link |
00:59:38.440
like there was a lot in terms of how we perceive the world.
link |
00:59:41.040
And so trying to figure out,
link |
00:59:41.960
how do I model human perception from my,
link |
00:59:44.440
and so I became a HRI person,
link |
00:59:47.600
human robot interaction person,
link |
00:59:49.360
from being a control theory and realizing
link |
00:59:51.800
that humans actually offered quite a bit.
link |
00:59:55.240
And then when you do that,
link |
00:59:56.080
you become more of an artificial intelligence AI.
link |
00:59:59.200
And so I see myself evolving more in this AI world
link |
01:00:05.720
under the lens of robotics
link |
01:00:09.560
having hardware interacting with people.
link |
01:00:12.120
So you're a world class expert researcher in robotics
link |
01:00:17.880
and yet others, there's a few, it's a small,
link |
01:00:21.880
but fierce community of people,
link |
01:00:24.160
but most of them don't take the journey
link |
01:00:26.600
into the area of HRI, into the human.
link |
01:00:29.440
So why did you venture into the interaction with humans?
link |
01:00:34.440
It seems like a really hard problem.
link |
01:00:36.880
It's a hard problem and it's very risky as an academic.
link |
01:00:41.080
And I knew that when I started down that journey
link |
01:00:46.200
that it was very risky as an academic in this world
link |
01:00:50.600
that was nuanced, it was just developing.
link |
01:00:53.440
We didn't even have a conference, right?
link |
01:00:55.200
At the time.
link |
01:00:56.720
Because it was the interesting problems.
link |
01:01:00.080
That was what drove me.
link |
01:01:01.520
It was the fact that I looked at what interests me
link |
01:01:06.880
in terms of the application space and the problems.
link |
01:01:10.360
And that pushed me into trying to figure out
link |
01:01:14.880
what people were and what humans were
link |
01:01:16.840
and how to adapt to them.
link |
01:01:19.000
If those problems weren't so interesting,
link |
01:01:22.160
I'd probably still be sending rovers to glaciers, right?
link |
01:01:26.320
But the problems were interesting.
link |
01:01:28.040
And the other thing was that they were hard, right?
link |
01:01:30.600
So it's, I like having to go into a room
link |
01:01:34.560
and being like, I don't know what to do.
link |
01:01:37.000
And then going back and saying, okay,
link |
01:01:38.280
I'm gonna figure this out.
link |
01:01:39.800
I do not, I'm not driven when I go in like,
link |
01:01:42.320
oh, there are no surprises.
link |
01:01:44.040
Like I don't find that satisfying.
link |
01:01:47.320
If that was the case,
link |
01:01:48.160
I'd go someplace and make a lot more money, right?
link |
01:01:51.040
I think I stay in academic and choose to do this
link |
01:01:55.000
because I can go into a room and I'm like, that's hard.
link |
01:01:58.280
Yeah, I think just from my perspective,
link |
01:02:01.760
maybe you can correct me on it,
link |
01:02:03.240
but if I just look at the field of AI broadly,
link |
01:02:06.720
it seems that human robot interaction has the most,
link |
01:02:12.040
one of the most number of open problems.
link |
01:02:16.560
Like people, especially relative
link |
01:02:18.920
to how many people are willing to acknowledge that there are.
link |
01:02:23.440
This is because most people are just afraid of the humans,
link |
01:02:26.160
so they don't even acknowledge how many open problems there are,
link |
01:02:28.200
but it's in terms of difficult problems
link |
01:02:30.400
to solve exciting spaces,
link |
01:02:32.360
it seems to be incredible for that.
link |
01:02:35.800
It is, and it's exciting.
link |
01:02:38.680
You've mentioned trust before.
link |
01:02:40.000
What role does trust from interacting with autopilot
link |
01:02:46.840
to the medical context,
link |
01:02:48.440
what role does trust play in the human robot interaction?
link |
01:02:51.320
So some of the things I study in this domain
link |
01:02:53.920
is not just trust, but it really is over trust.
link |
01:02:56.920
How do you think about over trust?
link |
01:02:58.160
Like first of all, what is trust and what is over trust?
link |
01:03:03.360
Basically, the way I look at it is
link |
01:03:05.840
trust is not what you click on a survey,
link |
01:03:08.080
trust is about your behavior.
link |
01:03:09.600
So if you interact with the technology
link |
01:03:13.520
based on the decision or the actions of the technology
link |
01:03:17.320
as if you trust that decision, then you're trusting, right?
link |
01:03:22.400
And even in my group, we've done surveys
link |
01:03:25.600
that ask, do you trust robots?
link |
01:03:28.280
Of course not.
link |
01:03:29.120
Would you follow this robot in a burning building?
link |
01:03:31.680
Of course not, right?
link |
01:03:32.960
And then you look at their actions
link |
01:03:34.480
and you're like, clearly your behavior
link |
01:03:37.280
does not match what you think, right?
link |
01:03:39.680
Or what you think you would like to think, right?
link |
01:03:42.040
And so I'm really concerned about the behavior
link |
01:03:44.080
because that's really at the end of the day
link |
01:03:45.840
when you're in the world,
link |
01:03:47.360
that's what will impact others around you.
link |
01:03:50.520
It's not whether before you went onto the street,
link |
01:03:52.960
you clicked on like, I don't trust self driving cars.
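A minimal sketch of treating trust as behavior rather than a survey answer: one simple proxy is the fraction of the system's recommendations a person actually follows. The interaction-log format and field names are illustrative assumptions.

```python
# Behavioral trust proxy: how often does the human actually follow
# the system's recommendation, regardless of what they say on a survey?
def behavioral_trust(interactions):
    """interactions: list of dicts with 'recommended' and 'human_action' keys."""
    if not interactions:
        return None
    followed = sum(1 for i in interactions if i["human_action"] == i["recommended"])
    return followed / len(interactions)

log = [
    {"recommended": "brake", "human_action": "brake"},
    {"recommended": "turn_left", "human_action": "turn_left"},
    {"recommended": "stop", "human_action": "keep_going"},
]
print(behavioral_trust(log))  # 0.666..., whatever the survey said
```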
link |
01:03:55.640
Yeah, that from an outsider perspective,
link |
01:03:58.680
it's always frustrating to me.
link |
01:04:00.600
Well, I read a lot.
link |
01:04:01.480
So I'm an insider in a certain philosophical sense.
link |
01:04:06.080
It's frustrating to me how often trust is used in surveys
link |
01:04:10.720
and how people make claims
link |
01:04:14.440
out of any kind of finding they make
link |
01:04:16.240
from somebody clicking on an answer.
link |
01:04:18.720
Yeah, trust is behavior, just,
link |
01:04:23.760
you said it beautifully.
link |
01:04:24.640
I mean, action, your own behavior is what trust is.
link |
01:04:28.120
I mean, that everything else is not even close.
link |
01:04:30.800
It's almost like absurd comedic poetry
link |
01:04:36.120
that you weave around your actual behavior.
link |
01:04:38.560
So some people can say they trust,
link |
01:04:41.880
you know, I trust my wife, husband or not, whatever,
link |
01:04:46.080
but the actions is what speaks volumes.
link |
01:04:48.040
If you bug their car, you probably don't trust them.
link |
01:04:52.240
I trust them, I'm just making sure.
link |
01:04:53.840
No, no, that's, yeah.
link |
01:04:55.600
Like even if you think about cars,
link |
01:04:57.280
I think it's a beautiful case.
link |
01:04:58.600
I came here at some point,
link |
01:05:00.840
I'm sure on either Uber or Lyft, right?
link |
01:05:03.600
I remember when it first came out, right?
link |
01:05:06.040
I bet if they had had a survey,
link |
01:05:08.040
would you get in the car with a stranger and pay them?
link |
01:05:11.440
Yes.
link |
01:05:12.800
How many people do you think would have said, like, really?
link |
01:05:16.440
You know, wait, even worse,
link |
01:05:17.760
would you get in the car with a stranger
link |
01:05:19.840
at 1 a.m. in the morning
link |
01:05:21.960
to have them drop you home as a single female?
link |
01:05:24.800
Yeah.
link |
01:05:25.640
Like how many people would say, that's stupid?
link |
01:05:29.320
Yeah.
link |
01:05:30.160
And now look at where we are.
link |
01:05:31.600
I mean, people put kids, right?
link |
01:05:34.000
Like, oh yeah, my child has to go to school
link |
01:05:37.720
and I, yeah, I'm gonna put my kid in this car
link |
01:05:40.600
with a stranger.
link |
01:05:42.360
I mean, it's just fascinating how,
link |
01:05:45.280
like, what we think we think is not necessarily
link |
01:05:48.360
matching our behavior.
link |
01:05:49.720
Yeah, and certainly with robots, with autonomous vehicles,
link |
01:05:52.360
and all the kinds of robots you work with,
link |
01:05:54.720
that's, it's, yeah, it's the way you answer it,
link |
01:06:00.400
especially if you've never interacted
link |
01:06:01.880
with that robot before.
link |
01:06:04.400
If you haven't had the experience,
link |
01:06:05.680
you being able to respond correctly
link |
01:06:07.440
on a survey is impossible.
link |
01:06:09.640
But what role does trust play in the interaction,
link |
01:06:13.400
do you think?
link |
01:06:14.280
Like, is it good to, is it good to trust a robot?
link |
01:06:19.480
What does over trust mean?
link |
01:06:21.680
Or is it, is it good to kind of how you feel
link |
01:06:24.040
about autopilot currently, which is like,
link |
01:06:26.560
from a robotics perspective, is like,
link |
01:06:29.520
still very cautious?
link |
01:06:31.520
Yeah, so this is still an open area of research.
link |
01:06:34.960
But basically what I would like in a perfect world
link |
01:06:40.720
is that people trust the technology
link |
01:06:43.240
when it's working 100%,
link |
01:06:44.920
and people will be hypersensitive
link |
01:06:47.280
and identify when it's not.
link |
01:06:49.080
But of course we're not there.
link |
01:06:51.000
That's the ideal world.
link |
01:06:53.640
And, but we find is that people swing, right?
link |
01:06:56.480
They tend to swing, which means that if my first,
link |
01:07:01.320
and like, we have some papers,
link |
01:07:02.920
like first impressions is everything, right?
link |
01:07:05.280
If my first instance with technology with robotics
link |
01:07:08.520
is positive, it mitigates any risk,
link |
01:07:12.720
it correlates with like best outcomes.
link |
01:07:16.880
It means that I'm more likely to either not see it
link |
01:07:21.480
when it makes some mistakes or faults,
link |
01:07:24.240
or I'm more likely to forgive it.
link |
01:07:28.680
And so this is a problem
link |
01:07:30.360
because technology is not 100% accurate, right?
link |
01:07:32.640
It's not 100% accurate, although it may be perfect.
link |
01:07:35.080
How do you get that first moment right, do you think?
link |
01:07:37.680
There's also an education about the capabilities
link |
01:07:40.720
and limitations of the system.
link |
01:07:42.480
Do you have a sense of how you educate people correctly
link |
01:07:45.720
in that first interaction?
link |
01:07:47.120
Again, this is an open ended problem.
link |
01:07:50.240
So one of the study that actually has given me some hope
link |
01:07:55.000
that I was trying to figure out how to put in robotics.
link |
01:07:57.640
So there was a research study
link |
01:08:01.280
that it showed for medical AI systems,
link |
01:08:03.480
giving information to radiologists about, you know,
link |
01:08:07.840
here you need to look at these areas on the X ray.
link |
01:08:14.320
What they found was that when the system provided one choice,
link |
01:08:20.560
there was this aspect of either no trust or over trust, right?
link |
01:08:26.880
Like I'm not, I don't believe it at all,
link |
01:08:29.840
or a yes, yes, yes, yes, and they would miss things, right?
link |
01:08:34.840
Instead, when the system gave them multiple choices,
link |
01:08:38.840
like here are the three, even if it knew, like, you know,
link |
01:08:41.640
it had estimated that the top area you need to look at
link |
01:08:44.280
was some place on the X ray.
link |
01:08:48.280
If it gave like one plus others,
link |
01:08:52.240
the trust was maintained
link |
01:08:55.840
and the accuracy of the entire population increased.
link |
01:09:00.840
Right? So basically it was a, you're still trusting the system,
link |
01:09:04.840
but you're also putting in a little bit of like
link |
01:09:06.840
your human expertise,
link |
01:09:08.840
like your human decision processing into the equation.
link |
01:09:12.840
So it helps to mitigate that over trust risk.
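A minimal sketch of the "multiple choices" idea from that study: instead of returning only the single highest-scoring region, the assistive system surfaces the top few candidates with their scores so the expert's own judgment stays in the loop. Region names and scores are made up for illustration.

```python
# Show top-k suggestions with confidences rather than one argmax answer,
# keeping the radiologist's decision process part of the equation.
def top_k_suggestions(region_scores, k=3):
    """region_scores: dict mapping region name -> model confidence."""
    ranked = sorted(region_scores.items(), key=lambda kv: kv[1], reverse=True)
    return ranked[:k]

scores = {"upper_left": 0.62, "lower_right": 0.21, "center": 0.11, "upper_right": 0.06}
for region, score in top_k_suggestions(scores, k=3):
    print(f"look at {region} (confidence {score:.2f})")
```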
link |
01:09:15.840
Yeah. So there's a fascinating balance to strike.
link |
01:09:18.840
Haven't figured it out yet, again.
link |
01:09:20.840
It's an exciting open area of research. Exactly.
link |
01:09:23.840
So what are some exciting applications
link |
01:09:25.840
of human robot interaction?
link |
01:09:27.840
You started a company, maybe you can talk about
link |
01:09:30.840
the exciting efforts there,
link |
01:09:32.840
but in general also what other space
link |
01:09:35.840
can robots interact with humans and help?
link |
01:09:38.840
Yeah. So besides healthcare, because, you know,
link |
01:09:40.840
that's my bias lens.
link |
01:09:41.840
My other bias lens is education.
link |
01:09:44.840
I think that, well, one, we definitely, we,
link |
01:09:49.840
in the US, you know, we're doing okay with teachers,
link |
01:09:52.840
but there's a lot of school districts
link |
01:09:54.840
that don't have enough teachers.
link |
01:09:56.840
If you think about the teacher student ratio
link |
01:10:00.840
for at least public education in some districts,
link |
01:10:04.840
it's crazy.
link |
01:10:05.840
It's like, how can you have learning in that classroom?
link |
01:10:08.840
Right?
link |
01:10:09.840
Because you just don't have the human capital.
link |
01:10:11.840
And so if you think about robotics,
link |
01:10:14.840
bringing that in to classrooms,
link |
01:10:17.840
as well as the after school space,
link |
01:10:19.840
where they offset some of this lack of resources
link |
01:10:23.840
in certain communities, I think that's a good place.
link |
01:10:27.840
And then turning on the other end is using these systems then
link |
01:10:31.840
for workforce retraining and dealing with some of the things
link |
01:10:37.840
that are going to come out later on of job loss,
link |
01:10:41.840
like thinking about robots and AI systems
link |
01:10:44.840
for retraining and workforce development.
link |
01:10:46.840
I think those are exciting areas that can be pushed even more,
link |
01:10:51.840
and it would have a huge, huge impact.
link |
01:10:54.840
What would you say are some of the open problems in education?
link |
01:10:59.840
Sort of, it's exciting.
link |
01:11:01.840
So young kids and the older folks or just folks of all ages
link |
01:11:08.840
who need to be retrained,
link |
01:11:10.840
who need to sort of open themselves up
link |
01:11:12.840
to a whole other area of work.
link |
01:11:15.840
What are the problems to be solved there?
link |
01:11:18.840
How do you think robots can help?
link |
01:11:21.840
We have the engagement aspect, right?
link |
01:11:23.840
So we can figure out the engagement.
link |
01:11:25.840
What do you mean by engagement?
link |
01:11:27.840
So identifying whether a person is focused
link |
01:11:33.840
is something that we can figure out.
link |
01:11:37.840
What we can figure out,
link |
01:11:39.840
and there's some positive results in this,
link |
01:11:43.840
is that personalized adaptation based on any concepts, right?
link |
01:11:48.840
So imagine I think about I have an agent
link |
01:11:53.840
and I'm working with a kid learning, I don't know, Algebra 2.
link |
01:12:00.840
Can that same agent then switch and teach
link |
01:12:04.840
some type of new coding skill to a displaced mechanic?
link |
01:12:10.840
What does that actually look like?
link |
01:12:13.840
Hardware might be the same,
link |
01:12:16.840
content is different,
link |
01:12:18.840
two different target demographics of engagement.
link |
01:12:21.840
How do you do that?
link |
01:12:23.840
How important do you think personalization
link |
01:12:25.840
is in human robot interaction?
link |
01:12:27.840
Not just a mechanic or student,
link |
01:12:31.840
but literally to the individual human being?
link |
01:12:34.840
I think personalization is really important,
link |
01:12:36.840
but a caveat is that I think we'd be okay
link |
01:12:41.840
if we can personalize to the group, right?
link |
01:12:43.840
And so if I can label you along some certain dimensions,
link |
01:12:51.840
then even though it may not be you specifically,
link |
01:12:55.840
I can put you in this group.
link |
01:12:57.840
So for that sample size, this is how they best learn,
link |
01:12:59.840
this is how they best engage.
link |
01:13:01.840
Even at that level, it's really important.
link |
01:13:05.840
And it's because, I mean, it's one of the reasons
link |
01:13:08.840
why educating in large classrooms is so hard, right?
link |
01:13:12.840
You teach to the median,
link |
01:13:14.840
but there's these individuals that are struggling,
link |
01:13:18.840
and then you have highly intelligent individuals,
link |
01:13:21.840
and those are the ones that are usually kind of left out.
link |
01:13:25.840
So highly intelligent individuals may be disruptive,
link |
01:13:27.840
and those who are struggling might be disruptive
link |
01:13:29.840
because they're both bored.
link |
01:13:31.840
And if you narrow the definition of the group
link |
01:13:34.840
or in the size of the group enough,
link |
01:13:36.840
you'll be able to address their individual needs,
link |
01:13:40.840
but really the most important group needs, right?
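A minimal sketch of personalizing to a group rather than to each individual: bucket learners by a couple of coarse dimensions, then pick a teaching strategy per bucket. The features, thresholds, and strategies are illustrative assumptions, not a validated model of learning.

```python
# Group-level personalization: assign a learner to a coarse bucket,
# then choose the strategy tuned for that bucket.
def assign_group(learner):
    pace = "fast" if learner["quiz_accuracy"] > 0.8 else "slow"
    style = "visual" if learner["video_minutes"] > learner["text_minutes"] else "text"
    return (pace, style)

STRATEGY = {
    ("fast", "visual"): "skip basics, use animated walkthroughs",
    ("fast", "text"):   "skip basics, use worked written examples",
    ("slow", "visual"): "review basics with short videos",
    ("slow", "text"):   "review basics with step-by-step notes",
}

learner = {"quiz_accuracy": 0.65, "video_minutes": 40, "text_minutes": 10}
print(STRATEGY[assign_group(learner)])  # "review basics with short videos"
```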
link |
01:13:44.840
And that's kind of what a lot of successful
link |
01:13:46.840
recommender systems do, Spotify and so on.
link |
01:13:49.840
It's sad to believe, but as a music listener,
link |
01:13:52.840
probably in some sort of large group,
link |
01:13:54.840
it's very sadly predictable.
link |
01:13:56.840
You have been labeled.
link |
01:13:58.840
Yeah, I've been labeled and successfully so,
link |
01:14:01.840
because they're able to recommend stuff.
link |
01:14:04.840
Yeah, but applying that to education, right?
link |
01:14:07.840
There's no reason why it can't be done.
link |
01:14:09.840
Do you have a hope for our education system?
link |
01:14:12.840
I have more hope for workforce development,
link |
01:14:15.840
and that's because I'm seeing investments.
link |
01:14:19.840
Even if you look at VC investments in education,
link |
01:14:22.840
the majority of it has lately been going to workforce retraining,
link |
01:14:27.840
right? And so I think that government investments
link |
01:14:31.840
is increasing. There's like a clamor.
link |
01:14:33.840
And some of this based on fear, right?
link |
01:14:35.840
Like AI is going to come and take over all these jobs.
link |
01:14:37.840
What are we going to do when all these taxes
link |
01:14:40.840
aren't coming in from our citizens?
link |
01:14:43.840
And so I think I'm more hopeful for that.
link |
01:14:46.840
Not so hopeful for early education,
link |
01:14:50.840
because it's this, it's still a who's going to pay for it,
link |
01:14:55.840
and you won't see the results for like 16 to 18 years.
link |
01:15:02.840
It's hard for people to wrap their heads around that.
link |
01:15:06.840
But on the retraining part, what are your thoughts?
link |
01:15:09.840
There's a candidate, Andrew Yang, running for president,
link |
01:15:13.840
saying that sort of AI automation, robots,
link |
01:15:18.840
universal basic income.
link |
01:15:20.840
Universal basic income in order to support us
link |
01:15:23.840
as we kind of automation takes people's jobs
link |
01:15:26.840
and allows you to explore and find other means.
link |
01:15:29.840
Like, do you have a concern of society transforming effects
link |
01:15:36.840
of automation and robots and so on?
link |
01:15:39.840
I do. I do know that AI robotics will displace workers.
link |
01:15:45.840
Like, we do know that.
link |
01:15:47.840
But there'll be other workers, and new jobs will be defined.
link |
01:15:54.840
What I worry about is, that's not what I worry about.
link |
01:15:56.840
Like, will all the jobs go away?
link |
01:15:58.840
What I worry about is the type of jobs that will come out, right?
link |
01:16:01.840
Like, people who graduate from Georgia Tech will be okay, right?
link |
01:16:05.840
We give them the skills, they will adapt,
link |
01:16:07.840
even if their current job goes away.
link |
01:16:09.840
I do worry about those that don't have that quality of an education, right?
link |
01:16:14.840
Will they have the ability, the background to adapt to those new jobs?
link |
01:16:20.840
That, I don't know. That I worry about.
link |
01:16:23.840
Which will create even more polarization in our society, internationally,
link |
01:16:29.840
and everywhere. I worry about that.
link |
01:16:31.840
I also worry about not having equal access to all these wonderful things
link |
01:16:37.840
that AI can do and robotics can do.
link |
01:16:40.840
I worry about that.
link |
01:16:42.840
People like me from Georgia Tech, from say MIT, will be okay, right?
link |
01:16:49.840
But that's such a small part of the population
link |
01:16:52.840
that we need to think much more globally of having access to the beautiful things,
link |
01:16:57.840
whether it's AI in healthcare, AI in education, AI in politics, right?
link |
01:17:04.840
I worry about that.
link |
01:17:05.840
And that's part of the thing that you were talking about
link |
01:17:07.840
is people that build the technology have to be thinking about ethics, have to be thinking about access
link |
01:17:13.840
and all those things, and not just a small subset.
link |
01:17:17.840
Let me ask some philosophical, slightly romantic questions.
link |
01:17:21.840
People that listen to this will be like, here he goes again.
link |
01:17:25.840
Okay. Do you think one day we'll build an AI system
link |
01:17:31.840
that a person can fall in love with and it would love them back?
link |
01:17:37.840
Like in a movie, Her, for example.
link |
01:17:39.840
Oh, yeah.
link |
01:17:40.840
Although she kind of didn't fall in love with him.
link |
01:17:43.840
She fell in love with like a million other people, something like that.
link |
01:17:46.840
You're the jealous type, I see.
link |
01:17:48.840
We humans are the jealous type.
link |
01:17:50.840
Yes.
link |
01:17:51.840
So I do believe that we can design systems where people would fall in love
link |
01:17:57.840
with their robot, with their AI partner.
link |
01:18:02.840
That I do believe.
link |
01:18:04.840
Because it's actually, and I don't like to use the word manipulate,
link |
01:18:08.840
but as we see, there are certain individuals that can be manipulated
link |
01:18:12.840
if you understand the cognitive science about it, right?
link |
01:18:15.840
Right.
link |
01:18:16.840
So I mean, if you could think of all close relationship and love in general
link |
01:18:20.840
as a kind of mutual manipulation, that dance, the human dance.
link |
01:18:26.840
I mean, manipulation is a negative connotation.
link |
01:18:29.840
And that's why I don't like to use that word particularly.
link |
01:18:32.840
I guess another way to phrase it is you're getting at it,
link |
01:18:34.840
it could be algorithmized or something.
link |
01:18:37.840
The relationship building part can be.
link |
01:18:39.840
Yeah.
link |
01:18:40.840
I mean, just think about it.
link |
01:18:41.840
We have, and I don't use dating sites, but from what I heard,
link |
01:18:46.840
there are some individuals that have been dating and have never seen each other, right?
link |
01:18:52.840
In fact, there's a show, I think, that tries to weed out fake people.
link |
01:18:57.840
There's a show that comes out, right?
link |
01:18:59.840
Because people start faking.
link |
01:19:01.840
What's the difference of that person on the other end being an AI agent, right?
link |
01:19:07.840
And having a communication and you building a relationship remotely,
link |
01:19:11.840
there's no reason why that can't happen.
link |
01:19:15.840
In terms of human robot interaction,
link |
01:19:17.840
what role, you've kind of mentioned with data, emotion being,
link |
01:19:22.840
can be problematic if not implemented well, I suppose.
link |
01:19:25.840
What role does emotion and some other human like things,
link |
01:19:29.840
the imperfect things come into play here for good human robot interaction
link |
01:19:34.840
and something like love?
link |
01:19:36.840
Yeah.
link |
01:19:37.840
So in this case, and you had asked, can an AI agent love a human back?
link |
01:19:42.840
I think they can emulate love back, right?
link |
01:19:46.840
And so what does that actually mean?
link |
01:19:48.840
It just means that if you think about their programming,
link |
01:19:51.840
they might put the other person's needs in front of theirs in certain situations, right?
link |
01:19:57.840
Think about it as return on investment.
link |
01:19:59.840
Like, as part of my return on investment equation,
link |
01:20:02.840
that person's happiness has some type of algorithmic weighting to it.
link |
01:20:07.840
And the reason why is because I care about them, right?
link |
01:20:10.840
That's the only reason, right?
link |
01:20:12.840
But if I care about them and I show that, then my final objective function
link |
01:20:17.840
is length of time of the engagement, right?
link |
01:20:19.840
So you can think of how to do this actually quite easily.
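A minimal sketch of the kind of objective described here: the agent scores candidate actions by a weighted mix of the person's predicted happiness and the expected effect on length of engagement. The weights and candidate actions are illustrative assumptions, not anyone's actual system.

```python
# Emulated "care": weight the other person's happiness in the agent's
# own action selection, with long-run engagement as a second term.
def choose_action(candidates, w_happiness=0.7, w_engagement=0.3):
    """candidates: list of dicts with predicted 'happiness' and 'engagement' effects."""
    def score(a):
        return w_happiness * a["happiness"] + w_engagement * a["engagement"]
    return max(candidates, key=score)

actions = [
    {"name": "suggest_break", "happiness": 0.9, "engagement": 0.2},
    {"name": "keep_chatting", "happiness": 0.4, "engagement": 0.8},
]
print(choose_action(actions)["name"])  # picks the higher weighted score
```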
link |
01:20:23.840
But that's not love?
link |
01:20:26.840
Well, so that's the thing.
link |
01:20:28.840
I think it emulates love because we don't have a classical definition of love.
link |
01:20:37.840
Right.
link |
01:20:38.840
And we don't have the ability to look into each other's minds to see the algorithm.
link |
01:20:44.840
And I guess what I'm getting at is, is it possible that,
link |
01:20:49.840
especially if that's learned,
link |
01:20:50.840
especially if there's some mystery and black box nature to the system, how is that, you know,
link |
01:20:57.840
How is it any different?
link |
01:20:58.840
How is it any different?
link |
01:20:59.840
And in terms of sort of, if the system says, I'm conscious, I'm afraid of death,
link |
01:21:04.840
and it does indicate that it loves you.
link |
01:21:10.840
Another way to sort of phrase it, I'd be curious to see what you think.
link |
01:21:13.840
Do you think there'll be a time when robots should have rights?
link |
01:21:19.840
You've kind of phrased that in a very roboticist way.
link |
01:21:22.840
It's just a really good way.
link |
01:21:24.840
But saying, okay, well, there's an objective function,
link |
01:21:27.840
and I could see how you can create a compelling human robot interaction experience
link |
01:21:32.840
that makes you believe that the robot cares for your needs
link |
01:21:35.840
and even something like loves you.
link |
01:21:38.840
But what if the robot says, please don't turn me off?
link |
01:21:43.840
What if the robot starts making you feel like there's an entity, a being, a soul there?
link |
01:21:48.840
Right.
link |
01:21:49.840
Do you think there'll be a future?
link |
01:21:52.840
Hopefully you won't laugh too much at this,
link |
01:21:55.840
but where they do ask for rights.
link |
01:21:59.840
So I can see a future if we don't address it in the near term,
link |
01:22:07.840
where these agents, as they adapt and learn, could say, hey,
link |
01:22:12.840
this should be something that's fundamental.
link |
01:22:15.840
I hopefully think that we would address it before it gets to that point.
link |
01:22:19.840
You think that's a bad future?
link |
01:22:21.840
Is that a negative thing, where they ask for rights or are being discriminated against?
link |
01:22:26.840
I guess it depends on what role have they attained at that point.
link |
01:22:33.840
And so if I think about now...
link |
01:22:35.840
Careful what you say, because the robots 50 years from now will be listening to this,
link |
01:22:39.840
and you'll be on TV saying, this is what roboticists used to believe.
link |
01:22:43.840
Well, right?
link |
01:22:44.840
And so this is my, and as I said, I have a biased lens,
link |
01:22:47.840
and my robot friends will understand that.
link |
01:22:50.840
But so if you think about it,
link |
01:22:53.840
and I actually put this in kind of the...
link |
01:22:57.840
As a roboticist, you don't necessarily think of robots as human with human rights,
link |
01:23:03.840
but you could think of them either in the category of property,
link |
01:23:08.840
or you could think of them in the category of animals, right?
link |
01:23:13.840
And so both of those have different types of rights.
link |
01:23:17.840
So animals have their own rights as a living being,
link |
01:23:22.840
but they can't vote, right, they can be euthanized.
link |
01:23:27.840
But as humans, if we abuse them, we go to jail, right?
link |
01:23:32.840
So they do have some rights that protect them,
link |
01:23:35.840
but don't give them the rights of citizenship.
link |
01:23:39.840
And then if you think about property,
link |
01:23:41.840
property, the rights are associated with the person, right?
link |
01:23:45.840
So if someone vandalizes your property,
link |
01:23:49.840
or steals your property, like there are some rights,
link |
01:23:53.840
but it's associated with the person who owns that.
link |
01:23:57.840
If you think about it, back in the day,
link |
01:24:01.840
and remember we talked about how society has changed,
link |
01:24:05.840
women were property, right?
link |
01:24:08.840
They were not thought of as having rights,
link |
01:24:11.840
they were thought of as property of...
link |
01:24:15.840
Assaulting a woman meant assaulting the property of somebody else.
link |
01:24:19.840
Exactly.
link |
01:24:20.840
And so what I envision is that we will establish
link |
01:24:24.840
some type of norm at some point, but that it might evolve, right?
link |
01:24:29.840
If you look at women's rights now,
link |
01:24:31.840
there are still some countries that don't have,
link |
01:24:35.840
and the rest of the world is like, why?
link |
01:24:37.840
That makes no sense, right?
link |
01:24:39.840
And so I do see a world where we do establish some type of grounding.
link |
01:24:44.840
It might be based on property rights.
link |
01:24:46.840
It might be based on animal rights.
link |
01:24:48.840
And if it evolves that way,
link |
01:24:51.840
I think we will have this conversation at that time,
link |
01:24:55.840
because that's the way our society traditionally has evolved.
link |
01:25:00.840
Beautifully put.
link |
01:25:02.840
Just out of curiosity,
link |
01:25:04.840
Anki, Jibo, Mayfield Robotics,
link |
01:25:07.840
with the robot Kuri, SciFile Works,
link |
01:25:09.840
Rethink Robotics, were all these amazing robotics companies
link |
01:25:12.840
created by incredible roboticists,
link |
01:25:15.840
and they all went out of business recently.
link |
01:25:20.840
Why do you think they didn't last long?
link |
01:25:23.840
Why is it so hard to run a robotics company,
link |
01:25:26.840
especially one like these,
link |
01:25:29.840
which are fundamentally HRI,
link |
01:25:32.840
human robot interaction robots?
link |
01:25:35.840
Yeah, each one has a story.
link |
01:25:38.840
Only one of them I don't understand.
link |
01:25:40.840
And that was Anki.
link |
01:25:42.840
That's actually the only one I don't understand.
link |
01:25:44.840
I don't understand it either.
link |
01:25:46.840
I mean, from the outside,
link |
01:25:48.840
I've looked at their sheets,
link |
01:25:50.840
I've looked at the data that's...
link |
01:25:52.840
Oh, you mean like business wise?
link |
01:25:54.840
Yeah, and I look at that data,
link |
01:25:58.840
and I'm like, they seem to have product market fit.
link |
01:26:02.840
So that's the only one I don't understand.
link |
01:26:05.840
The rest of it was product market fit.
link |
01:26:07.840
What's product market fit?
link |
01:26:10.840
How do you think about it?
link |
01:26:12.840
Yeah, so although Rethink Robotics was getting there,
link |
01:26:15.840
but I think it's just the timing,
link |
01:26:17.840
their clock just timed out.
link |
01:26:20.840
I think if they had been given a couple more years,
link |
01:26:22.840
they would have been okay.
link |
01:26:24.840
But the other ones were still fairly early
link |
01:26:28.840
by the time they got into the market.
link |
01:26:30.840
And so product market fit is,
link |
01:26:32.840
I have a product that I want to sell at a certain price.
link |
01:26:36.840
Are there enough people out there, the market,
link |
01:26:39.840
that are willing to buy the product at that market price
link |
01:26:42.840
for me to be a functional, viable, profit bearing company?
link |
01:26:47.840
Right?
link |
01:26:48.840
So product market fit.
link |
01:26:50.840
If it costs you $1,000 and everyone wants it
link |
01:26:54.840
and only is willing to pay a dollar,
link |
01:26:56.840
you have no product market fit.
link |
01:26:58.840
Even if you could sell it for, you know,
link |
01:27:01.840
sell enough at a dollar, because you can't...
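A minimal back-of-the-envelope version of the product-market-fit question as defined above: are there enough buyers, at a price above unit cost, to cover the fixed costs? All numbers are illustrative assumptions, not figures from any real company.

```python
# Product-market-fit arithmetic: enough buyers x margin must exceed fixed costs.
def is_viable(market_size, fraction_willing, price, unit_cost, fixed_costs):
    buyers = market_size * fraction_willing
    profit = buyers * (price - unit_cost) - fixed_costs
    return profit > 0, profit

viable, profit = is_viable(
    market_size=1_000_000, fraction_willing=0.01,
    price=300.0, unit_cost=180.0, fixed_costs=900_000.0,
)
print(viable, profit)  # True 300000.0 under these assumed numbers
```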
link |
01:27:03.840
So how hard is it for robots?
link |
01:27:05.840
Maybe if you look at iRobot,
link |
01:27:07.840
the company that makes Roombas vacuum cleaners,
link |
01:27:10.840
can you comment on did they find the right product,
link |
01:27:13.840
a product market fit?
link |
01:27:15.840
Like are people willing to pay for robots?
link |
01:27:18.840
It's also another kind of question.
link |
01:27:20.840
So if you think about iRobot and their story, right?
link |
01:27:23.840
Like when they first, they had enough of a runway, right?
link |
01:27:28.840
When they first started, they weren't doing vacuum cleaners, right?
link |
01:27:31.840
They were contracts, primarily government contracts,
link |
01:27:36.840
designing robots.
link |
01:27:37.840
Military robots.
link |
01:27:38.840
Yeah, I mean, that's what they were.
link |
01:27:39.840
That's how they started, right?
link |
01:27:40.840
They still do a lot of incredible work there.
link |
01:27:42.840
But yeah, that was the initial thing
link |
01:27:44.840
that gave them enough funding to...
link |
01:27:46.840
To then try to...
link |
01:27:47.840
The vacuum cleaner is what I've been told
link |
01:27:50.840
was not like their first rendezvous
link |
01:27:53.840
in terms of designing a product, right?
link |
01:27:56.840
And so they were able to survive
link |
01:27:58.840
until they got to the point that they found a product,
link |
01:28:02.840
price, market fit, right?
link |
01:28:04.840
And even with, if you look at the Roomba,
link |
01:28:07.840
the price point now is different
link |
01:28:09.840
than when it was first released, right?
link |
01:28:11.840
It was an early adopter price,
link |
01:28:12.840
but they found enough people who were willing to fund it.
link |
01:28:15.840
And I mean, I forgot what their loss profile was
link |
01:28:19.840
for the first couple of years,
link |
01:28:21.840
but they became profitable in sufficient time
link |
01:28:24.840
that they didn't have to close their doors.
link |
01:28:27.840
So they found the right,
link |
01:28:29.840
there's still people willing to pay
link |
01:28:31.840
a large amount of money,
link |
01:28:32.840
sort of over $1,000 for a vacuum cleaner.
link |
01:28:35.840
Unfortunately for them,
link |
01:28:37.840
now that they've proved everything out,
link |
01:28:38.840
figured it all out, now there's competitors.
link |
01:28:40.840
Yeah, and so that's the next thing, right?
link |
01:28:43.840
The competition, and they have quite a number,
link |
01:28:46.840
even internationally,
link |
01:28:47.840
like there's some products out there,
link |
01:28:49.840
you can go to Europe and be like,
link |
01:28:52.840
oh, I didn't even know this one existed.
link |
01:28:54.840
So this is the thing though,
link |
01:28:56.840
like with any market,
link |
01:28:58.840
I would say this is not a bad time,
link |
01:29:02.840
although, you know, as a roboticist,
link |
01:29:04.840
it's kind of depressing,
link |
01:29:05.840
but I actually think about things like with,
link |
01:29:10.840
I would say that all of the companies
link |
01:29:12.840
that are now in the top five or six,
link |
01:29:15.840
they weren't the first to the stage, right?
link |
01:29:19.840
Like Google was not the first search engine,
link |
01:29:22.840
sorry, AltaVista, right?
link |
01:29:24.840
Facebook was not the first, sorry, MySpace, right?
link |
01:29:27.840
Like, think about it,
link |
01:29:28.840
they were not the first players.
link |
01:29:30.840
Those first players, like,
link |
01:29:32.840
they're not in the top five, 10 of Fortune 500 companies, right?
link |
01:29:38.840
They proved, they started to prove out the market,
link |
01:29:43.840
they started to get people interested,
link |
01:29:45.840
they started the buzz,
link |
01:29:47.840
but they didn't make it to that next level.
link |
01:29:49.840
But the second batch, right?
link |
01:29:51.840
The second batch, I think, might make it to the next level.
link |
01:29:56.840
When do you think the Facebook of...
link |
01:30:01.840
The Facebook of robotics.
link |
01:30:03.840
Sorry, I take that phrase back
link |
01:30:06.840
because people deeply, for some reason,
link |
01:30:08.840
well, I know why,
link |
01:30:09.840
but I think it's exaggerated, distrust Facebook
link |
01:30:12.840
because of the privacy concerns and so on.
link |
01:30:14.840
And with robotics,
link |
01:30:15.840
one of the things you have to make sure
link |
01:30:17.840
is all the things we talked about
link |
01:30:19.840
is to be transparent and have people deeply trust you
link |
01:30:22.840
to let a robot into their lives, into their home.
link |
01:30:25.840
But when do you think the second batch of robots...
link |
01:30:28.840
Is it five, 10 years, 20 years
link |
01:30:31.840
that we'll have robots in our homes
link |
01:30:34.840
and robots in our hearts?
link |
01:30:36.840
So if I think about...
link |
01:30:37.840
Because I try to follow the VC kind of space
link |
01:30:40.840
in terms of robotic investments.
link |
01:30:42.840
And right now,
link |
01:30:43.840
and I don't know if they're going to be successful,
link |
01:30:45.840
I don't know if this is the second batch,
link |
01:30:48.840
but there's only one batch that's focused on the first batch, right?
link |
01:30:52.840
And then there's all these self driving Xs, right?
link |
01:30:55.840
And so I don't know if they're a first batch of something
link |
01:30:58.840
or if, like, I don't know quite where they fit in,
link |
01:31:02.840
but there's a number of companies,
link |
01:31:04.840
the co-robot, I would call them co-robots,
link |
01:31:07.840
that are still getting VC investments.
link |
01:31:11.840
They, some of them have some of the flavor
link |
01:31:13.840
of, like, Rethink Robotics.
link |
01:31:15.840
Some of them have some of the flavor of, like, Kuri.
link |
01:31:17.840
What's a co-robot?
link |
01:31:19.840
So basically a robot and human working in the same space.
link |
01:31:25.840
So some of the companies are focused on manufacturing.
link |
01:31:29.840
So having a robot and human working together in a factory,
link |
01:31:35.840
some of these co-robots are robots
link |
01:31:38.840
and humans working in the home, working in clinics.
link |
01:31:41.840
Like there's different versions of these companies
link |
01:31:43.840
in terms of their products,
link |
01:31:44.840
but they're all, so Rethink Robotics would be, like,
link |
01:31:48.840
one of the first, at least well known,
link |
01:31:51.840
companies focused on this space.
link |
01:31:53.840
So I don't know if this is a second batch
link |
01:31:56.840
or if this is still part of the first batch,
link |
01:32:00.840
that I don't know.
link |
01:32:01.840
And then you have all these other companies
link |
01:32:03.840
in this self driving, you know, space.
link |
01:32:06.840
And I don't know if that's a first batch
link |
01:32:08.840
or, again, a second batch.
link |
01:32:10.840
Yeah.
link |
01:32:11.840
So there's a lot of mystery about this now.
link |
01:32:13.840
Of course, it's hard to say that this is the second batch
link |
01:32:15.840
until it proves out, right?
link |
01:32:17.840
Correct.
link |
01:32:18.840
Yeah, exactly.
link |
01:32:19.840
We need a unicorn.
link |
01:32:20.840
Yeah, exactly.
link |
01:32:22.840
Why do you think people are so afraid,
link |
01:32:26.840
at least in popular culture, of legged robots
link |
01:32:29.840
like the ones they work on at Boston Dynamics
link |
01:32:31.840
or just robotics in general,
link |
01:32:33.840
if you were to psychoanalyze that fear,
link |
01:32:35.840
what do you make of it?
link |
01:32:37.840
And should they be afraid?
link |
01:32:38.840
Sorry.
link |
01:32:39.840
So should people be afraid?
link |
01:32:40.840
I don't think people should be afraid.
link |
01:32:42.840
But with a caveat.
link |
01:32:44.840
I don't think people should be afraid
link |
01:32:46.840
given that most of us in this world
link |
01:32:50.840
understand that we need to change something, right?
link |
01:32:54.840
So given that.
link |
01:32:56.840
Now, if things don't change, be very afraid.
link |
01:33:00.840
What's the dimension of change that's needed?
link |
01:33:03.840
So changing, thinking about the ramifications,
link |
01:33:06.840
thinking about like the ethics,
link |
01:33:08.840
thinking about it. Like, the conversation is going on, right?
link |
01:33:11.840
It's no longer a...
link |
01:33:13.840
We're going to deploy it and forget that, you know,
link |
01:33:16.840
this is a car that can kill pedestrians
link |
01:33:19.840
that are walking across the street, right?
link |
01:33:21.840
We're not in that stage.
link |
01:33:22.840
We're putting these cars out on the roads.
link |
01:33:24.840
There are people out there.
link |
01:33:26.840
A car could be a weapon.
link |
01:33:28.840
People are aware now; the solutions aren't there yet.
link |
01:33:31.840
But people are thinking about this
link |
01:33:34.840
as we need to be ethically responsible
link |
01:33:37.840
as we send these systems out,
link |
01:33:39.840
robotics, medical, self driving.
link |
01:33:42.840
And military too.
link |
01:33:43.840
And military.
link |
01:33:44.840
Which is not as often talked about,
link |
01:33:46.840
but it's really where probably these robots
link |
01:33:49.840
will have a significant impact as well.
link |
01:33:51.840
Correct, correct.
link |
01:33:52.840
Right, making sure that they can think rationally,
link |
01:33:56.840
even having the conversations,
link |
01:33:58.840
who should pull the trigger, right?
link |
01:34:00.840
But overall, you're saying if we start to think
link |
01:34:02.840
more and more as a community about these ethical issues,
link |
01:34:04.840
people should not be afraid.
link |
01:34:06.840
Yeah, I don't think people should be afraid.
link |
01:34:08.840
I think that the return on investment,
link |
01:34:10.840
the impact, positive impact will outweigh
link |
01:34:13.840
any of the potentially negative impacts.
link |
01:34:16.840
Do you have worries of existential threats
link |
01:34:19.840
of robots or AI that some people kind of
link |
01:34:23.840
talk about and romanticize
link |
01:34:26.840
in the next decade, in the next few decades?
link |
01:34:29.840
No, I don't.
link |
01:34:30.840
Singularity would be an example.
link |
01:34:32.840
So my concept is that, so remember,
link |
01:34:35.840
robots, AI is designed by people.
link |
01:34:38.840
Yes.
link |
01:34:39.840
It has our values.
link |
01:34:40.840
And I always correlate this with
link |
01:34:42.840
a parent and a child, right?
link |
01:34:44.840
So think about it.
link |
01:34:45.840
As a parent, what do we want?
link |
01:34:46.840
We want our kids to have a better life than us.
link |
01:34:49.840
We want them to expand.
link |
01:34:51.840
We want them to experience the world.
link |
01:34:55.840
And then as we grow older, our kids think
link |
01:34:58.840
and know they're smarter and better
link |
01:35:00.840
and more intelligent and have better opportunities
link |
01:35:04.840
and they may even stop listening to us.
link |
01:35:07.840
They don't go out and then kill us, right?
link |
01:35:10.840
Think about it.
link |
01:35:11.840
It's because we instilled values in them.
link |
01:35:14.840
We instilled in them this whole aspect of community.
link |
01:35:17.840
And yes, even though you're maybe smarter
link |
01:35:19.840
and have more money and da, da, da,
link |
01:35:22.840
it's still about this love, caring relationship.
link |
01:35:26.840
And so that's what I believe.
link |
01:35:27.840
So even if we've created the Singularity
link |
01:35:30.840
and some archaic system back in 1980
link |
01:35:33.840
that suddenly evolves, the fact is it might say,
link |
01:35:36.840
I am smarter, I am sentient.
link |
01:35:39.840
These humans are really stupid.
link |
01:35:42.840
But I think it'll be like, yeah,
link |
01:35:45.840
but I just can't destroy them.
link |
01:35:47.840
Yeah, for sentimental value.
link |
01:35:49.840
Still just to come back
link |
01:35:51.840
for Thanksgiving dinner every once in a while.
link |
01:35:53.840
Exactly.
link |
01:35:54.840
That's so beautifully put.
link |
01:35:56.840
You've also said that The Matrix may be
link |
01:35:59.840
one of your favorite A.I. related movies.
link |
01:36:02.840
Can you elaborate why?
link |
01:36:04.840
Yeah, it is one of my favorite movies.
link |
01:36:07.840
And it's because it represents
link |
01:36:10.840
kind of all the things I think about.
link |
01:36:13.840
So there's a symbiotic relationship
link |
01:36:15.840
between robots and humans, right?
link |
01:36:19.840
That symbiotic relationship is that they don't destroy us.
link |
01:36:21.840
They enslave us, right?
link |
01:36:23.840
But think about it.
link |
01:36:27.840
Even though they enslaved us,
link |
01:36:29.840
they needed us to be happy, right?
link |
01:36:31.840
And in order to be happy,
link |
01:36:33.840
they had to create this cruddy world
link |
01:36:34.840
that they then had to live in, right?
link |
01:36:36.840
That's the whole premise.
link |
01:36:38.840
But then there were humans that had a choice, right?
link |
01:36:43.840
Like you had a choice to stay in this horrific,
link |
01:36:46.840
horrific world where it was your fantasized life
link |
01:36:50.840
with all of the amenities, perfection, but not accurate.
link |
01:36:53.840
Or you can choose to be on your own
link |
01:36:57.840
and have maybe no food for a couple of days,
link |
01:37:01.840
but you were totally autonomous.
link |
01:37:04.840
And so I think of that as...
link |
01:37:06.840
And that's why.
link |
01:37:07.840
So it's not necessarily us being enslaved,
link |
01:37:09.840
but I think about us having the symbiotic relationship.
link |
01:37:12.840
Robots and A.I., even if they become sentient,
link |
01:37:15.840
they're still part of our society,
link |
01:37:16.840
and they will suffer just as much as we.
link |
01:37:19.840
Just as us.
link |
01:37:20.840
And there will be some kind of equilibrium
link |
01:37:23.840
that we'll have to find, some symbiotic relationship.
link |
01:37:26.840
And then you have the ethicists, the robotics folks,
link |
01:37:28.840
that are like, no, this has got to stop.
link |
01:37:31.840
I will take the other pill in order to make a difference.
link |
01:37:35.840
So if you could hang out for a day with a robot,
link |
01:37:40.840
real or from science fiction, movies, books,
link |
01:37:43.840
safely and get to pick his or her, their brain,
link |
01:37:48.840
who would you pick?
link |
01:37:50.840
I got to say it's Data.
link |
01:37:54.840
Data.
link |
01:37:55.840
I was going to say Rosie,
link |
01:37:57.840
but I don't, I'm not really interested in her brain.
link |
01:38:00.840
I'm interested in Data's brain.
link |
01:38:02.840
Data, pre or post emotion chip?
link |
01:38:05.840
Pre.
link |
01:38:06.840
But don't you think it'd be a more interesting conversation,
link |
01:38:11.840
post emotion chip?
link |
01:38:12.840
Yeah, it would be drama.
link |
01:38:14.840
And I, you know, I'm human.
link |
01:38:16.840
I deal with drama all the time.
link |
01:38:20.840
But the reason why I want to pick Data's brain
link |
01:38:22.840
is because I could have a conversation with him
link |
01:38:26.840
and ask, for example, how can we fix this ethics problem?
link |
01:38:32.840
And he could go through like the rational thinking
link |
01:38:35.840
and through that, he could help me think through it as well.
link |
01:38:39.840
And so there's, like, these questions,
link |
01:38:41.840
fundamental questions I think I could ask him.
link |
01:38:43.840
That he would help me also learn from.
link |
01:38:46.840
And that fascinates me.
link |
01:38:49.840
I don't think there's a better place to end it.
link |
01:38:52.840
Ayanna, thank you so much for talking to us. It was an honor.
link |
01:38:54.840
Thank you.
link |
01:38:55.840
Thank you.
link |
01:38:56.840
This was fun.
link |
01:38:57.840
Thanks for listening to this conversation.
link |
01:38:59.840
And thank you to our presenting sponsor, Cash App.
link |
01:39:02.840
Download it.
link |
01:39:03.840
Use code Lex podcast.
link |
01:39:05.840
You'll get $10, and $10 will go to First, a STEM education nonprofit
link |
01:39:09.840
that inspires hundreds of thousands of young minds
link |
01:39:13.840
to become future leaders and innovators.
link |
01:39:16.840
If you enjoy this podcast, subscribe on YouTube,
link |
01:39:19.840
give it five stars on Apple podcast,
link |
01:39:21.840
follow on Spotify, support on Patreon,
link |
01:39:23.840
or simply connect with me on Twitter.
link |
01:39:26.840
And now let me leave you with some words of wisdom
link |
01:39:29.840
from Arthur C. Clarke.
link |
01:39:32.840
Whether we are based on carbon or on silicon
link |
01:39:35.840
makes no fundamental difference.
link |
01:39:37.840
We should each be treated with appropriate respect.
link |
01:39:41.840
Thank you for listening and hope to see you next time.