
Ayanna Howard: Human-Robot Interaction & Ethics of Safety-Critical Systems | Lex Fridman Podcast #66



link |
00:00:00.000
The following is a conversation with Ayanna Howard.
link |
00:00:03.380
She's a roboticist, a professor at Georgia Tech,
link |
00:00:06.180
and director of the Human Automation Systems Lab,
link |
00:00:09.820
with research interests in human robot interaction,
link |
00:00:12.780
assistive robots in the home, therapy gaming apps,
link |
00:00:15.980
and remote robotic exploration of extreme environments.
link |
00:00:20.260
Like me, in her work, she cares a lot
link |
00:00:23.420
about both robots and human beings,
link |
00:00:26.340
and so I really enjoyed this conversation.
link |
00:00:29.540
This is the Artificial Intelligence Podcast.
link |
00:00:32.580
If you enjoy it, subscribe on YouTube,
link |
00:00:34.940
give it five stars on Apple Podcast,
link |
00:00:36.940
follow on Spotify, support it on Patreon,
link |
00:00:39.580
or simply connect with me on Twitter
link |
00:00:41.700
at Lex Fridman, spelled F R I D M A N.
link |
00:00:45.640
I recently started doing ads
link |
00:00:47.140
at the end of the introduction.
link |
00:00:48.700
I'll do one or two minutes after introducing the episode,
link |
00:00:51.660
and never any ads in the middle
link |
00:00:53.180
that can break the flow of the conversation.
link |
00:00:55.500
I hope that works for you
link |
00:00:56.860
and doesn't hurt the listening experience.
link |
00:01:00.140
This show is presented by Cash App,
link |
00:01:02.260
the number one finance app in the App Store.
link |
00:01:04.740
I personally use Cash App to send money to friends,
link |
00:01:07.540
but you can also use it to buy, sell,
link |
00:01:09.300
and deposit Bitcoin in just seconds.
link |
00:01:11.700
Cash App also has a new investing feature.
link |
00:01:14.580
You can buy fractions of a stock, say $1 worth,
link |
00:01:17.520
no matter what the stock price is.
link |
00:01:19.640
Broker services are provided by Cash App Investing,
link |
00:01:22.560
a subsidiary of Square and Member SIPC.
link |
00:01:25.840
I'm excited to be working with Cash App
link |
00:01:28.140
to support one of my favorite organizations called FIRST,
link |
00:01:31.540
best known for their FIRST Robotics and Lego competitions.
link |
00:01:35.060
They educate and inspire hundreds of thousands of students
link |
00:01:38.340
in over 110 countries,
link |
00:01:40.060
and have a perfect rating at Charity Navigator,
link |
00:01:42.820
which means that donated money
link |
00:01:44.100
is used to maximum effectiveness.
link |
00:01:46.860
When you get Cash App from the App Store or Google Play
link |
00:01:49.580
and use code LEXPODCAST, you'll get $10,
link |
00:01:53.500
and Cash App will also donate $10 to FIRST,
link |
00:01:56.420
which again, is an organization
link |
00:01:58.260
that I've personally seen inspire girls and boys
link |
00:02:01.060
to dream of engineering a better world.
link |
00:02:04.260
And now, here's my conversation with Ayanna Howard.
link |
00:02:09.420
What or who is the most amazing robot you've ever met,
link |
00:02:13.620
or perhaps had the biggest impact on your career?
link |
00:02:16.700
I haven't met her, but I grew up with her,
link |
00:02:21.060
but of course, Rosie.
link |
00:02:22.740
So, and I think it's because also.
link |
00:02:25.220
Who's Rosie?
link |
00:02:26.120
Rosie from the Jetsons.
link |
00:02:27.780
She is all things to all people, right?
link |
00:02:30.940
Think about it.
link |
00:02:31.780
Like anything you wanted, it was like magic, it happened.
link |
00:02:35.060
So people not only anthropomorphize,
link |
00:02:37.860
but project whatever they wish for the robot to be onto.
link |
00:02:41.940
Onto Rosie.
link |
00:02:42.920
But also, I mean, think about it.
link |
00:02:44.580
She was socially engaging.
link |
00:02:46.780
She every so often had an attitude, right?
link |
00:02:50.020
She kept us honest.
link |
00:02:51.940
She would push back sometimes
link |
00:02:53.740
when George was doing some weird stuff.
link |
00:02:56.980
But she cared about people, especially the kids.
link |
00:03:01.180
She was like the perfect robot.
link |
00:03:03.980
And you've said that people don't want
link |
00:03:06.460
their robots to be perfect.
link |
00:03:09.740
Can you elaborate on that?
link |
00:03:11.140
What do you think that is?
link |
00:03:11.980
Just like you said, Rosie pushed back a little bit
link |
00:03:14.780
every once in a while.
link |
00:03:15.720
Yeah, so I think it's that.
link |
00:03:18.260
So if you think about robotics in general,
link |
00:03:19.860
we want them because they enhance our quality of life.
link |
00:03:23.900
And usually that's linked to something that's functional.
link |
00:03:27.000
Even if you think of self driving cars,
link |
00:03:28.640
why is there a fascination?
link |
00:03:29.980
Because people really do hate to drive.
link |
00:03:31.500
Like there's the like Saturday driving
link |
00:03:34.140
where I can just speed,
link |
00:03:35.300
but then there's the I have to go to work every day
link |
00:03:37.500
and I'm in traffic for an hour.
link |
00:03:38.980
I mean, people really hate that.
link |
00:03:40.380
And so robots are designed to basically enhance
link |
00:03:45.380
our ability to increase our quality of life.
link |
00:03:49.740
And so the perfection comes from this aspect of interaction.
link |
00:03:55.460
If I think about how we drive, if we drove perfectly,
link |
00:04:00.020
we would never get anywhere, right?
link |
00:04:02.140
So think about how many times you had to run past the light
link |
00:04:07.140
because you see the car behind you
link |
00:04:09.020
is about to crash into you.
link |
00:04:10.380
Or that little kid kind of runs into the street
link |
00:04:15.320
and so you have to cross on the other side
link |
00:04:17.300
because there's no cars, right?
link |
00:04:18.460
Like if you think about it, we are not perfect drivers.
link |
00:04:21.220
Some of it is because it's our world.
link |
00:04:23.580
And so if you have a robot that is perfect
link |
00:04:26.780
in that sense of the word,
link |
00:04:28.740
they wouldn't really be able to function with us.
link |
00:04:31.180
Can you linger a little bit on the word perfection?
link |
00:04:34.520
So from the robotics perspective,
link |
00:04:37.380
what does that word mean
link |
00:04:39.460
and how is sort of the optimal behavior
link |
00:04:42.900
as you're describing different
link |
00:04:44.460
than what we think is perfection?
link |
00:04:46.620
Yeah, so perfection, if you think about it
link |
00:04:49.460
in the more theoretical point of view,
link |
00:04:51.980
it's really tied to accuracy, right?
link |
00:04:54.060
So if I have a function,
link |
00:04:55.620
can I complete it at 100% accuracy with zero errors?
link |
00:05:00.660
And so that's kind of, if you think about perfection
link |
00:05:04.180
in the sense of the word.
link |
00:05:05.220
And in the self driving car realm,
link |
00:05:07.500
do you think from a robotics perspective,
link |
00:05:10.460
we kind of think that perfection means
link |
00:05:13.940
following the rules perfectly,
link |
00:05:15.580
sort of defining, staying in the lane, changing lanes.
link |
00:05:19.580
When there's a green light, you go.
link |
00:05:20.900
When there's a red light, you stop.
link |
00:05:22.300
And that's the, and be able to perfectly see
link |
00:05:26.660
all the entities in the scene.
link |
00:05:29.140
That's the limit of what we think of as perfection.
link |
00:05:31.980
And I think that's where the problem comes
link |
00:05:33.740
is that when people think about perfection for robotics,
link |
00:05:38.340
the ones that are the most successful
link |
00:05:40.820
are the ones that are quote unquote perfect.
link |
00:05:43.260
Like I said, Rosie is perfect,
link |
00:05:44.660
but she actually wasn't perfect in terms of accuracy,
link |
00:05:47.380
but she was perfect in terms of how she interacted
link |
00:05:50.380
and how she adapted.
link |
00:05:51.540
And I think that's some of the disconnect
link |
00:05:53.300
is that we really want perfection
link |
00:05:56.460
with respect to its ability to adapt to us.
link |
00:05:59.980
We don't really want perfection with respect to 100% accuracy
link |
00:06:03.500
with respect to the rules that we just made up anyway, right?
link |
00:06:06.780
And so I think there's this disconnect sometimes
link |
00:06:09.500
between what we really want and what happens.
link |
00:06:13.260
And we see this all the time, like in my research, right?
link |
00:06:15.940
Like the optimal, quote unquote optimal interactions
link |
00:06:20.340
are when the robot is adapting based on the person,
link |
00:06:24.300
not 100% following what's optimal based on the rules.
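A minimal sketch of the distinction being drawn here, with every name and number invented for illustration rather than taken from Howard's research: a robot that blends a fixed, rule-optimal setting with a running estimate of what this particular person is comfortable with.

```python
# Hypothetical illustration: "optimal by the rules" vs. "optimal for this person".
# Nothing here comes from Howard's lab; parameters and the update rule are made up.

RULE_OPTIMAL_SPEED = 1.0  # m/s; what the rulebook or spec says is "optimal"

class AdaptiveRobot:
    """Blends the rule-optimal setting with what this person seems to prefer."""

    def __init__(self, adaptation_rate=0.2):
        self.estimated_comfort_speed = RULE_OPTIMAL_SPEED  # start from the rule
        self.adaptation_rate = adaptation_rate

    def observe(self, person_feedback):
        # person_feedback > 0 means "go faster", < 0 means "slow down"
        self.estimated_comfort_speed += self.adaptation_rate * person_feedback

    def choose_speed(self):
        # Half rulebook, half this particular person's inferred preference.
        return 0.5 * RULE_OPTIMAL_SPEED + 0.5 * self.estimated_comfort_speed

robot = AdaptiveRobot()
for feedback in [-0.3, -0.2, -0.2]:  # this person keeps asking it to slow down
    robot.observe(feedback)

print(round(robot.choose_speed(), 3))  # 0.93: drifts below the "rule optimal" 1.0
```

The only point of the sketch is that the "optimal" output drifts toward the person instead of staying pinned to the rulebook.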
link |
00:06:29.540
Just to linger on autonomous vehicles for a second,
link |
00:06:32.580
just your thoughts, maybe off the top of the head,
link |
00:06:36.180
how hard is that problem do you think
link |
00:06:37.940
based on what we just talked about?
link |
00:06:40.100
There's a lot of folks in the automotive industry,
link |
00:06:42.900
they're very confident from Elon Musk to Waymo
link |
00:06:45.900
to all these companies.
link |
00:06:47.620
How hard is it to solve that last piece?
link |
00:06:50.420
The last mile.
link |
00:06:51.340
The gap between the perfection and the human definition
link |
00:06:57.500
of how you actually function in this world.
link |
00:06:59.460
Yeah, so this is a moving target.
link |
00:07:00.580
So I remember when all the big companies
link |
00:07:04.460
started to heavily invest in this
link |
00:07:06.780
and there was a number of even roboticists
link |
00:07:09.860
as well as folks who were putting in the VCs
link |
00:07:13.180
and corporations, Elon Musk being one of them that said,
link |
00:07:16.660
self driving cars on the road with people
link |
00:07:19.460
within five years, that was a little while ago.
link |
00:07:24.180
And now people are saying five years, 10 years, 20 years,
link |
00:07:29.780
some are saying never, right?
link |
00:07:31.500
I think if you look at some of the things
link |
00:07:33.700
that are being successful are these
link |
00:07:39.420
basically fixed environments
link |
00:07:41.140
where you still have some anomalies, right?
link |
00:07:43.980
You still have people walking, you still have stores,
link |
00:07:46.460
but you don't have other drivers, right?
link |
00:07:50.060
Like other human drivers are,
link |
00:07:51.700
it's a dedicated space for the cars.
link |
00:07:55.580
Because if you think about robotics in general,
link |
00:07:57.140
where has it always been successful?
link |
00:07:59.020
I mean, you can say manufacturing,
link |
00:08:00.580
like way back in the day, right?
link |
00:08:02.260
It was a fixed environment, humans were not part
link |
00:08:04.340
of the equation, we're a lot better than that.
link |
00:08:07.180
But like when we can carve out scenarios
link |
00:08:10.940
that are closer to that space,
link |
00:08:13.780
then I think that it's where we are.
link |
00:08:16.660
So a closed campus where you don't have other human-driven cars
link |
00:08:20.540
and maybe some protection so that the students
link |
00:08:23.780
don't jet in front just because they wanna see what happens.
link |
00:08:27.220
Like having a little bit, I think that's where
link |
00:08:29.940
we're gonna see the most success in the near future.
link |
00:08:32.300
And be slow moving.
link |
00:08:33.660
Right, not 55, 60, 70 miles an hour,
link |
00:08:37.900
but the speed of a golf cart, right?
link |
00:08:42.100
So that said, the most successful
link |
00:08:45.220
in the automotive industry robots operating today
link |
00:08:47.900
in the hands of real people are ones that are traveling
link |
00:08:51.600
over 55 miles an hour and in unconstrained environments,
link |
00:08:55.540
which is Tesla vehicles, so Tesla autopilot.
link |
00:08:58.880
So I would love to hear sort of your,
link |
00:09:01.720
just thoughts of two things.
link |
00:09:04.300
So one, I don't know if you've gotten to see,
link |
00:09:07.020
you've heard about something called smart summon
link |
00:09:10.020
where Tesla system, autopilot system,
link |
00:09:13.520
where the car drives zero occupancy, no driver
link |
00:09:17.140
in the parking lot slowly sort of tries to navigate
link |
00:09:19.980
the parking lot to find itself to you.
link |
00:09:22.720
And there's some incredible amounts of videos
link |
00:09:25.900
and just hilarity that happens as it awkwardly tries
link |
00:09:28.860
to navigate this environment, but it's a beautiful
link |
00:09:32.340
nonverbal communication between machine and human
link |
00:09:35.180
that I think is a, it's like, it's some of the work
link |
00:09:38.780
that you do in this kind of interesting
link |
00:09:40.660
human robot interaction space.
link |
00:09:42.060
So what are your thoughts in general about it?
link |
00:09:43.780
So I do have that feature.
link |
00:09:46.980
Do you drive a Tesla?
link |
00:09:47.820
I do, mainly because I'm a gadget freak, right?
link |
00:09:52.100
So I say it's a gadget that happens to have some wheels.
link |
00:09:55.620
And yeah, I've seen some of the videos.
link |
00:09:58.220
But what's your experience like?
link |
00:09:59.420
I mean, you're a human robot interaction roboticist,
link |
00:10:02.700
you're a legit sort of expert in the field.
link |
00:10:05.580
So what does it feel for a machine to come to you?
link |
00:10:08.260
It's one of these very fascinating things,
link |
00:10:11.900
but also I am hyper, hyper alert, right?
link |
00:10:16.100
Like I'm hyper alert, like my butt, my thumb is like,
link |
00:10:20.540
oh, okay, I'm ready to take over.
link |
00:10:23.220
Even when I'm in my car or I'm doing things like automated
link |
00:10:27.080
backing into, so there's like a feature where you can do
link |
00:10:30.420
this automated backing into a parking space,
link |
00:10:33.140
or bring the car out of your garage,
link |
00:10:35.660
or even, you know, pseudo autopilot on the freeway, right?
link |
00:10:40.260
I am hypersensitive.
link |
00:10:42.220
I can feel like as I'm navigating,
link |
00:10:44.720
like, yeah, that's an error right there.
link |
00:10:46.900
Like I am very aware of it, but I'm also fascinated by it.
link |
00:10:52.260
And it does get better.
link |
00:10:54.300
Like I look and see it's learning from all of these people
link |
00:10:58.980
who are cutting it on, like every time I cut it on,
link |
00:11:02.700
it's getting better, right?
link |
00:11:04.120
And so I think that's what's amazing about it is that.
link |
00:11:07.100
This nice dance of you're still hyper vigilant.
link |
00:11:10.340
So you're still not trusting it at all.
link |
00:11:12.780
Yeah.
link |
00:11:13.600
And yet you're using it.
link |
00:11:14.580
On the highway, if I were to, like what,
link |
00:11:17.580
as a roboticist, we'll talk about trust a little bit.
link |
00:11:22.640
How do you explain that?
link |
00:11:23.640
You still use it.
link |
00:11:25.020
Is it the gadget freak part?
link |
00:11:26.460
Like where you just enjoy exploring technology?
link |
00:11:30.700
Or is that the right actually balance
link |
00:11:33.680
between robotics and humans is where you use it,
link |
00:11:36.860
but don't trust it.
link |
00:11:38.340
And somehow there's this dance
link |
00:11:40.100
that ultimately is a positive.
link |
00:11:42.100
Yeah, so I think I'm,
link |
00:11:44.620
I just don't necessarily trust technology,
link |
00:11:48.080
but I'm an early adopter, right?
link |
00:11:50.140
So when it first comes out,
link |
00:11:51.960
I will use everything,
link |
00:11:54.260
but I will be very, very cautious of how I use it.
link |
00:11:57.420
Do you read about it or do you explore it by just try it?
link |
00:12:01.020
Do you like crudely, to put it crudely,
link |
00:12:04.980
do you read the manual or do you learn through exploration?
link |
00:12:07.960
I'm an explorer.
link |
00:12:08.800
If I have to read the manual, then I do design.
link |
00:12:12.320
Then it's a bad user interface.
link |
00:12:14.180
It's a failure.
link |
00:12:16.460
Elon Musk is very confident that you kind of take it
link |
00:12:19.540
from where it is now to full autonomy.
link |
00:12:21.780
So from this human robot interaction,
link |
00:12:24.500
where you don't really trust and then you try
link |
00:12:26.700
and then you catch it when it fails to,
link |
00:12:29.180
it's going to incrementally improve itself
link |
00:12:32.300
into full where you don't need to participate.
link |
00:12:36.500
What's your sense of that trajectory?
link |
00:12:39.860
Is it feasible?
link |
00:12:41.040
So the promise there is by the end of next year,
link |
00:12:44.580
by the end of 2020 is the current promise.
link |
00:12:47.180
What's your sense about that journey that Tesla's on?
link |
00:12:53.620
So there's kind of three things going on though.
link |
00:12:56.580
I think in terms of will people go like as a user,
link |
00:13:03.260
as a adopter, will you trust going to that point?
link |
00:13:08.460
I think so, right?
link |
00:13:10.080
Like there are some users and it's because what happens is
link |
00:13:13.020
when you're hypersensitive at the beginning
link |
00:13:16.700
and then the technology tends to work,
link |
00:13:19.300
your apprehension slowly goes away.
link |
00:13:23.820
And as people, we tend to swing to the other extreme, right?
link |
00:13:28.260
Because it's like, oh, I was like hyper, hyper fearful
link |
00:13:30.900
or hypersensitive and it was awesome.
link |
00:13:33.940
And we just tend to swing.
link |
00:13:35.600
That's just human nature.
link |
00:13:37.380
And so you will have, I mean, and I...
link |
00:13:38.860
That's a scary notion because most people
link |
00:13:41.520
are now extremely untrusting of autopilot.
link |
00:13:44.980
They use it, but they don't trust it.
link |
00:13:46.460
And it's a scary notion that there's a certain point
link |
00:13:48.900
where you allow yourself to look at the smartphone
link |
00:13:51.340
for like 20 seconds.
link |
00:13:53.100
And then there'll be this phase shift
link |
00:13:55.300
where it'll be like 20 seconds, 30 seconds,
link |
00:13:57.580
one minute, two minutes.
link |
00:13:59.980
It's a scary proposition.
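A toy simulation of the swing both speakers are describing, assuming, purely for illustration, that vigilance decays by a constant factor after every uneventful trip and snaps back after a failure; the constants are invented, not measured.

```python
# Toy model of the dynamic described above: apprehension fades while the
# automation keeps succeeding, so vigilance "swings" low over time.
# The decay and reset constants are invented purely for illustration.

def simulate_vigilance(outcomes, start=1.0, decay=0.85, failure_reset=1.0):
    """outcomes: list of True (autopilot behaved) / False (it didn't)."""
    vigilance = start
    history = []
    for ok in outcomes:
        if ok:
            vigilance *= decay          # each uneventful trip relaxes the driver
        else:
            vigilance = failure_reset   # one scare snaps attention back
        history.append(round(vigilance, 2))
    return history

# Ten good trips, one bad one, then three more good trips.
print(simulate_vigilance([True] * 10 + [False] + [True] * 3))
```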
link |
00:14:02.020
But that's people, right?
link |
00:14:03.460
That's just, that's humans.
link |
00:14:05.560
I mean, I think of even our use of,
link |
00:14:09.980
I mean, just everything on the internet, right?
link |
00:14:12.380
Like think about how reliant we are on certain apps
link |
00:14:16.860
and certain engines, right?
link |
00:14:20.260
20 years ago, people would have been like, oh yeah, that's stupid.
link |
00:14:22.680
Like that makes no sense.
link |
00:14:23.940
Like, of course that's false.
link |
00:14:25.900
Like now it's just like, oh, of course I've been using it.
link |
00:14:29.100
It's been correct all this time.
link |
00:14:30.740
Of course aliens, I didn't think they existed,
link |
00:14:34.340
but now it says they do, obviously.
link |
00:14:37.620
100%, earth is flat.
link |
00:14:39.500
So, okay, but you said three things.
link |
00:14:43.860
So one is the human.
link |
00:14:44.700
Okay, so one is the human.
link |
00:14:45.820
And I think there will be a group of individuals
link |
00:14:47.820
that will swing, right?
link |
00:14:49.580
I just.
link |
00:14:50.420
Teenagers.
link |
00:14:51.260
Teenage, I mean, it'll be, it'll be adults.
link |
00:14:54.380
There's actually an age demographic
link |
00:14:56.400
that's optimal for technology adoption.
link |
00:15:00.140
And you can actually find them.
link |
00:15:02.260
And they're actually pretty easy to find.
link |
00:15:03.940
Just based on their habits, based on,
link |
00:15:06.100
so if someone like me who wasn't a roboticist
link |
00:15:10.420
would probably be the optimal kind of person, right?
link |
00:15:13.580
Early adopter, okay with technology,
link |
00:15:15.660
very comfortable and not hypersensitive, right?
link |
00:15:20.020
I'm just hypersensitive cause I designed this stuff.
link |
00:15:23.580
So there is a target demographic that will swing.
link |
00:15:25.940
The other one though,
link |
00:15:26.820
is you still have these humans that are on the road.
link |
00:15:31.380
That one is a harder, harder thing to do.
link |
00:15:35.100
And as long as we have people that are on the same streets,
link |
00:15:40.660
that's gonna be the big issue.
link |
00:15:42.480
And it's just because you can't possibly,
link |
00:15:45.260
I wanna say you can't possibly map the,
link |
00:15:48.020
some of the silliness of human drivers, right?
link |
00:15:51.380
Like as an example, when you're next to that car
link |
00:15:56.240
that has that big sticker called student driver, right?
link |
00:15:59.780
Like you are like, oh, either I'm going to like go around.
link |
00:16:04.580
Like we are, we know that that person
link |
00:16:06.740
is just gonna make mistakes that make no sense, right?
link |
00:16:09.260
How do you map that information?
link |
00:16:11.860
Or if I am in a car and I look over
link |
00:16:14.300
and I see two fairly young looking individuals
link |
00:16:19.220
and there's no student driver bumper
link |
00:16:21.100
and I see them chit chatting to each other,
link |
00:16:22.820
I'm like, oh, that's an issue, right?
link |
00:16:26.140
So how do you get that kind of information
link |
00:16:28.420
and that experience into basically an autopilot?
link |
00:16:35.660
And there's millions of cases like that
link |
00:16:37.260
where we take little hints to establish context.
link |
00:16:41.220
I mean, you said kind of beautifully poetic human things,
link |
00:16:44.360
but there's probably subtle things about the environment
link |
00:16:47.120
about it being maybe time for commuters
link |
00:16:52.900
to start going home from work
link |
00:16:55.220
and therefore you can make some kind of judgment
link |
00:16:57.140
about the group behavior of pedestrians, blah, blah, blah,
link |
00:17:00.060
and so on and so on.
link |
00:17:01.180
Or even cities, right?
link |
00:17:02.660
Like if you're in Boston, how people cross the street,
link |
00:17:07.100
like lights are not an issue versus other places
link |
00:17:10.660
where people will actually wait for the crosswalk.
link |
00:17:15.580
Seattle or somewhere peaceful.
link |
00:17:18.940
But what I've also seen sort of just even in Boston
link |
00:17:22.540
that intersection to intersection is different.
link |
00:17:25.500
So every intersection has a personality of its own.
link |
00:17:28.940
So certain neighborhoods of Boston are different.
link |
00:17:30.860
So we kind of, and based on different timing of day,
link |
00:17:35.220
at night, it's all, there's a dynamic to human behavior
link |
00:17:40.320
that we kind of figure out ourselves.
link |
00:17:42.420
We're not able to introspect and figure it out,
link |
00:17:46.100
but somehow our brain learns it.
link |
00:17:49.340
We do.
link |
00:17:50.340
And so you're saying, is there a shortcut?
link |
00:17:54.860
Is there a shortcut, though, for a robot?
link |
00:17:56.420
Is there something that could be done, you think,
link |
00:17:59.060
that, you know, that's what we humans do.
link |
00:18:02.660
It's just like bird flight, right?
link |
00:18:04.660
That's the example they give for flight.
link |
00:18:06.500
Do you necessarily need to build a bird that flies
link |
00:18:09.260
or can you do an airplane?
link |
00:18:11.860
Is there a shortcut to it?
link |
00:18:13.020
So I think the shortcut is, and I kind of,
link |
00:18:16.700
I talk about it as a fixed space,
link |
00:18:19.340
where, so imagine that there's a neighborhood
link |
00:18:23.280
that's a new smart city or a new neighborhood
link |
00:18:26.500
that says, you know what?
link |
00:18:27.540
We are going to design this new city
link |
00:18:31.460
based on supporting self driving cars.
link |
00:18:33.660
And then doing things, knowing that there's anomalies,
link |
00:18:37.660
knowing that people are like this, right?
link |
00:18:39.620
And designing it based on that assumption
link |
00:18:42.080
that like, we're gonna have this.
link |
00:18:43.940
That would be an example of a shortcut.
link |
00:18:45.540
So you still have people,
link |
00:18:47.140
but you do very specific things
link |
00:18:49.260
to try to minimize the noise a little bit
link |
00:18:51.740
as an example.
link |
00:18:53.820
And the people themselves become accepting of the notion
link |
00:18:56.180
that there's autonomous cars, right?
link |
00:18:57.740
Right, like they move into,
link |
00:18:59.700
so right now you have like a,
link |
00:19:01.420
you will have a self selection bias, right?
link |
00:19:03.580
Like individuals will move into this neighborhood
link |
00:19:06.180
knowing like this is part of like the real estate pitch,
link |
00:19:09.420
right?
link |
00:19:10.620
And so I think that's a way to do a shortcut.
link |
00:19:14.140
One, it allows you to deploy.
link |
00:19:17.540
It allows you to collect then data with these variances
link |
00:19:21.900
and anomalies, cause people are still people,
link |
00:19:24.020
but it's a safer space and it's more of an accepting space.
link |
00:19:28.820
I.e. when something in that space might happen
link |
00:19:31.900
because things do,
link |
00:19:34.100
because you already have the self selection,
link |
00:19:36.060
like people would be, I think a little more forgiving
link |
00:19:39.220
than other places.
link |
00:19:40.700
And you said three things, did we cover all of them?
link |
00:19:43.100
The third is legal law, liability,
link |
00:19:46.340
which I don't really want to touch,
link |
00:19:47.820
but it's still of concern.
link |
00:19:50.900
And the mishmash with like with policy as well,
link |
00:19:53.260
sort of government, all that whole.
link |
00:19:55.740
That big ball of stuff.
link |
00:19:57.740
Yeah, gotcha.
link |
00:19:59.100
So that's, so we're out of time now.
link |
00:20:03.540
Do you think from a robotics perspective,
link |
00:20:07.180
you know, if you're kind of honest about what cars do,
link |
00:20:09.820
they kind of threaten each other's life all the time.
link |
00:20:14.860
So cars are various.
link |
00:20:17.340
I mean, in order to navigate intersections,
link |
00:20:19.300
there's an assertiveness, there's a risk taking.
link |
00:20:22.300
And if you were to reduce it to an objective function,
link |
00:20:25.300
there's a probability of murder in that function,
link |
00:20:28.740
meaning you killing another human being
link |
00:20:31.900
and you're using that.
link |
00:20:33.580
First of all, it has to be low enough
link |
00:20:36.940
to be acceptable to you on an ethical level
link |
00:20:39.700
as an individual human being,
link |
00:20:41.300
but it has to be high enough for people to respect you
link |
00:20:45.300
to not sort of take advantage of you completely
link |
00:20:47.540
and jaywalk in front of you and so on.
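A minimal sketch of the kind of objective being described, under the assumption (invented here, not drawn from any real planner) that each candidate maneuver is scored as progress minus a weighted estimate of collision probability; the weight is what sets how assertive the car is.

```python
# Sketch of the trade-off described above: every candidate maneuver carries
# some nonzero estimated collision risk, and the planner's weighting of that
# risk decides how assertive the car is. All numbers are invented.

CANDIDATES = [
    # (name, progress gained, estimated probability of a collision)
    ("wait at the intersection",   0.0, 0.0000),
    ("creep forward assertively",  0.6, 0.0001),
    ("accelerate through the gap", 1.0, 0.0010),
]

def best_maneuver(risk_weight):
    # Higher risk_weight = more conservative; an extreme weight means the
    # car effectively never asserts itself and gets taken advantage of.
    def score(candidate):
        _, progress, p_collision = candidate
        return progress - risk_weight * p_collision
    return max(CANDIDATES, key=score)[0]

print(best_maneuver(risk_weight=100))     # assertive: accelerates through the gap
print(best_maneuver(risk_weight=100000))  # ultra-cautious: waits indefinitely
```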
link |
00:20:49.620
So, I mean, I don't think there's a right answer here,
link |
00:20:53.100
but what's, how do we solve that?
link |
00:20:56.100
How do we solve that from a robotics perspective
link |
00:20:57.940
when danger and human life is at stake?
link |
00:21:00.140
Yeah, as they say, cars don't kill people,
link |
00:21:01.980
people kill people.
link |
00:21:02.940
People kill people.
link |
00:21:05.100
Right.
link |
00:21:07.100
So I think.
link |
00:21:08.620
And now robotic algorithms would be killing people.
link |
00:21:10.780
Right, so it will be robotics algorithms that are pro,
link |
00:21:14.380
no, it will be robotic algorithms don't kill people.
link |
00:21:16.980
Developers of robotic algorithms kill people, right?
link |
00:21:19.740
I mean, one of the things is people are still in the loop
link |
00:21:22.940
and at least in the near and midterm,
link |
00:21:26.540
I think people will still be in the loop at some point,
link |
00:21:29.420
even if it's a developer.
link |
00:21:30.300
Like we're not necessarily at the stage
link |
00:21:31.860
where robots are programming autonomous robots
link |
00:21:36.740
with different behaviors quite yet.
link |
00:21:39.980
It's a scary notion, sorry to interrupt,
link |
00:21:42.260
that a developer has some responsibility
link |
00:21:47.420
in the death of a human being.
link |
00:21:49.700
That's a heavy burden.
link |
00:21:50.620
I mean, I think that's why the whole aspect of ethics
link |
00:21:55.460
in our community is so, so important, right?
link |
00:21:58.500
Like, because it's true.
link |
00:22:00.060
If you think about it, you can basically say,
link |
00:22:04.820
I'm not going to work on weaponized AI, right?
link |
00:22:07.460
Like people can say, that's not what I'm gonna do.
link |
00:22:09.860
But yet you are programming algorithms
link |
00:22:12.740
that might be used in healthcare algorithms
link |
00:22:15.620
that might decide whether this person
link |
00:22:17.260
should get this medication or not.
link |
00:22:18.980
And they don't and they die.
link |
00:22:21.420
Okay, so that is your responsibility, right?
link |
00:22:25.100
And if you're not conscious and aware
link |
00:22:27.340
that you do have that power when you're coding
link |
00:22:30.020
and things like that, I think that's just not a good thing.
link |
00:22:35.020
Like we need to think about this responsibility
link |
00:22:38.020
as we program robots and computing devices
link |
00:22:41.820
much more than we are.
link |
00:22:44.340
Yeah, so it's not an option to not think about ethics.
link |
00:22:46.980
I think it's a majority, I would say, of computer science.
link |
00:22:51.340
Sort of, it's kind of a hot topic now,
link |
00:22:53.860
I think about bias and so on, but it's,
link |
00:22:56.620
and we'll talk about it, but usually it's kind of,
link |
00:23:00.380
it's like a very particular group of people
link |
00:23:02.700
that work on that.
link |
00:23:04.260
And then people who do like robotics are like,
link |
00:23:06.940
well, I don't have to think about that.
link |
00:23:09.380
There's other smart people thinking about it.
link |
00:23:11.180
It seems that everybody has to think about it.
link |
00:23:14.580
It's not, you can't escape the ethics,
link |
00:23:17.060
whether it's bias or just every aspect of ethics
link |
00:23:21.140
that has to do with human beings.
link |
00:23:22.700
Everyone.
link |
00:23:23.540
So think about, I'm gonna age myself,
link |
00:23:25.700
but I remember when we didn't have like testers, right?
link |
00:23:30.140
And so what did you do?
link |
00:23:31.100
As a developer, you had to test your own code, right?
link |
00:23:33.580
Like you had to go through all the cases and figure it out
link |
00:23:36.140
and then they realized that,
link |
00:23:39.140
we probably need to have testing
link |
00:23:40.620
because we're not getting all the things.
link |
00:23:42.460
And so from there, what happens is like most developers,
link |
00:23:45.540
they do a little bit of testing, but it's usually like,
link |
00:23:48.100
okay, did my compiler bug out?
link |
00:23:49.780
Let me look at the warnings.
link |
00:23:51.140
Okay, is that acceptable or not, right?
link |
00:23:53.260
Like that's how you typically think about as a developer
link |
00:23:55.820
and you'll just assume that it's going to go
link |
00:23:58.220
to another process and they're gonna test it out.
link |
00:24:01.100
But I think we need to go back to those early days
link |
00:24:04.340
when you're a developer, you're developing,
link |
00:24:07.540
there should be like the say,
link |
00:24:09.500
okay, let me look at the ethical outcomes of this
link |
00:24:12.180
because there isn't a second like testing ethical testers,
link |
00:24:16.020
right, it's you.
link |
00:24:18.060
We did it back in the early coding days.
link |
00:24:21.180
I think that's where we are with respect to ethics.
link |
00:24:23.300
Like let's go back to what was good practices
link |
00:24:26.300
and only because we were just developing the field.
link |
00:24:30.060
Yeah, and it's a really heavy burden.
link |
00:24:33.980
I've had to feel it recently in the last few months,
link |
00:24:37.500
but I think it's a good one to feel like
link |
00:24:39.420
I've gotten a message, more than one from people.
link |
00:24:43.380
You know, I've unfortunately gotten some attention recently
link |
00:24:47.420
and I've gotten messages that say that
link |
00:24:50.380
I have blood on my hands
link |
00:24:52.300
because of working on semi autonomous vehicles.
link |
00:24:56.260
So the idea that you have semi autonomy means
link |
00:24:59.220
people will become, will lose vigilance and so on.
link |
00:25:02.020
That's just us being humans, as we described.
link |
00:25:05.140
And because of that, because of this idea
link |
00:25:08.100
that we're creating automation,
link |
00:25:10.060
there'll be people hurt because of it.
link |
00:25:12.780
And I think that's a beautiful thing.
link |
00:25:14.540
I mean, it's, you know, there's many nights
link |
00:25:16.220
where I wasn't able to sleep because of this notion.
link |
00:25:18.820
You know, you really do think about people that might die
link |
00:25:22.380
because of this technology.
link |
00:25:23.860
Of course, you can then start rationalizing saying,
link |
00:25:26.580
well, you know what, 40,000 people die in the United States
link |
00:25:29.100
every year and we're trying to ultimately try to save lives.
link |
00:25:32.380
But the reality is your code you've written
link |
00:25:35.780
might kill somebody.
link |
00:25:36.700
And that's an important burden to carry with you
link |
00:25:38.900
as you design the code.
link |
00:25:41.180
I don't even think of it as a burden
link |
00:25:43.820
if we train this concept correctly from the beginning.
link |
00:25:47.540
And I use, and not to say that coding is like
link |
00:25:50.300
being a medical doctor, but think about it.
link |
00:25:52.420
Medical doctors, if they've been in situations
link |
00:25:56.100
where their patient didn't survive, right?
link |
00:25:58.300
Do they give up and go away?
link |
00:26:00.820
No, every time they come in,
link |
00:26:02.540
they know that there might be a possibility
link |
00:26:05.460
that this patient might not survive.
link |
00:26:07.260
And so when they approach every decision,
link |
00:26:10.140
like that's in the back of their head.
link |
00:26:11.980
And so why is it that we aren't teaching,
link |
00:26:15.860
and those are tools though, right?
link |
00:26:17.220
They are given some of the tools to address that
link |
00:26:19.740
so that they don't go crazy.
link |
00:26:21.500
But we don't give those tools
link |
00:26:24.220
so that it does feel like a burden
link |
00:26:26.180
versus something of I have a great gift
link |
00:26:28.700
and I can do great, awesome good,
link |
00:26:31.100
but with it comes great responsibility.
link |
00:26:33.340
I mean, that's what we teach in terms of
link |
00:26:35.820
if you think about the medical schools, right?
link |
00:26:37.420
Great gift, great responsibility.
link |
00:26:39.540
I think if we just change the messaging a little,
link |
00:26:42.140
great gift, being a developer, great responsibility.
link |
00:26:45.580
And this is how you combine those.
link |
00:26:48.340
But do you think, I mean, this is really interesting.
link |
00:26:52.180
It's outside, I actually have no friends
link |
00:26:54.300
who are sort of surgeons or doctors.
link |
00:26:58.260
I mean, what does it feel like
link |
00:27:00.020
to make a mistake in a surgery and somebody to die
link |
00:27:03.780
because of that?
link |
00:27:04.780
Like, is that something you could be taught
link |
00:27:07.020
in medical school, sort of how to be accepting of that risk?
link |
00:27:10.580
So, because I do a lot of work with healthcare robotics,
link |
00:27:14.940
I have not lost a patient, for example.
link |
00:27:18.460
The first one's always the hardest, right?
link |
00:27:20.900
But they really teach the value, right?
link |
00:27:27.300
So, they teach responsibility,
link |
00:27:28.740
but they also teach the value.
link |
00:27:30.780
Like, you're saving 40,000,
link |
00:27:34.700
but in order to really feel good about that,
link |
00:27:38.260
when you come to a decision,
link |
00:27:40.100
you have to be able to say at the end,
link |
00:27:42.220
I did all that I could possibly do, right?
link |
00:27:45.300
Versus a, well, I just picked the first widget, right?
link |
00:27:49.100
Like, so every decision is actually thought through.
link |
00:27:52.220
It's not a habit, it's not a,
link |
00:27:53.780
let me just take the best algorithm
link |
00:27:55.340
that my friend gave me, right?
link |
00:27:57.060
It's a, is this it, is this the best?
link |
00:27:59.540
Have I done my best to do good, right?
link |
00:28:03.100
And so...
link |
00:28:03.940
You're right, and I think burden is the wrong word.
link |
00:28:06.500
It's a gift, but you have to treat it extremely seriously.
link |
00:28:10.740
Correct.
link |
00:28:13.260
So, on a slightly related note,
link |
00:28:15.500
in a recent paper,
link |
00:28:16.420
The Ugly Truth About Ourselves and Our Robot Creations,
link |
00:28:20.140
you discuss, you highlight some biases
link |
00:28:24.300
that may affect the function of various robotic systems.
link |
00:28:27.100
Can you talk through, if you remember, examples of some?
link |
00:28:30.100
There's a lot of examples.
link |
00:28:31.300
I usually... What is bias, first of all?
link |
00:28:33.060
Yeah, so bias is this,
link |
00:28:37.060
and so bias, which is different than prejudice.
link |
00:28:38.820
So, bias is that we all have these preconceived notions
link |
00:28:41.860
about particular, everything from particular groups
link |
00:28:45.940
to habits to identity, right?
link |
00:28:49.700
So, we have these predispositions,
link |
00:28:51.420
and so when we address a problem,
link |
00:28:54.100
we look at a problem and make a decision,
link |
00:28:56.020
those preconceived notions might affect our outputs,
link |
00:29:01.340
our outcomes.
link |
00:29:02.220
So, there the bias can be positive and negative,
link |
00:29:04.700
and then is prejudice the negative kind of bias?
link |
00:29:07.980
Prejudice is the negative, right?
link |
00:29:09.180
So, prejudice is that not only are you aware of your bias,
link |
00:29:13.540
but you then take it and have a negative outcome,
link |
00:29:18.820
even though you're aware, like...
link |
00:29:20.660
And there could be gray areas too.
link |
00:29:22.980
There's always gray areas.
link |
00:29:24.620
That's the challenging aspect of all ethical questions.
link |
00:29:27.580
So, I always like...
link |
00:29:28.620
So, there's a funny one,
link |
00:29:30.020
and in fact, I think it might be in the paper,
link |
00:29:31.740
because I think I talk about self driving cars,
link |
00:29:34.180
but think about this.
link |
00:29:35.460
We, for teenagers, right?
link |
00:29:39.500
Typically, insurance companies charge quite a bit of money
link |
00:29:44.540
if you have a teenage driver.
link |
00:29:46.740
So, you could say that's an age bias, right?
link |
00:29:50.860
But no one will claim...
link |
00:29:52.380
I mean, parents will be grumpy,
link |
00:29:54.060
but no one really says that that's not fair.
link |
00:29:58.660
That's interesting.
link |
00:29:59.500
We don't...
link |
00:30:00.340
That's right, that's right.
link |
00:30:01.580
It's everybody in human factors and safety research almost...
link |
00:30:06.580
I mean, it's quite ruthlessly critical of teenagers.
link |
00:30:12.780
And we don't question, is that okay?
link |
00:30:15.020
Is that okay to be ageist in this kind of way?
link |
00:30:17.140
It is, and it is ageist, right?
link |
00:30:18.580
It's definitely ageist, there's no question about it.
link |
00:30:20.780
And so, this is the gray area, right?
link |
00:30:24.940
Because you know that teenagers are more likely
link |
00:30:29.820
to be in accidents,
link |
00:30:30.860
and so, there's actually some data to it.
link |
00:30:33.060
But then, if you take that same example,
link |
00:30:34.980
and you say, well, I'm going to make the insurance higher
link |
00:30:39.380
for an area of Boston,
link |
00:30:43.380
because there's a lot of accidents.
link |
00:30:45.020
And then, they find out that that's correlated
link |
00:30:48.260
with socioeconomics.
link |
00:30:50.220
Well, then it becomes a problem, right?
link |
00:30:52.420
Like, that is not acceptable,
link |
00:30:55.180
but yet, the teenager, which is age...
link |
00:30:58.940
It's ageist, but that one is accepted, right?
link |
00:31:01.820
We figure that out as a society by having conversations,
link |
00:31:05.260
by having discourse.
link |
00:31:06.180
I mean, throughout history,
link |
00:31:07.540
the definition of what is ethical or not has changed,
link |
00:31:11.340
and hopefully, always for the better.
link |
00:31:14.300
Correct, correct.
link |
00:31:15.420
So, in terms of bias or prejudice in algorithms,
link |
00:31:22.300
what examples do you sometimes think about?
link |
00:31:25.540
So, I think about quite a bit the medical domain,
link |
00:31:28.940
just because historically, right?
link |
00:31:31.260
The healthcare domain has had these biases,
link |
00:31:34.500
typically based on gender and ethnicity, primarily.
link |
00:31:40.220
A little in age, but not so much.
link |
00:31:43.660
Historically, if you think about FDA and drug trials,
link |
00:31:49.260
it's harder to find women that aren't childbearing,
link |
00:31:54.540
and so you may not test drugs on them at the same level.
link |
00:31:56.900
Right, so there's these things.
link |
00:31:58.940
And so, if you think about robotics, right?
link |
00:32:02.900
Something as simple as,
link |
00:32:04.860
I'd like to design an exoskeleton, right?
link |
00:32:07.740
What should the material be?
link |
00:32:09.180
What should the weight be?
link |
00:32:10.140
What should the form factor be?
link |
00:32:14.260
Who are you gonna design it around?
link |
00:32:16.940
I will say that in the US,
link |
00:32:19.620
women's average height and weight
link |
00:32:21.620
is slightly different than guys.
link |
00:32:23.380
So, who are you gonna choose?
link |
00:32:25.820
Like, if you're not thinking about it from the beginning,
link |
00:32:28.900
as, okay, when I design this and I look at the algorithms
link |
00:32:33.420
and I design the control system and the forces
link |
00:32:35.540
and the torques, if you're not thinking about,
link |
00:32:38.060
well, you have different types of body structure,
link |
00:32:41.500
you're gonna design to what you're used to.
link |
00:32:44.380
Oh, this fits all the folks in my lab, right?
link |
00:32:48.060
So, thinking about it from the very beginning is important.
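A hypothetical version of "thinking about it from the beginning" for the exoskeleton example: size an actuator from a convenient sample, then audit it against the population you actually intend to serve. The weights, lever arm, and torque model below are placeholders, not real anthropometric or design data.

```python
# Hypothetical design check: does an actuator sized around one group's
# body-weight range also cover another group's? The weights below are
# placeholders, not real anthropometric data.

import statistics

lab_members_kg = [78, 82, 85, 90, 88, 76]        # "fits all the folks in my lab"
target_users_kg = [52, 58, 61, 67, 73, 95, 102]  # the population you actually serve

def required_torque(weight_kg, lever_arm_m=0.45, gravity=9.81):
    # Crude static estimate: torque needed to support the limb segment.
    return weight_kg * gravity * lever_arm_m

# Size the actuator only from the lab sample...
actuator_limit = max(required_torque(w) for w in lab_members_kg)

# ...then audit it against the intended user population.
uncovered = [w for w in target_users_kg if required_torque(w) > actuator_limit]
print(f"actuator limit: {actuator_limit:.0f} N*m")
print(f"users the design does not cover: {uncovered}")
print(f"mean target-user weight: {statistics.mean(target_users_kg):.1f} kg")
```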
link |
00:32:51.300
What about sort of algorithms that train on data
link |
00:32:54.500
kind of thing?
link |
00:32:55.940
Sadly, our society already has a lot of negative bias.
link |
00:33:01.140
And so, if we collect a lot of data,
link |
00:33:04.540
even if it's a balanced way,
link |
00:33:06.100
that's going to contain the same bias
link |
00:33:07.620
that our society contains.
link |
00:33:08.820
And so, yeah, is there things there that bother you?
link |
00:33:13.540
Yeah, so you actually said something.
link |
00:33:15.420
You had said how we have biases,
link |
00:33:19.740
but hopefully we learn from them and we become better, right?
link |
00:33:22.940
And so, that's where we are now, right?
link |
00:33:24.940
So, the data that we're collecting is historic.
link |
00:33:28.420
So, it's based on these things
link |
00:33:29.940
when we knew it was bad to discriminate,
link |
00:33:32.420
but that's the data we have and we're trying to fix it now,
link |
00:33:35.900
but we're fixing it based on the data
link |
00:33:37.660
that was used in the first place.
link |
00:33:39.260
Fix it in post.
link |
00:33:40.460
Right, and so the decisions,
link |
00:33:43.580
and you can look at everything from the whole aspect
link |
00:33:46.700
of predictive policing, criminal recidivism.
link |
00:33:51.220
There was a recent paper that had the healthcare algorithms,
link |
00:33:54.100
which had kind of a sensational title.
link |
00:33:58.020
I'm not pro sensationalism in titles,
link |
00:34:00.980
but again, you read it, right?
link |
00:34:03.540
So, it makes you read it,
link |
00:34:05.540
but I'm like, really?
link |
00:34:06.780
Like, ugh, you could have.
link |
00:34:08.740
What's the topic of the sensationalism?
link |
00:34:10.580
I mean, what's underneath it?
link |
00:34:13.100
What's, if you could sort of educate me
link |
00:34:16.100
on what kind of bias creeps into the healthcare space.
link |
00:34:18.940
Yeah, so.
link |
00:34:19.780
I mean, you already kind of mentioned.
link |
00:34:21.260
Yeah, so this one was the headline was
link |
00:34:24.820
racist AI algorithms.
link |
00:34:27.300
Okay, like, okay, that's totally a clickbait title.
link |
00:34:30.700
And so you looked at it and so there was data
link |
00:34:34.060
that these researchers had collected.
link |
00:34:36.460
I believe, I wanna say it was either Science or Nature.
link |
00:34:39.220
It just was just published,
link |
00:34:40.460
but they didn't have a sensational title.
link |
00:34:42.420
It was like the media.
link |
00:34:44.700
And so they had looked at demographics,
link |
00:34:47.300
I believe, between black and white women, right?
link |
00:34:51.940
And they showed that there was a discrepancy
link |
00:34:56.660
in the outcomes, right?
link |
00:34:58.980
And so, and it was tied to ethnicity, tied to race.
link |
00:35:02.220
The piece that the researchers did
link |
00:35:04.620
actually went through the whole analysis, but of course.
link |
00:35:08.620
I mean, the journalists with AI are problematic
link |
00:35:11.900
across the board, let's say.
link |
00:35:14.140
And so this is a problem, right?
link |
00:35:15.980
And so there's this thing about,
link |
00:35:18.100
oh, AI, it has all these problems.
link |
00:35:20.420
We're doing it on historical data
link |
00:35:22.740
and the outcomes are uneven based on gender
link |
00:35:25.900
or ethnicity or age.
link |
00:35:27.940
But I am always saying is like, yes,
link |
00:35:30.660
we need to do better, right?
link |
00:35:32.340
We need to do better.
link |
00:35:33.460
It is our duty to do better.
link |
00:35:36.620
But the worst AI is still better than us.
link |
00:35:39.700
Like, you take the best of us
link |
00:35:41.820
and we're still worse than the worst AI,
link |
00:35:44.020
at least in terms of these things.
link |
00:35:45.500
And that's actually not discussed, right?
link |
00:35:47.820
And so I think, and that's why the sensational title, right?
link |
00:35:51.780
And so it's like, so then you can have individuals go like,
link |
00:35:54.180
oh, we don't need to use this AI.
link |
00:35:55.340
I'm like, oh, no, no, no, no.
link |
00:35:56.620
I want the AI instead of the doctors
link |
00:36:00.780
that provided that data,
link |
00:36:01.860
because it's still better than that, right?
link |
00:36:04.060
I think that's really important to linger on,
link |
00:36:06.660
is the idea that this AI is racist.
link |
00:36:10.300
It's like, well, compared to what?
link |
00:36:14.020
Sort of, I think we set, unfortunately,
link |
00:36:20.100
way too high of a bar for AI algorithms.
link |
00:36:23.220
And in the ethical space where perfect is,
link |
00:36:25.940
I would argue, probably impossible.
link |
00:36:28.940
Then if we set the bar of perfection, essentially,
link |
00:36:33.020
of it has to be perfectly fair, whatever that means,
link |
00:36:37.500
it means we're setting it up for failure.
link |
00:36:39.580
But that's really important to say what you just said,
link |
00:36:41.940
which is, well, it's still better than it is.
link |
00:36:44.900
And one of the things I think
link |
00:36:46.860
that we don't get enough credit for,
link |
00:36:50.260
just in terms of as developers,
link |
00:36:52.140
is that you can now poke at it, right?
link |
00:36:55.820
So it's harder to say, is this hospital,
link |
00:36:58.820
is this city doing something, right?
link |
00:37:01.020
Until someone brings in a civil case, right?
link |
00:37:04.380
Well, with AI, it can process through all this data
link |
00:37:07.100
and say, hey, yes, there was an issue here,
link |
00:37:12.500
but here it is, we've identified it,
link |
00:37:14.460
and then the next step is to fix it.
link |
00:37:16.140
I mean, that's a nice feedback loop
link |
00:37:18.060
versus waiting for someone to sue someone else
link |
00:37:21.300
before it's fixed, right?
link |
00:37:22.740
And so I think that power,
link |
00:37:25.060
we need to capitalize on a little bit more, right?
link |
00:37:27.580
Instead of having the sensational titles,
link |
00:37:29.660
have the, okay, this is a problem,
link |
00:37:33.300
and this is how we're fixing it,
link |
00:37:34.540
and people are putting money to fix it
link |
00:37:36.500
because we can make it better.
link |
00:37:38.580
I look at like facial recognition,
link |
00:37:40.340
how Joy, she basically called out a couple of companies
link |
00:37:45.460
and said, hey, and most of them were like,
link |
00:37:48.220
oh, embarrassment, and the next time it had been fixed,
link |
00:37:53.020
right, it had been fixed better, right?
link |
00:37:54.860
And then it was like, oh, here's some more issues.
link |
00:37:56.740
And I think that conversation then moves that needle
link |
00:38:01.740
to having much more fair and unbiased and ethical aspects,
link |
00:38:07.540
as long as both sides, the developers are willing to say,
link |
00:38:10.580
okay, I hear you, yes, we are going to improve,
link |
00:38:14.020
and you have other developers who are like,
link |
00:38:16.100
hey, AI, it's wrong, but I love it, right?
link |
00:38:19.620
Yes, so speaking of this really nice notion
link |
00:38:23.020
that AI is maybe flawed but better than humans,
link |
00:38:26.980
so just made me think of it,
link |
00:38:29.140
one example of flawed humans is our political system.
link |
00:38:34.100
Do you think, or you said judicial as well,
link |
00:38:38.700
do you have a hope for AI sort of being elected
link |
00:38:46.140
for president or running our Congress
link |
00:38:49.780
or being able to be a powerful representative of the people?
link |
00:38:53.940
So I mentioned, and I truly believe that this whole world
link |
00:38:58.940
of AI is in partnerships with people.
link |
00:39:01.340
And so what does that mean?
link |
00:39:02.420
I don't believe, or maybe I just don't,
link |
00:39:07.620
I don't believe that we should have an AI for president,
link |
00:39:11.420
but I do believe that a president
link |
00:39:13.540
should use AI as an advisor, right?
link |
00:39:15.900
Like, if you think about it,
link |
00:39:17.420
every president has a cabinet of individuals
link |
00:39:21.900
that have different expertise
link |
00:39:23.660
that they should listen to, right?
link |
00:39:26.060
Like, that's kind of what we do.
link |
00:39:27.980
And you put smart people with smart expertise
link |
00:39:31.100
around certain issues, and you listen.
link |
00:39:33.420
I don't see why AI can't function
link |
00:39:35.700
as one of those smart individuals giving input.
link |
00:39:39.260
So maybe there's an AI on healthcare,
link |
00:39:41.020
maybe there's an AI on education and right,
link |
00:39:43.820
like all of these things that a human is processing, right?
link |
00:39:48.780
Because at the end of the day,
link |
00:39:51.380
there's people that are human
link |
00:39:53.540
that are going to be at the end of the decision.
link |
00:39:55.500
And I don't think as a world, as a culture, as a society,
link |
00:39:59.260
that we would totally, and this is us,
link |
00:40:02.980
like this is some fallacy about us,
link |
00:40:05.260
but we need to see that leader, that person as human.
link |
00:40:11.780
And most people don't realize
link |
00:40:13.180
that like leaders have a whole lot of advice, right?
link |
00:40:16.940
Like when they say something, it's not that they woke up,
link |
00:40:19.500
well, usually they don't wake up in the morning
link |
00:40:21.780
and be like, I have a brilliant idea, right?
link |
00:40:24.340
It's usually a, okay, let me listen.
link |
00:40:26.620
I have a brilliant idea,
link |
00:40:27.460
but let me get a little bit of feedback on this.
link |
00:40:29.780
Like, okay.
link |
00:40:30.900
And then it's a, yeah, that was an awesome idea
link |
00:40:33.020
or it's like, yeah, let me go back.
link |
00:40:35.780
We already talked through a bunch of them,
link |
00:40:37.300
but are there some possible solutions
link |
00:40:41.380
to the bias that's present in our algorithms
link |
00:40:45.100
beyond what we just talked about?
link |
00:40:46.540
So I think there's two paths.
link |
00:40:49.180
One is to figure out how to systematically
link |
00:40:53.620
do the feedback and corrections.
link |
00:40:56.380
So right now it's ad hoc, right?
link |
00:40:57.980
It's a researcher identifying some outcomes
link |
00:41:02.300
that are not, don't seem to be fair, right?
link |
00:41:05.260
They publish it, they write about it.
link |
00:41:07.780
And the, either the developer or the companies
link |
00:41:11.260
that have adopted the algorithms may try to fix it, right?
link |
00:41:14.100
And so it's really ad hoc and it's not systematic.
link |
00:41:18.700
There's, it's just, it's kind of like,
link |
00:41:21.260
I'm a researcher, that seems like an interesting problem,
link |
00:41:24.460
which means that there's a whole lot out there
link |
00:41:26.340
that's not being looked at, right?
link |
00:41:28.900
Cause it's kind of researcher driven.
link |
00:41:32.740
And I don't necessarily have a solution,
link |
00:41:35.460
but that process I think could be done a little bit better.
link |
00:41:41.020
One way is I'm going to poke a little bit
link |
00:41:44.820
at some of the corporations, right?
link |
00:41:48.060
Like maybe the corporations when they think
link |
00:41:50.660
about a product, they should, instead of,
link |
00:41:53.660
in addition to hiring these, you know, bug,
link |
00:41:57.780
they give these.
link |
00:41:59.660
Oh yeah, yeah, yeah.
link |
00:42:01.420
Like awards when you find a bug.
link |
00:42:02.780
Yeah, security bug, you know, let's put it
link |
00:42:06.620
like we will give the, whatever the award is
link |
00:42:09.580
that we give for the people who find these security holes,
link |
00:42:12.460
find an ethics hole, right?
link |
00:42:13.820
Like find an unfairness hole
link |
00:42:15.220
and we will pay you X for each one you find.
link |
00:42:17.660
I mean, why can't they do that?
link |
00:42:19.620
One is a win win.
link |
00:42:20.900
They show that they're concerned about it,
link |
00:42:22.940
that this is important and they don't have
link |
00:42:24.980
to necessarily dedicate their own, like, internal resources.
link |
00:42:28.660
And it also means that everyone who has
link |
00:42:30.780
like their own bias lens, like I'm interested in age.
link |
00:42:34.460
And so I'll find the ones based on age
link |
00:42:36.420
and I'm interested in gender and right,
link |
00:42:38.260
which means that you get like all
link |
00:42:39.860
of these different perspectives.
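A minimal sketch of what an outside "unfairness hole" hunter might run against a deployed model's decisions, assuming only a log of (group, decision) pairs; the records and the 0.1 gap threshold are invented, and a real audit would use established fairness metrics and significance tests.

```python
# Minimal sketch of an external "unfairness hole" check: compare a model's
# positive-decision rate across groups and flag large gaps. The records and
# the 0.1 threshold are invented; a real audit would use established fairness
# definitions (e.g., equalized odds) plus statistical testing.

from collections import defaultdict

decisions = [  # (group the person belongs to, model said yes?)
    ("group_a", True), ("group_a", True), ("group_a", False), ("group_a", True),
    ("group_b", False), ("group_b", False), ("group_b", True), ("group_b", False),
]

def approval_rates(records):
    yes = defaultdict(int)
    total = defaultdict(int)
    for group, approved in records:
        total[group] += 1
        yes[group] += approved
    return {g: yes[g] / total[g] for g in total}

rates = approval_rates(decisions)
gap = max(rates.values()) - min(rates.values())
print(rates)
if gap > 0.1:  # a bounty-worthy "ethics hole" under this toy threshold
    print(f"flag for review: approval-rate gap of {gap:.2f} between groups")
```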
link |
00:42:41.420
But you think of it in a data driven way.
link |
00:42:43.220
So like sort of, if we look at a company like Twitter,
link |
00:42:48.220
it gets, it's under a lot of fire
link |
00:42:51.660
for discriminating against certain political beliefs.
link |
00:42:54.820
Correct.
link |
00:42:55.880
And sort of, there's a lot of people,
link |
00:42:58.060
this is the sad thing,
link |
00:42:59.260
cause I know how hard the problem is
link |
00:43:00.700
and I know the Twitter folks are working really hard at it.
link |
00:43:03.060
Even Facebook that everyone seems to hate
link |
00:43:04.980
are working really hard at this.
link |
00:43:06.860
You know, the kind of evidence that people bring
link |
00:43:09.320
is basically anecdotal evidence.
link |
00:43:11.240
Well, me or my friend, all we said is X
link |
00:43:15.020
and for that we got banned.
link |
00:43:17.100
And that's kind of a discussion of saying,
link |
00:43:20.980
well, look, that's usually, first of all,
link |
00:43:23.260
the whole thing is taken out of context.
link |
00:43:25.500
So they present sort of anecdotal evidence.
link |
00:43:28.660
And how are you supposed to, as a company,
link |
00:43:31.140
in a healthy way, have a discourse
link |
00:43:33.080
about what is and isn't ethical?
link |
00:43:35.980
How do we make algorithms ethical
link |
00:43:38.060
when people are just blowing everything out of proportion?
link |
00:43:40.780
Like they're outraged about a particular
link |
00:43:45.140
anecdotal piece of evidence that's very difficult
link |
00:43:48.220
to sort of contextualize in the big data driven way.
link |
00:43:52.660
Do you have a hope for companies like Twitter and Facebook?
link |
00:43:55.900
Yeah, so I think there's a couple of things going on, right?
link |
00:43:59.820
First off, remember this whole aspect
link |
00:44:04.860
of we are becoming reliant on technology.
link |
00:44:09.420
We're also becoming reliant on a lot of these,
link |
00:44:14.380
the apps and the resources that are provided, right?
link |
00:44:17.980
So some of it is kind of anger, like I need you, right?
link |
00:44:21.660
And you're not working for me, right?
link |
00:44:23.220
Not working for me, all right.
link |
00:44:24.660
But I think, and so some of it,
link |
00:44:27.300
and I wish that there was a little bit
link |
00:44:31.380
of change of rethinking.
link |
00:44:32.860
So some of it is like, oh, we'll fix it in house.
link |
00:44:35.560
No, that's like, okay, I'm a fox
link |
00:44:38.980
and I'm going to watch these hens
link |
00:44:40.940
because I think it's a problem that foxes eat hens.
link |
00:44:44.060
No, right?
link |
00:44:45.180
Like be good citizens and say, look, we have a problem.
link |
00:44:50.860
And we are willing to open ourselves up
link |
00:44:54.820
for others to come in and look at it
link |
00:44:57.060
and not try to fix it in house.
link |
00:44:58.740
Because if you fix it in house,
link |
00:45:00.460
there's conflict of interest.
link |
00:45:01.940
If I find something, I'm probably going to want to fix it
link |
00:45:04.440
and hopefully the media won't pick it up, right?
link |
00:45:07.300
And that then causes distrust
link |
00:45:09.320
because someone inside is going to be mad at you
link |
00:45:11.880
and go out and talk about how,
link |
00:45:13.580
yeah, they canned the resume survey because it, right?
link |
00:45:17.780
Like be nice people.
link |
00:45:19.320
Like just say, look, we have this issue.
link |
00:45:22.760
Community, help us fix it.
link |
00:45:24.420
And we will give you like, you know,
link |
00:45:25.780
the bug finder fee if you do.
link |
00:45:28.100
Do you have hope that the community,
link |
00:45:31.260
us as a human civilization on the whole is good
link |
00:45:35.340
and can be trusted to guide the future of our civilization
link |
00:45:39.500
into a positive direction?
link |
00:45:40.940
I think so.
link |
00:45:41.880
So I'm an optimist, right?
link |
00:45:44.100
And, you know, there were some dark times in history always.
link |
00:45:49.980
I think now we're in one of those dark times.
link |
00:45:52.900
I truly do.
link |
00:45:53.740
In which aspect?
link |
00:45:54.620
The polarization.
link |
00:45:56.260
And it's not just US, right?
link |
00:45:57.560
So if it was just US, I'd be like, yeah, it's a US thing,
link |
00:46:00.020
but we're seeing it like worldwide, this polarization.
link |
00:46:04.380
And so I worry about that.
link |
00:46:06.540
But I do fundamentally believe that at the end of the day,
link |
00:46:11.980
people are good, right?
link |
00:46:13.420
And why do I say that?
link |
00:46:14.780
Because anytime there's a scenario
link |
00:46:17.700
where people are in danger and I will use,
link |
00:46:20.820
so Atlanta, we had a snowmageddon
link |
00:46:24.260
and people can laugh about that.
link |
00:46:26.620
People at the time, so the city closed for, you know,
link |
00:46:30.460
little snow, but it was ice and the city closed down.
link |
00:46:33.420
But you had people opening up their homes and saying,
link |
00:46:35.720
hey, you have nowhere to go, come to my house, right?
link |
00:46:39.060
Hotels were just saying like, sleep on the floor.
link |
00:46:41.820
Like places like, you know, the grocery stores were like,
link |
00:46:44.420
hey, here's food.
link |
00:46:45.940
There was no like, oh, how much are you gonna pay me?
link |
00:46:47.940
It was like this, such a community.
link |
00:46:50.500
And like people who didn't know each other,
link |
00:46:52.140
strangers were just like, can I give you a ride home?
link |
00:46:55.540
And that was a point I was like, you know what, like.
link |
00:46:59.420
That reveals that the deeper thing is,
link |
00:47:03.100
there's a compassionate love that we all have within us.
link |
00:47:06.940
It's just that when all of that is taken care of
link |
00:47:09.500
and we get bored, we love drama.
link |
00:47:11.300
And that's, I think almost like the division
link |
00:47:14.820
is a sign of the times being good,
link |
00:47:17.100
is that it's just entertaining
link |
00:47:19.060
on some unpleasant mammalian level to watch,
link |
00:47:24.220
to disagree with others.
link |
00:47:26.140
And Twitter and Facebook are actually taking advantage
link |
00:47:30.260
of that in a sense because it brings you back
link |
00:47:33.220
to the platform and they're advertiser driven,
link |
00:47:36.180
so they make a lot of money.
link |
00:47:37.620
So you go back and you click.
link |
00:47:39.300
Love doesn't sell quite as well in terms of advertisement.
link |
00:47:43.700
It doesn't.
link |
00:47:44.940
So you've started your career
link |
00:47:46.980
at NASA Jet Propulsion Laboratory,
link |
00:47:49.100
but before I ask a few questions there,
link |
00:47:51.980
have you ever happened to see Space Odyssey,
link |
00:47:54.460
2001 Space Odyssey?
link |
00:47:57.220
Yes.
link |
00:47:58.060
Okay, do you think HAL 9000,
link |
00:48:01.420
so we're talking about ethics.
link |
00:48:03.420
Do you think HAL did the right thing
link |
00:48:06.700
by taking the priority of the mission
link |
00:48:08.580
over the lives of the astronauts?
link |
00:48:10.260
Do you think HAL is good or evil?
link |
00:48:15.900
Easy questions.
link |
00:48:16.900
Yeah.
link |
00:48:19.420
HAL was misguided.
link |
00:48:21.380
You're one of the people that would be in charge
link |
00:48:24.060
of an algorithm like HAL.
link |
00:48:26.140
Yeah.
link |
00:48:26.980
What would you do better?
link |
00:48:28.340
If you think about what happened
link |
00:48:31.180
was there was no fail safe, right?
link |
00:48:35.380
So perfection, right?
link |
00:48:37.780
Like what is that?
link |
00:48:38.620
I'm gonna make something that I think is perfect,
link |
00:48:40.840
but if my assumptions are wrong,
link |
00:48:44.620
it'll be perfect based on the wrong assumptions, right?
link |
00:48:47.560
That's something that you don't know until you deploy
link |
00:48:51.700
and then you're like, oh yeah, messed up.
link |
00:48:53.820
But what that means is that when we design software,
link |
00:48:58.340
such as in Space Odyssey,
link |
00:49:00.300
when we put things out,
link |
00:49:02.100
that there has to be a fail safe.
link |
00:49:04.000
There has to be the ability that once it's out there,
link |
00:49:07.700
we can grade it as an F and it fails
link |
00:49:11.360
and it doesn't continue, right?
link |
00:49:13.060
There's some way that it can be brought in
link |
00:49:16.020
and removed in that aspect.
link |
00:49:19.620
Because that's what happened with HAL.
link |
00:49:21.060
It was like assumptions were wrong.
link |
00:49:23.740
It was perfectly correct based on those assumptions
link |
00:49:27.820
and there was no way to change it,
link |
00:49:31.020
change the assumptions at all.
link |
00:49:34.020
And the change to fall back would be to a human.
link |
00:49:37.020
So you ultimately think like human should be,
link |
00:49:42.340
it's not turtles or AI all the way down.
link |
00:49:45.580
It's at some point, there's a human that actually.
link |
00:49:47.820
I still think that,
link |
00:49:48.860
and again, because I do human robot interaction,
link |
00:49:51.420
I still think the human needs to be part of the equation
link |
00:49:54.980
at some point.
link |
00:49:56.440
So what, just looking back,
link |
00:49:58.460
what are some fascinating things in robotic space
link |
00:50:01.900
that NASA was working at the time?
link |
00:50:03.460
Or just in general, what have you gotten to play with
link |
00:50:07.700
and what are your memories from working at NASA?
link |
00:50:10.060
Yeah, so one of my first memories
link |
00:50:13.580
was they were working on a surgical robot system
link |
00:50:18.580
that could do eye surgery, right?
link |
00:50:21.880
And this was back in, oh my gosh, it must've been,
link |
00:50:25.700
oh, maybe 92, 93, 94.
link |
00:50:30.580
So it's like almost like a remote operation.
link |
00:50:32.880
Yeah, it was remote operation.
link |
00:50:34.720
In fact, you can even find some old tech reports on it.
link |
00:50:38.400
So think of it, like now we have DaVinci, right?
link |
00:50:41.620
Like think of it, but these were like the late 90s, right?
link |
00:50:45.880
And I remember going into the lab one day
link |
00:50:48.240
and I was like, what's that, right?
link |
00:50:51.000
And of course it wasn't pretty, right?
link |
00:50:53.960
Because the technology, but it was like functional
link |
00:50:56.640
and you had this individual that could use
link |
00:50:59.240
a version of haptics to actually do the surgery
link |
00:51:01.960
and they had this mockup of a human face
link |
00:51:04.360
and like the eyeballs and you can see this little drill.
link |
00:51:08.480
And I was like, oh, that is so cool.
link |
00:51:11.680
That one I vividly remember
link |
00:51:13.720
because it was so outside of my like possible thoughts
link |
00:51:18.640
of what could be done.
link |
00:51:20.040
It's the kind of precision
link |
00:51:21.360
and I mean, what's the most amazing part of a thing like that?
link |
00:51:26.120
I think it was the precision.
link |
00:51:28.240
It was the kind of first time
link |
00:51:31.960
that I had physically seen
link |
00:51:34.880
this robot machine human interface, right?
link |
00:51:39.640
Versus, cause manufacturing had been,
link |
00:51:42.400
you saw those kind of big robots, right?
link |
00:51:44.520
But this was like, oh, this is in a person.
link |
00:51:48.040
There's a person and a robot like in the same space.
link |
00:51:51.400
I'm meeting them in person.
link |
00:51:53.000
Like for me, it was a magical moment
link |
00:51:55.440
that I can't, it was life transforming
link |
00:51:57.900
when I recently met Spot Mini from Boston Dynamics.
link |
00:52:00.560
Oh, see.
link |
00:52:01.400
I don't know why, but on the human robot interaction
link |
00:52:04.680
for some reason I realized how easy it is to anthropomorphize
link |
00:52:09.680
and it was, I don't know, it was almost
link |
00:52:12.580
like falling in love, this feeling of meeting.
link |
00:52:14.700
And I've obviously seen these robots a lot
link |
00:52:17.300
on video and so on, but meeting in person,
link |
00:52:19.180
just having that one on one time is different.
link |
00:52:22.340
So have you had a robot like that in your life
link |
00:52:25.020
that made you maybe fall in love with robotics?
link |
00:52:28.300
Sort of like meeting in person.
link |
00:52:32.140
I mean, I loved robotics since, yeah.
link |
00:52:35.860
So I was a 12 year old.
link |
00:52:37.900
Like I'm gonna be a roboticist, actually was,
link |
00:52:40.020
I called it cybernetics.
link |
00:52:41.180
But so my motivation was Bionic Woman.
link |
00:52:44.700
I don't know if you know that.
link |
00:52:46.260
And so, I mean, that was like a seminal moment,
link |
00:52:49.500
but I didn't meet, like that was TV, right?
link |
00:52:52.340
Like it wasn't like I was in the same space and I met
link |
00:52:54.500
and I was like, oh my gosh, you're like real.
link |
00:52:56.540
Just lingering on Bionic Woman, which by the way,
link |
00:52:58.820
because I read that about you.
link |
00:53:01.100
I watched bits of it and it's just so,
link |
00:53:04.340
no offense, terrible.
link |
00:53:05.520
It's cheesy if you look at it now.
link |
00:53:08.500
It's cheesy, no.
link |
00:53:09.340
I've seen a couple of reruns lately.
link |
00:53:10.900
But it's, but of course at the time it probably
link |
00:53:15.100
captured the imagination.
link |
00:53:16.740
But the sound effects.
link |
00:53:18.100
Especially when you're younger, it just catches you.
link |
00:53:23.100
But which aspect, did you think of it,
link |
00:53:24.720
you mentioned cybernetics, did you think of it as robotics
link |
00:53:27.700
or did you think of it as almost constructing
link |
00:53:30.140
artificial beings?
link |
00:53:31.620
Like, is it the intelligent part that captured
link |
00:53:36.200
your fascination or was it the whole thing?
link |
00:53:38.060
Like even just the limbs and just the.
link |
00:53:39.820
So for me, it would have, in another world,
link |
00:53:42.900
I probably would have been more of a biomedical engineer
link |
00:53:46.820
because what fascinated me was the parts,
link |
00:53:50.040
like the bionic parts, the limbs, those aspects of it.
link |
00:53:55.060
Are you especially drawn to humanoid or humanlike robots?
link |
00:53:59.620
I would say humanlike, not humanoid, right?
link |
00:54:03.060
And when I say humanlike, I think it's this aspect
link |
00:54:05.900
of that interaction, whether it's social
link |
00:54:09.140
and it's like a dog, right?
link |
00:54:10.660
Like that's humanlike because it understands us,
link |
00:54:14.100
it interacts with us at that very social level
link |
00:54:18.500
to, you know, humanoids are part of that,
link |
00:54:21.860
but only if they interact with us as if we are human.
link |
00:54:26.860
Okay, but just to linger on NASA for a little bit,
link |
00:54:30.980
what do you think, maybe if you have other memories,
link |
00:54:34.100
but also what do you think is the future of robots in space?
link |
00:54:38.580
We mentioned HAL, but there's incredible robots
link |
00:54:41.900
that NASA's working on in general thinking about
link |
00:54:44.100
in our, as we venture out, human civilization ventures out
link |
00:54:49.820
into space, what do you think the future of robots is there?
link |
00:54:52.260
Yeah, so I mean, there's the near term.
link |
00:54:53.700
For example, they just announced the rover
link |
00:54:57.300
that's going to the moon, which, you know,
link |
00:55:00.780
that's kind of exciting, but that's like near term.
link |
00:55:06.100
You know, my favorite, favorite, favorite series
link |
00:55:11.180
is Star Trek, right?
link |
00:55:13.340
You know, I really hope, and even Star Trek,
link |
00:55:17.200
like if I calculate the years, I wouldn't be alive,
link |
00:55:20.100
but I would really, really love to be in that world.
link |
00:55:26.700
Like, even if it's just at the beginning,
link |
00:55:28.460
like, you know, like voyage, like adventure one.
link |
00:55:33.180
So basically living in space.
link |
00:55:35.740
Yeah.
link |
00:55:36.580
With, what robots, what are robots?
link |
00:55:39.740
With Data.
link |
00:55:40.580
What role?
link |
00:55:41.400
Data would have to be, even though that wasn't,
link |
00:55:42.820
you know, that was like later, but.
link |
00:55:44.740
So Data is a robot that has humanlike qualities.
link |
00:55:49.160
Right, without the emotion chip.
link |
00:55:50.500
Yeah.
link |
00:55:51.340
You don't like emotion.
link |
00:55:52.220
Well, so Data with the emotion chip
link |
00:55:54.220
was kind of a mess, right?
link |
00:55:58.580
It took a while for that thing to adapt,
link |
00:56:04.660
but, and so why was that an issue?
link |
00:56:08.580
The issue is that emotions make us irrational agents.
link |
00:56:14.240
That's the problem.
link |
00:56:15.240
And yet he could think through things,
link |
00:56:20.040
even if it was based on an emotional scenario, right?
link |
00:56:23.440
Based on pros and cons.
link |
00:56:25.080
But as soon as you made him emotional,
link |
00:56:28.520
one of the metrics he used for evaluation
link |
00:56:31.160
was his own emotions, not people around him, right?
link |
00:56:35.480
Like, and so.
link |
00:56:37.280
We do that as children, right?
link |
00:56:39.000
So we're very egocentric when we're young.
link |
00:56:40.920
We are very egocentric.
link |
00:56:42.320
And so isn't that just an early version of the emotion chip
link |
00:56:45.800
then, I haven't watched much Star Trek.
link |
00:56:48.280
Except I have also met adults, right?
link |
00:56:52.460
And so that is a developmental process.
link |
00:56:54.600
And I'm sure there's a bunch of psychologists
link |
00:56:57.600
that can go through, like you can have a 60 year old adult
link |
00:57:00.640
who has the emotional maturity of a 10 year old, right?
link |
00:57:04.640
And so there's various phases that people should go through
link |
00:57:08.880
in order to evolve and sometimes you don't.
link |
00:57:11.480
So how much psychology do you think,
link |
00:57:14.840
a topic that's rarely mentioned in robotics,
link |
00:57:17.600
but how much does psychology come to play
link |
00:57:19.700
when you're talking about HRI, human robot interaction?
link |
00:57:23.600
When you have to have robots
link |
00:57:25.000
that actually interact with humans.
link |
00:57:26.120
Tons.
link |
00:57:26.960
So we, like my group, as well as I read a lot
link |
00:57:31.360
in the cognitive science literature,
link |
00:57:33.280
as well as the psychology literature.
link |
00:57:36.160
Because they understand a lot about human, human relations
link |
00:57:42.720
and developmental milestones and things like that.
link |
00:57:45.920
And so we tend to look to see what's been done out there.
link |
00:57:53.120
Sometimes what we'll do is we'll try to match that to see,
link |
00:57:56.500
is that human, human relationship the same as human robot?
link |
00:58:00.980
Sometimes it is, and sometimes it's different.
link |
00:58:03.080
And then when it's different, we have to,
link |
00:58:04.740
we try to figure out, okay,
link |
00:58:06.440
why is it different in this scenario?
link |
00:58:09.040
But it's the same in the other scenario, right?
link |
00:58:11.900
And so we try to do that quite a bit.
link |
00:58:15.320
Would you say that's, if we're looking at the future
link |
00:58:17.800
of human robot interaction,
link |
00:58:19.140
would you say the psychology piece is the hardest?
link |
00:58:22.040
Like if, I mean, it's a funny notion for you as,
link |
00:58:25.640
I don't know if you consider, yeah.
link |
00:58:27.360
I mean, one way to ask it,
link |
00:58:28.400
do you consider yourself a roboticist or a psychologist?
link |
00:58:32.000
Oh, I consider myself a roboticist
link |
00:58:33.600
that plays the act of a psychologist.
link |
00:58:36.240
But if you were to look at yourself sort of,
link |
00:58:40.120
20, 30 years from now,
link |
00:58:42.360
do you see yourself more and more
link |
00:58:43.880
wearing the psychology hat?
link |
00:58:47.560
Another way to put it is,
link |
00:58:49.000
are the hard problems in human robot interactions
link |
00:58:51.600
fundamentally psychology, or is it still robotics,
link |
00:58:55.800
the perception manipulation, planning,
link |
00:58:57.720
all that kind of stuff?
link |
00:58:59.460
It's actually neither.
link |
00:59:01.680
The hardest part is the adaptation and the interaction.
link |
00:59:06.120
So it's the interface, it's the learning.
link |
00:59:08.840
And so if I think of,
link |
00:59:11.600
like I've become much more of a roboticist slash AI person
link |
00:59:17.180
than when I, like originally, again,
link |
00:59:19.040
I was about the bionics.
link |
00:59:20.160
I was electrical engineer, I was control theory, right?
link |
00:59:24.040
And then I started realizing that my algorithms
link |
00:59:28.780
needed like human data, right?
link |
00:59:30.600
And so then I was like, okay, what is this human thing?
link |
00:59:32.760
How do I incorporate human data?
link |
00:59:34.360
And then I realized that human perception had,
link |
00:59:38.440
like there was a lot in terms of how we perceive the world.
link |
00:59:41.040
And so trying to figure out
link |
00:59:41.940
how do I model human perception for my,
link |
00:59:44.400
and so I became a HRI person,
link |
00:59:47.600
human robot interaction person,
link |
00:59:49.320
from being a control theory and realizing
link |
00:59:51.760
that humans actually offered quite a bit.
link |
00:59:55.220
And then when you do that,
link |
00:59:56.060
you become more of an artificial intelligence, AI.
link |
00:59:59.280
And so I see myself evolving more in this AI world
link |
01:00:05.680
under the lens of robotics,
link |
01:00:09.560
having hardware, interacting with people.
link |
01:00:12.100
So you're a world class expert researcher in robotics,
link |
01:00:17.840
and yet others, you know, there's a few,
link |
01:00:21.120
it's a small but fierce community of people,
link |
01:00:24.160
but most of them don't take the journey
link |
01:00:26.600
into the H of HRI, into the human.
link |
01:00:29.440
So why did you brave into the interaction with humans?
link |
01:00:34.440
It seems like a really hard problem.
link |
01:00:36.880
It's a hard problem, and it's very risky as an academic.
link |
01:00:41.080
And I knew that when I started down that journey,
link |
01:00:46.200
that it was very risky as an academic
link |
01:00:49.880
in this world that was nuanced, it was just developing.
link |
01:00:53.440
We didn't even have a conference, right, at the time.
link |
01:00:56.720
Because it was the interesting problems.
link |
01:01:00.120
That was what drove me.
link |
01:01:01.560
It was the fact that I looked at what interests me
link |
01:01:06.920
in terms of the application space and the problems.
link |
01:01:10.400
And that pushed me into trying to figure out
link |
01:01:14.900
what people were and what humans were
link |
01:01:16.840
and how to adapt to them.
link |
01:01:19.040
If those problems weren't so interesting,
link |
01:01:21.280
I'd probably still be sending rovers to glaciers, right?
link |
01:01:26.280
But the problems were interesting.
link |
01:01:28.080
And the other thing was that they were hard, right?
link |
01:01:30.600
So it's, I like having to go into a room
link |
01:01:34.560
and being like, I don't know what to do.
link |
01:01:37.000
And then going back and saying, okay,
link |
01:01:38.280
I'm gonna figure this out.
link |
01:01:39.800
I do not, I'm not driven when I go in like,
link |
01:01:42.320
oh, there are no surprises.
link |
01:01:44.040
Like, I don't find that satisfying.
link |
01:01:47.320
If that was the case,
link |
01:01:48.160
I'd go someplace and make a lot more money, right?
link |
01:01:51.020
I think I stay in academia and choose to do this
link |
01:01:55.000
because I can go into a room and like, that's hard.
link |
01:01:58.280
Yeah, I think just from my perspective,
link |
01:02:01.720
maybe you can correct me on it,
link |
01:02:03.200
but if I just look at the field of AI broadly,
link |
01:02:06.720
it seems that human robot interaction has the most,
link |
01:02:12.020
one of the largest numbers of open problems.
link |
01:02:16.540
Like people, especially relative to how many people
link |
01:02:20.280
are willing to acknowledge that there are this,
link |
01:02:23.920
because most people are just afraid of the humans
link |
01:02:26.160
so they don't even acknowledge
link |
01:02:27.240
how many open problems there are.
link |
01:02:28.200
But it's in terms of difficult problems
link |
01:02:30.440
to solve exciting spaces,
link |
01:02:32.400
it seems to be incredible for that.
link |
01:02:35.840
It is, and it's exciting.
link |
01:02:38.680
You've mentioned trust before.
link |
01:02:40.040
What role does trust from interacting with autopilot
link |
01:02:46.860
to in the medical context,
link |
01:02:48.480
what role does trust play in the human robot interactions?
link |
01:02:51.320
So some of the things I study in this domain
link |
01:02:53.920
is not just trust, but it really is over trust.
link |
01:02:56.920
How do you think about over trust?
link |
01:02:58.160
Like what is, first of all, what is trust
link |
01:03:02.280
and what is over trust?
link |
01:03:03.360
Basically, the way I look at it is,
link |
01:03:05.780
trust is not what you click on a survey,
link |
01:03:08.040
trust is about your behavior.
link |
01:03:09.560
So if you interact with the technology
link |
01:03:13.460
based on the decision or the actions of the technology
link |
01:03:17.280
as if you trust that decision, then you're trusting.
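A minimal sketch of that distinction, with a purely hypothetical interaction log: behavioral trust here is just the fraction of the system's recommendations a person actually acts on, as opposed to what they report on a survey.

```python
# Sketch: trust measured from behavior, not survey answers.
# "events" is a hypothetical interaction log; each entry records whether
# the person followed the system's recommendation at that moment.

from dataclasses import dataclass
from typing import List

@dataclass
class InteractionEvent:
    recommendation_followed: bool   # did the person act on the system's suggestion?
    survey_trust_score: float       # what they *said* on a 0-1 trust survey

def behavioral_trust(events: List[InteractionEvent]) -> float:
    """Fraction of recommendations the person actually acted on."""
    if not events:
        return 0.0
    return sum(e.recommendation_followed for e in events) / len(events)

def stated_trust(events: List[InteractionEvent]) -> float:
    """Average of what people clicked on the survey, for comparison."""
    if not events:
        return 0.0
    return sum(e.survey_trust_score for e in events) / len(events)

# Example: people say they don't trust the robot, but follow it anyway.
log = [InteractionEvent(True, 0.2), InteractionEvent(True, 0.3), InteractionEvent(False, 0.2)]
print(behavioral_trust(log))  # ~0.67 -> behavior says "trusting"
print(stated_trust(log))      # ~0.23 -> survey says "not trusting"
```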
link |
01:03:22.360
And even in my group, we've done surveys
link |
01:03:25.560
that ask, do you trust robots?
link |
01:03:28.240
Of course not.
link |
01:03:29.080
Would you follow this robot in a burning building?
link |
01:03:31.640
Of course not.
link |
01:03:32.920
And then you look at their actions and you're like,
link |
01:03:35.480
clearly your behavior does not match what you think
link |
01:03:39.640
or what you think you would like to think.
link |
01:03:42.000
And so I'm really concerned about the behavior
link |
01:03:44.040
because that's really at the end of the day,
link |
01:03:45.800
when you're in the world,
link |
01:03:47.340
that's what will impact others around you.
link |
01:03:50.500
It's not whether before you went onto the street,
link |
01:03:52.920
you clicked on like, I don't trust self driving cars.
link |
01:03:55.640
Yeah, that from an outsider perspective,
link |
01:03:58.680
it's always frustrating to me.
link |
01:04:00.600
Well, I read a lot, so I'm insider
link |
01:04:02.480
in a certain philosophical sense.
link |
01:04:06.040
It's frustrating to me how often trust is used in surveys
link |
01:04:10.680
and how people say, make claims out of any kind of finding
link |
01:04:15.680
they make while somebody's clicking on an answer.
link |
01:04:18.680
You just trust is a, yeah, behavior just,
link |
01:04:23.700
you said it beautifully.
link |
01:04:24.580
I mean, the action, your own behavior is what trust is.
link |
01:04:28.080
I mean, that everything else is not even close.
link |
01:04:30.740
It's almost like absurd comedic poetry
link |
01:04:36.040
that you weave around your actual behavior.
link |
01:04:38.500
So some people can say their trust,
link |
01:04:41.780
you know, I trust my wife, husband or not,
link |
01:04:45.620
whatever, but the actions is what speaks volumes.
link |
01:04:48.260
You bug their car, you probably don't trust them.
link |
01:04:52.260
I trust them, I'm just making sure.
link |
01:04:53.820
No, no, that's, yeah.
link |
01:04:55.620
Like even if you think about cars,
link |
01:04:57.260
I think it's a beautiful case.
link |
01:04:58.580
I came here at some point, I'm sure,
link |
01:05:01.260
on either Uber or Lyft, right?
link |
01:05:03.580
I remember when it first came out, right?
link |
01:05:06.020
I bet if they had had a survey,
link |
01:05:08.020
would you get in the car with a stranger and pay them?
link |
01:05:11.420
Yes.
link |
01:05:12.660
How many people do you think would have said,
link |
01:05:15.300
like, really?
link |
01:05:16.620
Wait, even worse, would you get in the car
link |
01:05:18.660
with a stranger at 1 a.m. in the morning
link |
01:05:21.900
to have them drop you home as a single female?
link |
01:05:24.780
Yeah.
link |
01:05:25.620
Like how many people would say, that's stupid.
link |
01:05:29.280
Yeah.
link |
01:05:30.120
And now look at where we are.
link |
01:05:31.540
I mean, people put kids, right?
link |
01:05:33.940
Like, oh yeah, my child has to go to school
link |
01:05:37.660
and yeah, I'm gonna put my kid in this car with a stranger.
link |
01:05:42.300
I mean, it's just fascinating how, like,
link |
01:05:45.580
what we think we think is not necessarily
link |
01:05:48.260
matching our behavior.
link |
01:05:49.620
Yeah, and certainly with robots, with autonomous vehicles
link |
01:05:52.260
and all the kinds of robots you work with,
link |
01:05:54.620
that's, it's, yeah, it's, the way you answer it,
link |
01:06:00.340
especially if you've never interacted with that robot before,
link |
01:06:04.300
if you haven't had the experience,
link |
01:06:05.620
you being able to respond correctly on a survey is impossible.
link |
01:06:09.540
But what do you, what role does trust play
link |
01:06:12.460
in the interaction, do you think?
link |
01:06:14.220
Like, is it good to, is it good to trust a robot?
link |
01:06:19.380
What does over trust mean?
link |
01:06:21.620
Or is it, is it good to kind of how you feel
link |
01:06:23.980
about autopilot currently, which is like,
link |
01:06:26.460
from a roboticist's perspective, is like,
link |
01:06:29.380
oh, still very cautious?
link |
01:06:31.460
Yeah, so this is still an open area of research,
link |
01:06:34.860
but basically what I would like in a perfect world
link |
01:06:40.700
is that people trust the technology when it's working 100%,
link |
01:06:44.900
and people will be hypersensitive
link |
01:06:47.260
and identify when it's not.
link |
01:06:49.060
But of course we're not there.
link |
01:06:50.940
That's the ideal world.
link |
01:06:53.620
And, but we find is that people swing, right?
link |
01:06:56.460
They tend to swing, which means that if my first,
link |
01:07:01.300
and like, we have some papers,
link |
01:07:02.900
like first impressions is everything, right?
link |
01:07:05.260
If my first instance with technology,
link |
01:07:07.620
with robotics is positive, it mitigates any risk,
link |
01:07:12.700
it correlates with like best outcomes,
link |
01:07:16.860
it means that I'm more likely to either not see it
link |
01:07:21.460
when it makes some mistakes or faults,
link |
01:07:24.180
or I'm more likely to forgive it.
link |
01:07:28.660
And so this is a problem
link |
01:07:30.340
because technology is not 100% accurate, right?
link |
01:07:32.620
It's not 100% accurate, although it may be perfect.
link |
01:07:35.100
How do you get that first moment right, do you think?
link |
01:07:37.700
There's also an education about the capabilities
link |
01:07:40.740
and limitations of the system.
link |
01:07:42.500
Do you have a sense of how do you educate people correctly
link |
01:07:45.740
in that first interaction?
link |
01:07:47.140
Again, this is an open ended problem.
link |
01:07:50.260
So one of the studies that actually has given me some hope
link |
01:07:55.020
that we're trying to figure out how to put into robotics.
link |
01:07:57.660
So there was a research study
link |
01:08:01.300
that showed, for medical AI systems,
link |
01:08:03.460
giving information to radiologists about,
link |
01:08:07.820
here you need to look at these areas on the X ray.
link |
01:08:13.980
What they found was that when the system provided
link |
01:08:18.900
one choice, there was this aspect of either no trust
link |
01:08:25.340
or over trust, right?
link |
01:08:26.860
Like I don't believe it at all,
link |
01:08:29.820
or a yes, yes, yes, yes.
link |
01:08:33.580
And they would miss things, right?
link |
01:08:36.380
Instead, when the system gave them multiple choices,
link |
01:08:40.580
like here are the three, even if it knew like,
link |
01:08:43.260
it had estimated that the top area you need to look at
link |
01:08:45.940
was some place on the X ray.
link |
01:08:49.780
If it gave like one plus others,
link |
01:08:54.060
the trust was maintained and the accuracy of the entire
link |
01:09:00.420
population increased, right?
link |
01:09:03.580
So basically it was a, you're still trusting the system,
link |
01:09:07.500
but you're also putting in a little bit of like,
link |
01:09:09.580
your human expertise, like your human decision processing
link |
01:09:13.660
into the equation.
link |
01:09:15.540
So it helps to mitigate that over trust risk.
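A rough sketch of the presentation choice described in that study, with invented function names and scores: surface the model's top candidates rather than a single answer, so the human's own judgment stays in the loop.

```python
# Sketch of the design choice described above: instead of handing the
# radiologist a single answer, surface the top-k candidate regions and
# leave the final decision to the human. All names here are hypothetical.

from typing import Dict, List, Tuple

Region = str  # e.g. "upper-left lobe"

def rank_regions(scores: Dict[Region, float]) -> List[Tuple[Region, float]]:
    """Sort candidate regions by the model's estimated suspicion score."""
    return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

def single_choice(scores: Dict[Region, float]) -> Region:
    # The "no trust / over trust" failure mode: one answer, take it or leave it.
    return rank_regions(scores)[0][0]

def top_k_choices(scores: Dict[Region, float], k: int = 3) -> List[Region]:
    # The variant the study found worked better: the model's best guess
    # plus alternatives, so the clinician weighs in on the final call.
    return [region for region, _ in rank_regions(scores)[:k]]

scores = {"upper-left lobe": 0.81, "lower-right lobe": 0.42, "hilum": 0.37, "apex": 0.10}
print(single_choice(scores))      # 'upper-left lobe'
print(top_k_choices(scores, 3))   # ['upper-left lobe', 'lower-right lobe', 'hilum']
```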
link |
01:09:18.540
Yeah, so there's a fascinating balance to strike there.
link |
01:09:21.580
We haven't figured it out. Again, in robotics this is still open research.
link |
01:09:24.420
This is an exciting open area of research, exactly.
link |
01:09:26.740
So what are some exciting applications
link |
01:09:28.940
of human robot interaction?
link |
01:09:30.180
You started a company, maybe you can talk about
link |
01:09:33.060
the exciting efforts there, but in general also
link |
01:09:36.740
what other space can robots interact with humans and help?
link |
01:09:41.020
Yeah, so besides healthcare,
link |
01:09:42.340
cause you know, that's my bias lens.
link |
01:09:44.540
My other bias lens is education.
link |
01:09:47.100
I think that, well, one, we definitely,
link |
01:09:51.260
we in the US, you know, we're doing okay with teachers,
link |
01:09:54.780
but there's a lot of school districts
link |
01:09:56.860
that don't have enough teachers.
link |
01:09:58.300
If you think about the teacher student ratio
link |
01:10:01.940
for at least public education in some districts, it's crazy.
link |
01:10:06.700
It's like, how can you have learning in that classroom,
link |
01:10:10.020
right?
link |
01:10:10.860
Because you just don't have the human capital.
link |
01:10:12.980
And so if you think about robotics,
link |
01:10:15.500
bringing that in to classrooms,
link |
01:10:18.460
as well as the afterschool space,
link |
01:10:20.340
where they offset some of this lack of resources
link |
01:10:25.100
in certain communities, I think that's a good place.
link |
01:10:28.460
And then turning on the other end
link |
01:10:30.900
is using these systems then for workforce retraining
link |
01:10:35.260
and dealing with some of the things
link |
01:10:38.940
that are going to come later on out of job loss,
link |
01:10:43.020
like thinking about robots and AI systems
link |
01:10:45.900
for retraining and workforce development.
link |
01:10:48.340
I think that's exciting areas that can be pushed even more,
link |
01:10:53.220
and it would have a huge, huge impact.
link |
01:10:56.780
What would you say are some of the open problems
link |
01:10:59.620
in education, sort of, it's exciting.
link |
01:11:03.220
So young kids and the older folks
link |
01:11:08.740
or just folks of all ages who need to be retrained,
link |
01:11:12.580
who need to sort of open themselves up
link |
01:11:14.260
to a whole nother area of work.
link |
01:11:17.700
What are the problems to be solved there?
link |
01:11:20.060
How do you think robots can help?
link |
01:11:22.460
We have the engagement aspect, right?
link |
01:11:24.820
So we can figure out the engagement.
link |
01:11:26.460
That's not a...
link |
01:11:27.300
What do you mean by engagement?
link |
01:11:28.900
So identifying whether a person is focused,
link |
01:11:34.940
is like that we can figure out.
link |
01:11:38.740
What we can figure out and there's some positive results
link |
01:11:43.900
in this is that personalized adaptation
link |
01:11:47.180
based on any concepts, right?
link |
01:11:49.660
So imagine I think about, I have an agent
link |
01:11:54.580
and I'm working with a kid learning, I don't know,
link |
01:11:59.620
algebra two, can that same agent then switch
link |
01:12:03.820
and teach some type of new coding skill
link |
01:12:07.980
to a displaced mechanic?
link |
01:12:11.420
Like, what does that actually look like, right?
link |
01:12:14.500
Like hardware might be the same, content is different,
link |
01:12:19.540
two different target demographics of engagement.
link |
01:12:22.700
Like how do you do that?
link |
01:12:24.580
How important do you think personalization
link |
01:12:26.820
is in human robot interaction?
link |
01:12:28.580
And not just a mechanic or student,
link |
01:12:31.980
but like literally to the individual human being.
link |
01:12:35.340
I think personalization is really important,
link |
01:12:37.540
but a caveat is that I think we'd be okay
link |
01:12:42.140
if we can personalize to the group, right?
link |
01:12:44.700
And so if I can label you
link |
01:12:49.700
as along some certain dimensions,
link |
01:12:52.780
then even though it may not be you specifically,
link |
01:12:56.500
I can put you in this group.
link |
01:12:58.220
So the sample size, this is how they best learn,
link |
01:13:00.500
this is how they best engage.
link |
01:13:03.220
Even at that level, it's really important.
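A small sketch of group-level personalization under made-up dimensions and group profiles: place the learner along a few features, assign the nearest group, and apply that group's known strategy.

```python
# Sketch of "personalize to the group": place a learner along a few
# dimensions, assign them to the nearest predefined group, and use that
# group's known-good teaching strategy. Groups and features are made up.

from math import dist
from typing import Dict, List

# Hypothetical group profiles: (pace preference, visual-vs-verbal, prior knowledge)
GROUP_CENTROIDS: Dict[str, List[float]] = {
    "fast_visual":   [0.9, 0.8, 0.7],
    "steady_verbal": [0.4, 0.2, 0.5],
    "needs_review":  [0.3, 0.5, 0.2],
}

GROUP_STRATEGY = {
    "fast_visual":   "short video demos, skip drills",
    "steady_verbal": "worked text examples, regular pace",
    "needs_review":  "revisit prerequisites before new material",
}

def assign_group(learner_features: List[float]) -> str:
    """Nearest-centroid assignment along the chosen dimensions."""
    return min(GROUP_CENTROIDS, key=lambda g: dist(GROUP_CENTROIDS[g], learner_features))

learner = [0.35, 0.45, 0.25]
group = assign_group(learner)
print(group, "->", GROUP_STRATEGY[group])   # needs_review -> revisit prerequisites ...
```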
link |
01:13:06.780
And it's because, I mean, it's one of the reasons
link |
01:13:09.620
why educating in large classrooms is so hard, right?
link |
01:13:13.340
You teach to the median,
link |
01:13:15.980
but there's these individuals that are struggling
link |
01:13:19.780
and then you have highly intelligent individuals
link |
01:13:22.340
and those are the ones that are usually kind of left out.
link |
01:13:26.340
So highly intelligent individuals may be disruptive
link |
01:13:28.900
and those who are struggling might be disruptive
link |
01:13:30.860
because they're both bored.
link |
01:13:32.980
Yeah, and if you narrow the definition of the group
link |
01:13:35.580
or in the size of the group enough,
link |
01:13:37.900
you'll be able to address their individual,
link |
01:13:40.380
it's not individual needs, but really the most important
link |
01:13:44.580
group needs, right?
link |
01:13:45.980
And that's kind of what a lot of successful
link |
01:13:47.780
recommender systems do with Spotify and so on.
link |
01:13:50.980
So it's sad to believe, but as a music listener,
link |
01:13:53.820
probably in some sort of large group,
link |
01:13:55.940
it's very sadly predictable.
link |
01:13:58.300
You have been labeled.
link |
01:13:59.260
Yeah, I've been labeled and successfully so
link |
01:14:02.100
because they're able to recommend stuff that I like.
link |
01:14:04.820
Yeah, but applying that to education, right?
link |
01:14:07.740
There's no reason why it can't be done.
link |
01:14:09.780
Do you have a hope for our education system?
link |
01:14:13.060
I have more hope for workforce development.
link |
01:14:16.180
And that's because I'm seeing investments.
link |
01:14:19.660
Even if you look at VC investments in education,
link |
01:14:23.300
the majority of it has lately been going
link |
01:14:26.140
to workforce retraining, right?
link |
01:14:28.540
And so I think that government investments is increasing.
link |
01:14:32.860
There's like a claim and some of it's based on fear, right?
link |
01:14:36.060
Like AI is gonna come and take over all these jobs.
link |
01:14:37.980
What are we gonna do with all these unpaid taxes
link |
01:14:41.500
that aren't coming to us from our citizens?
link |
01:14:44.340
And so I think I'm more hopeful for that.
link |
01:14:48.060
Not so hopeful for early education
link |
01:14:51.780
because it's still a who's gonna pay for it.
link |
01:14:56.380
And you won't see the results for like 16 to 18 years.
link |
01:15:01.380
It's hard for people to wrap their heads around that.
link |
01:15:07.180
But on the retraining part, what are your thoughts?
link |
01:15:10.580
There's a candidate, Andrew Yang running for president
link |
01:15:13.860
and saying that sort of AI, automation, robots.
link |
01:15:18.940
Universal basic income.
link |
01:15:20.940
Universal basic income in order to support us
link |
01:15:23.900
as we kind of automation takes people's jobs
link |
01:15:26.740
and allows you to explore and find other means.
link |
01:15:30.180
Like do you have a concern of society
link |
01:15:35.660
transforming effects of automation and robots and so on?
link |
01:15:40.500
I do.
link |
01:15:41.340
I do know that AI robotics will displace workers.
link |
01:15:46.180
Like we do know that.
link |
01:15:47.980
But there'll be other workers
link |
01:15:49.500
and there will be new jobs defined.
link |
01:15:54.980
What I worry about is, that's not what I worry about.
link |
01:15:57.460
Like will all the jobs go away?
link |
01:15:59.500
What I worry about is the type of jobs that will come out.
link |
01:16:02.460
Like people who graduate from Georgia Tech will be okay.
link |
01:16:06.340
We give them the skills,
link |
01:16:07.660
they will adapt even if their current job goes away.
link |
01:16:10.660
I do worry about those
link |
01:16:12.620
that don't have that quality of an education.
link |
01:16:15.460
Will they have the ability,
link |
01:16:18.300
the background to adapt to those new jobs?
link |
01:16:21.700
That I don't know.
link |
01:16:22.980
That I worry about,
link |
01:16:24.220
which will create even more polarization
link |
01:16:27.220
in our society, internationally and everywhere.
link |
01:16:31.220
I worry about that.
link |
01:16:32.940
I also worry about not having equal access
link |
01:16:36.820
to all these wonderful things that AI can do
link |
01:16:39.540
and robotics can do.
link |
01:16:41.100
I worry about that.
link |
01:16:43.620
People like me from Georgia Tech from say MIT
link |
01:16:48.860
will be okay, right?
link |
01:16:50.340
But that's such a small part of the population
link |
01:16:53.340
that we need to think much more globally
link |
01:16:55.940
of having access to the beautiful things,
link |
01:16:58.500
whether it's AI in healthcare, AI in education,
link |
01:17:01.580
AI in politics, right?
link |
01:17:05.140
I worry about that.
link |
01:17:05.980
And that's part of the thing that you were talking about
link |
01:17:08.140
is people that build the technology
link |
01:17:09.660
have to be thinking about ethics,
link |
01:17:12.420
have to be thinking about access and all those things.
link |
01:17:15.220
And not just a small subset.
link |
01:17:17.900
Let me ask some philosophical,
link |
01:17:20.300
slightly romantic questions.
link |
01:17:22.460
People that listen to this will be like,
link |
01:17:24.900
here he goes again.
link |
01:17:26.180
Okay, do you think one day we'll build an AI system
link |
01:17:31.940
that a person can fall in love with
link |
01:17:35.500
and it would love them back?
link |
01:17:37.900
Like in the movie, Her, for example.
link |
01:17:39.780
Yeah, although she kind of didn't fall in love with him
link |
01:17:43.260
or she fell in love with like a million other people,
link |
01:17:45.500
something like that.
link |
01:17:47.060
You're the jealous type, I see.
link |
01:17:48.460
We humans are the jealous type.
link |
01:17:50.820
Yes, so I do believe that we can design systems
link |
01:17:55.060
where people would fall in love with their robot,
link |
01:17:59.420
with their AI partner.
link |
01:18:03.220
That I do believe.
link |
01:18:05.100
Because it's actually,
link |
01:18:06.300
and I don't like to use the word manipulate,
link |
01:18:08.900
but as we see, there are certain individuals
link |
01:18:12.300
that can be manipulated
link |
01:18:13.340
if you understand the cognitive science about it, right?
link |
01:18:16.260
Right, so I mean, if you could think of all close
link |
01:18:19.620
relationship and love in general
link |
01:18:21.380
as a kind of mutual manipulation,
link |
01:18:24.700
that dance, the human dance.
link |
01:18:27.100
I mean, manipulation is a negative connotation.
link |
01:18:30.180
And that's why I don't like to use that word particularly.
link |
01:18:32.820
I guess another way to phrase it is,
link |
01:18:34.220
you're getting at is it could be algorithmatized
link |
01:18:36.900
or something, it could be a.
link |
01:18:38.380
The relationship building part can be.
link |
01:18:40.620
I mean, just think about it.
link |
01:18:41.820
We have, and I don't use dating sites,
link |
01:18:44.780
but from what I heard, there are some individuals
link |
01:18:48.940
that have been dating who have never seen each other, right?
link |
01:18:52.780
In fact, there's a show I think
link |
01:18:54.100
that tries to like weed out fake people.
link |
01:18:57.540
Like there's a show that comes out, right?
link |
01:18:59.460
Because like people start faking.
link |
01:19:01.940
Like, what's the difference of that person
link |
01:19:05.140
on the other end being an AI agent, right?
link |
01:19:08.020
And having a communication
link |
01:19:09.340
and you building a relationship remotely,
link |
01:19:12.180
like there's no reason why that can't happen.
link |
01:19:15.660
In terms of human robot interaction,
link |
01:19:17.620
so what role, you've kind of mentioned
link |
01:19:19.660
with data emotion being, can be problematic
link |
01:19:23.940
if not implemented well, I suppose.
link |
01:19:26.220
What role does emotion and some other human like things,
link |
01:19:30.500
the imperfect things come into play here
link |
01:19:32.820
for good human robot interaction and something like love?
link |
01:19:37.300
Yeah, so in this case, and you had asked,
link |
01:19:39.780
can an AI agent love a human back?
link |
01:19:43.700
I think they can emulate love back, right?
link |
01:19:47.340
And so what does that actually mean?
link |
01:19:48.980
It just means that if you think about their programming,
link |
01:19:52.260
they might put the other person's needs
link |
01:19:55.220
in front of theirs in certain situations, right?
link |
01:19:57.980
You look at, think about it as a return on investment.
link |
01:20:00.380
Like, what's my return on investment?
link |
01:20:01.740
As part of that equation, that person's happiness,
link |
01:20:04.540
has some type of algorithmic weighting to it.
link |
01:20:07.940
And the reason why is because I care about them, right?
link |
01:20:11.380
That's the only reason, right?
link |
01:20:13.700
But if I care about them and I show that,
link |
01:20:15.540
then my final objective function
link |
01:20:18.300
is length of time of the engagement, right?
link |
01:20:20.580
So you can think of how to do this actually quite easily.
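A toy sketch of the kind of objective function described here, with invented weights and terms: the partner's happiness is weighted into the agent's per-step return, and the long-run objective rewards the length of the engagement.

```python
# Toy illustration of the objective sketched above: the agent's "caring"
# is a weight on the other person's happiness inside its own return, and
# the long-run objective is the length of the engagement. Purely a sketch;
# the weights and terms are invented for illustration.

def step_return(agent_benefit: float,
                partner_happiness: float,
                care_weight: float = 0.7) -> float:
    """Per-interaction return: the partner's happiness is weighted in."""
    return (1.0 - care_weight) * agent_benefit + care_weight * partner_happiness

def engagement_objective(history: list, discount: float = 0.99) -> float:
    """Discounted sum over the interaction history; longer (and happier)
    engagements score higher, which is the 'final objective' described."""
    return sum((discount ** t) * step_return(a, p) for t, (a, p) in enumerate(history))

# Two hypothetical relationships: short vs. long, with a happier partner in the second.
short = [(0.5, 0.4)] * 3
long_ = [(0.4, 0.8)] * 50
print(engagement_objective(short) < engagement_objective(long_))  # True
```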
link |
01:20:24.020
And so.
link |
01:20:24.860
But that's not love?
link |
01:20:27.420
Well, so that's the thing.
link |
01:20:29.940
I think it emulates love
link |
01:20:32.580
because we don't have a classical definition of love.
link |
01:20:38.540
Right, but, and we don't have the ability
link |
01:20:41.660
to look into each other's minds to see the algorithm.
link |
01:20:45.500
And I mean, I guess what I'm getting at is,
link |
01:20:48.740
is it possible that, especially if that's learned,
link |
01:20:51.020
especially if there's some mystery
link |
01:20:52.580
and black box nature to the system,
link |
01:20:55.220
how is that, you know?
link |
01:20:57.660
How is it any different?
link |
01:20:58.580
How is it any different in terms of sort of
link |
01:21:00.660
if the system says, I'm conscious, I'm afraid of death,
link |
01:21:05.180
and it does indicate that it loves you.
link |
01:21:10.860
Another way to sort of phrase it,
link |
01:21:12.780
be curious to see what you think.
link |
01:21:14.180
Do you think there'll be a time
link |
01:21:16.700
when robots should have rights?
link |
01:21:20.140
You've kind of phrased the robot in a very roboticist way
link |
01:21:23.420
and just a really good way, but saying, okay,
link |
01:21:25.940
well, there's an objective function
link |
01:21:27.940
and I could see how you can create
link |
01:21:30.620
a compelling human robot interaction experience
link |
01:21:33.380
that makes you believe that the robot cares for your needs
link |
01:21:36.300
and even something like loves you.
link |
01:21:38.940
But what if the robot says, please don't turn me off?
link |
01:21:43.740
What if the robot starts making you feel
link |
01:21:46.460
like there's an entity, a being, a soul there, right?
link |
01:21:50.060
Do you think there'll be a future,
link |
01:21:53.420
hopefully you won't laugh too much at this,
link |
01:21:55.700
but where they do ask for rights?
link |
01:22:00.020
So I can see a future
link |
01:22:03.980
if we don't address it in the near term
link |
01:22:08.500
where these agents, as they adapt and learn,
link |
01:22:11.820
could say, hey, this should be something that's fundamental.
link |
01:22:15.820
I hopefully think that we would address it
link |
01:22:18.860
before it gets to that point.
link |
01:22:20.580
So you think that's a bad future?
link |
01:22:22.140
Is that a negative thing where they ask
link |
01:22:25.340
we're being discriminated against?
link |
01:22:27.740
I guess it depends on what role
link |
01:22:31.100
have they attained at that point, right?
link |
01:22:34.340
And so if I think about now.
link |
01:22:35.820
Careful what you say because the robots 50 years from now
link |
01:22:39.220
will be listening to this and you'll be on TV saying,
link |
01:22:42.180
this is what roboticists used to believe.
link |
01:22:44.420
Well, right?
link |
01:22:45.260
And so this is my, and as I said, I have a bias lens
link |
01:22:48.700
and my robot friends will understand that.
link |
01:22:52.780
So if you think about it, and I actually put this
link |
01:22:55.180
in kind of the, as a roboticist,
link |
01:22:59.660
you don't necessarily think of robots as human
link |
01:23:02.500
with human rights, but you could think of them
link |
01:23:05.020
either in the category of property,
link |
01:23:09.180
or you can think of them in the category of animals, right?
link |
01:23:14.340
And so both of those have different types of rights.
link |
01:23:18.340
So animals have their own rights as a living being,
link |
01:23:22.740
but they can't vote, they can't write,
link |
01:23:25.060
they can be euthanized, but as humans,
link |
01:23:29.700
if we abuse them, we go to jail, right?
link |
01:23:32.980
So they do have some rights that protect them,
link |
01:23:35.980
but don't give them the rights of like citizenship.
link |
01:23:40.140
And then if you think about property,
link |
01:23:42.260
property, the rights are associated with the person, right?
link |
01:23:45.700
So if someone vandalizes your property
link |
01:23:49.500
or steals your property, like there are some rights,
link |
01:23:53.820
but it's associated with the person who owns that.
link |
01:23:58.660
If you think about it back in the day,
link |
01:24:01.500
and if you remember, we talked about
link |
01:24:03.380
how society has changed, women were property, right?
link |
01:24:08.180
They were not thought of as having rights.
link |
01:24:11.860
They were thought of as property of, like their...
link |
01:24:15.740
Yeah, assaulting a woman meant
link |
01:24:17.620
assaulting the property of somebody else.
link |
01:24:20.060
Exactly, and so what I envision is,
link |
01:24:22.900
is that we will establish some type of norm at some point,
link |
01:24:27.820
but that it might evolve, right?
link |
01:24:29.580
Like if you look at women's rights now,
link |
01:24:31.460
like there are still some countries that don't have,
link |
01:24:35.380
and the rest of the world is like,
link |
01:24:36.700
why that makes no sense, right?
link |
01:24:39.260
And so I do see a world where we do establish
link |
01:24:42.100
some type of grounding.
link |
01:24:44.140
It might be based on property rights,
link |
01:24:45.700
it might be based on animal rights.
link |
01:24:47.620
And if it evolves that way,
link |
01:24:50.700
I think we will have this conversation at that time,
link |
01:24:54.460
because that's the way our society traditionally has evolved.
link |
01:24:58.500
Beautifully put, just out of curiosity,
link |
01:25:01.860
Anki, Jibo, Mayfield Robotics,
link |
01:25:05.460
with their robot Kuri, SciFiWorks, Rethink Robotics,
link |
01:25:08.380
were all these amazing robotics companies
link |
01:25:10.580
led, created by incredible roboticists,
link |
01:25:14.300
and they've all gone out of business recently.
link |
01:25:19.580
Why do you think they didn't last long?
link |
01:25:21.660
Why is it so hard to run a robotics company,
link |
01:25:25.140
especially one like these, which are fundamentally
link |
01:25:29.300
HRI human robot interaction robots?
link |
01:25:34.380
Or personal robots?
link |
01:25:35.700
Each one has a story,
link |
01:25:37.100
only one of them I don't understand, and that was Anki.
link |
01:25:41.180
That's actually the only one I don't understand.
link |
01:25:43.340
I don't understand it either.
link |
01:25:44.660
No, no, I mean, I look like from the outside,
link |
01:25:47.020
I've looked at their sheets, I've looked at the data that's.
link |
01:25:50.500
Oh, you mean like business wise,
link |
01:25:51.740
you don't understand, I got you.
link |
01:25:52.900
Yeah.
link |
01:25:53.740
Yeah, and like I look at all, I look at that data,
link |
01:25:59.180
and I'm like, they seem to have like product market fit.
link |
01:26:02.660
Like, so that's the only one I don't understand.
link |
01:26:05.660
The rest of it was product market fit.
link |
01:26:08.260
What's product market fit?
link |
01:26:09.860
Just that of, like how do you think about it?
link |
01:26:11.940
Yeah, so although Rethink Robotics was getting there, right?
link |
01:26:15.620
But I think it's just the timing,
link |
01:26:17.420
it just, their clock just timed out.
link |
01:26:20.340
I think if they'd been given a couple more years,
link |
01:26:23.100
they would have been okay.
link |
01:26:25.060
But the other ones were still fairly early
link |
01:26:28.620
by the time they got into the market.
link |
01:26:30.100
And so product market fit is,
link |
01:26:32.740
I have a product that I wanna sell at a certain price.
link |
01:26:37.140
Are there enough people out there, the market,
link |
01:26:40.060
that are willing to buy the product at that market price
link |
01:26:42.780
for me to be a functional viable profit bearing company?
link |
01:26:47.820
Right?
link |
01:26:48.940
So product market fit.
link |
01:26:50.420
If it costs you a thousand dollars
link |
01:26:53.300
and everyone wants it but is only willing to pay a dollar,
link |
01:26:57.340
you have no product market fit.
link |
01:26:59.260
Even if you could, you know,
link |
01:27:01.940
sell a ton of them at a dollar, 'cause you can't.
link |
01:27:03.660
So how hard is it for robots?
link |
01:27:05.380
Sort of maybe if you look at iRobot,
link |
01:27:07.580
the company that makes Roombas, vacuum cleaners,
link |
01:27:10.740
can you comment on, did they find the right product
link |
01:27:14.100
market fit?
link |
01:27:15.940
Like, are people willing to pay for robots
link |
01:27:18.540
is also another kind of question underlying all this.
link |
01:27:20.540
So if you think about iRobot and their story, right?
link |
01:27:23.700
Like when they first, they had enough of a runway, right?
link |
01:27:28.660
When they first started,
link |
01:27:29.780
they weren't doing vacuum cleaners, right?
link |
01:27:31.340
They were contracts primarily, government contracts,
link |
01:27:36.540
designing robots.
link |
01:27:37.380
Or military robots.
link |
01:27:38.220
Yeah, I mean, that's what they were.
link |
01:27:39.380
That's how they started, right?
link |
01:27:40.820
And then.
link |
01:27:41.660
They still do a lot of incredible work there.
link |
01:27:42.740
But yeah, that was the initial thing
link |
01:27:44.660
that gave them enough funding to.
link |
01:27:46.620
To then try to, the vacuum cleaner is what I've been told
link |
01:27:50.740
was not like their first rendezvous
link |
01:27:53.900
in terms of designing a product, right?
link |
01:27:56.500
And so they were able to survive
link |
01:27:59.300
until they got to the point
link |
01:28:00.620
that they found a product price market fit, right?
link |
01:28:05.540
And even with, if you look at the Roomba,
link |
01:28:09.100
the price point now is different
link |
01:28:10.540
than when it was first released, right?
link |
01:28:12.260
It was an early adopter price,
link |
01:28:13.460
but they found enough people
link |
01:28:14.540
who were willing to fund it.
link |
01:28:16.700
And I mean, I forgot what their loss profile was
link |
01:28:20.340
for the first couple of years,
link |
01:28:22.180
but they became profitable in sufficient time
link |
01:28:25.860
that they didn't have to close their doors.
link |
01:28:28.140
So they found the right,
link |
01:28:29.140
there's still people willing to pay
link |
01:28:31.860
a large amount of money,
link |
01:28:32.700
so over $1,000 for a vacuum cleaner.
link |
01:28:35.940
Unfortunately for them,
link |
01:28:37.780
now that they've proved everything out,
link |
01:28:39.180
figured it all out,
link |
01:28:40.020
now there's competitors.
link |
01:28:40.860
Yeah, and so that's the next thing, right?
link |
01:28:43.500
The competition,
link |
01:28:44.660
and they have quite a number, even internationally.
link |
01:28:47.500
Like there's some products out there,
link |
01:28:50.180
you can go to Europe and be like,
link |
01:28:52.420
oh, I didn't even know this one existed.
link |
01:28:55.020
So this is the thing though,
link |
01:28:56.780
like with any market,
link |
01:28:59.300
I would, this is not a bad time,
link |
01:29:03.580
although as a roboticist, it's kind of depressing,
link |
01:29:06.500
but I actually think about things like with,
link |
01:29:11.340
I would say that all of the companies
link |
01:29:13.060
that are now in the top five or six,
link |
01:29:15.780
they weren't the first to the stage, right?
link |
01:29:19.620
Like Google was not the first search engine,
link |
01:29:22.780
sorry, AltaVista, right?
link |
01:29:24.780
Facebook was not the first, sorry, MySpace, right?
link |
01:29:28.340
Like think about it,
link |
01:29:29.180
they were not the first players.
link |
01:29:31.100
Those first players,
link |
01:29:32.980
like they're not in the top five, 10 of Fortune 500 companies,
link |
01:29:38.580
right?
link |
01:29:39.420
They proved, they started to prove out the market,
link |
01:29:43.940
they started to get people interested,
link |
01:29:46.340
they started the buzz,
link |
01:29:48.300
but they didn't make it to that next level.
link |
01:29:50.060
But the second batch, right?
link |
01:29:52.300
The second batch, I think might make it to the next level.
link |
01:29:57.540
When do you think the Facebook of robotics?
link |
01:30:02.380
The Facebook of robotics.
link |
01:30:04.740
Sorry, I take that phrase back because people deeply,
link |
01:30:08.500
for some reason, well, I know why,
link |
01:30:10.340
but it's, I think, exaggerated, distrust Facebook
link |
01:30:13.700
because of the privacy concerns and so on.
link |
01:30:15.500
And with robotics, one of the things you have to make sure
link |
01:30:18.420
is all the things we talked about is to be transparent
link |
01:30:21.340
and have people deeply trust you
link |
01:30:22.980
to let a robot into their lives, into their home.
link |
01:30:25.780
When do you think the second batch of robots will come?
link |
01:30:28.620
Is it five, 10 years, 20 years
link |
01:30:32.140
that we'll have robots in our homes
link |
01:30:34.700
and robots in our hearts?
link |
01:30:36.540
So if I think about, and because I try to follow
link |
01:30:38.900
the VC kind of space in terms of robotic investments,
link |
01:30:43.180
and right now, and I don't know
link |
01:30:44.900
if they're gonna be successful,
link |
01:30:45.900
I don't know if this is the second batch,
link |
01:30:49.220
but there's only one batch that's focused
link |
01:30:50.980
on like the first batch, right?
link |
01:30:52.900
And then there's all these self-driving Xs, right?
link |
01:30:56.260
And so I don't know if they're a first batch of something
link |
01:30:59.540
or if like, I don't know quite where they fit in,
link |
01:31:03.060
but there's a number of companies,
link |
01:31:05.540
the co-robot, I call them co-robots
link |
01:31:08.500
that are still getting VC investments.
link |
01:31:13.060
Some of them have some of the flavor
link |
01:31:14.500
of like Rethink Robotics.
link |
01:31:15.740
Some of them have some of the flavor of like Kuri.
link |
01:31:18.980
What's a co-robot?
link |
01:31:20.740
So basically a robot and human working in the same space.
link |
01:31:26.060
So some of the companies are focused on manufacturing.
link |
01:31:30.500
So having a robot and human working together
link |
01:31:34.220
in a factory, some of these co-robots
link |
01:31:37.580
are robots and humans working in the home,
link |
01:31:41.220
working in clinics, like there's different versions
link |
01:31:43.180
of these companies in terms of their products,
link |
01:31:45.380
but they're all, so Rethink Robotics would be
link |
01:31:48.660
like one of the first, at least well known companies
link |
01:31:52.660
focused on this space.
link |
01:31:54.580
So I don't know if this is a second batch
link |
01:31:56.700
or if this is still part of the first batch,
link |
01:32:00.940
that I don't know.
link |
01:32:01.980
And then you have all these other companies
link |
01:32:03.740
in this self-driving space.
link |
01:32:06.860
And I don't know if that's a first batch
link |
01:32:09.380
or again, a second batch.
link |
01:32:11.140
Yeah.
link |
01:32:11.980
So there's a lot of mystery about this now.
link |
01:32:13.860
Of course, it's hard to say that this is the second batch
link |
01:32:16.380
until it proves out, right?
link |
01:32:18.460
Correct.
link |
01:32:19.300
Yeah, we need a unicorn.
link |
01:32:20.540
Yeah, exactly.
link |
01:32:23.460
Why do you think people are so afraid,
link |
01:32:27.700
at least in popular culture of legged robots
link |
01:32:30.460
like those worked on at Boston Dynamics
link |
01:32:32.340
or just robotics in general,
link |
01:32:34.140
if you were to psychoanalyze that fear,
link |
01:32:36.300
what do you make of it?
link |
01:32:37.980
And should they be afraid, sorry?
link |
01:32:39.780
So should people be afraid?
link |
01:32:41.420
I don't think people should be afraid.
link |
01:32:43.860
But with a caveat, I don't think people should be afraid
link |
01:32:47.060
given that most of us in this world
link |
01:32:51.460
understand that we need to change something, right?
link |
01:32:55.660
So given that.
link |
01:32:58.100
Now, if things don't change, be very afraid.
link |
01:33:01.500
Which is the dimension of change that's needed?
link |
01:33:04.380
So changing, thinking about the ramifications,
link |
01:33:07.740
thinking about like the ethics,
link |
01:33:09.420
thinking about, like, the conversation that's going on, right?
link |
01:33:12.740
It's no longer a we're gonna deploy it
link |
01:33:15.860
and forget that this is a car that can kill pedestrians
link |
01:33:20.300
that are walking across the street, right?
link |
01:33:22.500
We're not in that stage.
link |
01:33:23.340
We're putting these out on the roads.
link |
01:33:25.820
There are people out there.
link |
01:33:27.500
A car could be a weapon.
link |
01:33:28.700
Like people are now, solutions aren't there yet,
link |
01:33:33.140
but people are thinking about this
link |
01:33:35.300
as we need to be ethically responsible
link |
01:33:38.460
as we send these systems out,
link |
01:33:40.820
robotics, medical, self-driving.
link |
01:33:43.060
And military too.
link |
01:33:43.940
And military.
link |
01:33:45.260
Which is not as often talked about,
link |
01:33:46.980
but it's really where probably these robots
link |
01:33:50.260
will have a significant impact as well.
link |
01:33:51.900
Correct, correct.
link |
01:33:52.820
Right, making sure that they can think rationally,
link |
01:33:57.340
even having the conversations,
link |
01:33:58.700
who should pull the trigger, right?
link |
01:34:01.260
But overall you're saying if we start to think
link |
01:34:03.380
more and more as a community about these ethical issues,
link |
01:34:05.740
people should not be afraid.
link |
01:34:06.980
Yeah, I don't think people should be afraid.
link |
01:34:08.660
I think that the return on investment,
link |
01:34:10.540
the impact, positive impact will outweigh
link |
01:34:14.060
any of the potentially negative impacts.
link |
01:34:17.500
Do you have worries of existential threats
link |
01:34:20.540
of robots or AI that some people kind of talk about
link |
01:34:25.540
and romanticize about in the next decade,
link |
01:34:28.620
the next few decades?
link |
01:34:29.980
No, I don't.
link |
01:34:31.340
Singularity would be an example.
link |
01:34:33.700
So my concept is that, so remember,
link |
01:34:36.380
robots, AI, are designed by people.
link |
01:34:39.580
It has our values.
link |
01:34:41.260
And I always correlate this with a parent and a child.
link |
01:34:45.060
So think about it, as a parent, what do we want?
link |
01:34:47.100
We want our kids to have a better life than us.
link |
01:34:49.860
We want them to expand.
link |
01:34:52.300
We want them to experience the world.
link |
01:34:55.780
And then as we grow older, our kids think and know
link |
01:34:59.740
they're smarter and better and more intelligent
link |
01:35:03.020
and have better opportunities.
link |
01:35:04.780
And they may even stop listening to us.
link |
01:35:08.220
They don't go out and then kill us, right?
link |
01:35:10.500
Like, think about it.
link |
01:35:11.340
It's because we've instilled in them values.
link |
01:35:14.180
We instilled in them this whole aspect of community.
link |
01:35:17.420
And yes, even though you're maybe smarter
link |
01:35:19.780
and have more money and dah, dah, dah,
link |
01:35:22.460
it's still about this love, caring relationship.
link |
01:35:26.780
And so that's what I believe.
link |
01:35:27.740
So even if like, you know,
link |
01:35:29.020
we've created the singularity in some archaic system
link |
01:35:32.140
back in like 1980 that suddenly evolves,
link |
01:35:35.340
the fact is it might say, I am smarter, I am sentient.
link |
01:35:40.180
These humans are really stupid,
link |
01:35:43.220
but I think it'll be like, yeah,
link |
01:35:46.060
but I just can't destroy them.
link |
01:35:47.620
Yeah, for sentimental value.
link |
01:35:49.660
It still has to come back for Thanksgiving dinner
link |
01:35:53.140
every once in a while.
link |
01:35:53.980
Exactly.
link |
01:35:54.820
That's such, that's so beautifully put.
link |
01:35:57.460
You've also said that The Matrix may be
link |
01:36:00.580
one of your more favorite AI-related movies.
link |
01:36:03.660
Can you elaborate why?
link |
01:36:05.580
Yeah, it is one of my favorite movies.
link |
01:36:07.860
And it's because it represents
link |
01:36:11.180
kind of all the things I think about.
link |
01:36:14.060
So there's a symbiotic relationship
link |
01:36:16.100
between robots and humans, right?
link |
01:36:20.140
That symbiotic relationship is that they don't destroy us,
link |
01:36:22.500
they enslave us, right?
link |
01:36:24.620
But think about it,
link |
01:36:28.060
even though they enslaved us,
link |
01:36:30.260
they needed us to be happy, right?
link |
01:36:32.820
And in order to be happy,
link |
01:36:33.860
they had to create this cruddy world
link |
01:36:35.420
that they then had to live in, right?
link |
01:36:36.980
That's the whole premise.
link |
01:36:39.460
But then there were humans that had a choice, right?
link |
01:36:44.380
Like you had a choice to stay in this horrific,
link |
01:36:47.660
horrific world where it was your fantasy life
link |
01:36:51.220
with all of the anomalies, perfection, but not accurate.
link |
01:36:54.740
Or you can choose to be on your own
link |
01:36:57.940
and like have maybe no food for a couple of days,
link |
01:37:02.500
but you were totally autonomous.
link |
01:37:05.180
And so I think of that as, and that's why.
link |
01:37:07.980
So it's not necessarily us being enslaved,
link |
01:37:09.700
but I think about us having the symbiotic relationship.
link |
01:37:13.060
Robots and AI, even if they become sentient,
link |
01:37:15.780
they're still part of our society
link |
01:37:17.100
and they will suffer just as much as we.
link |
01:37:20.700
And there will be some kind of equilibrium
link |
01:37:23.820
that we'll have to find some symbiotic relationship.
link |
01:37:26.700
Right, and then you have the ethicists,
link |
01:37:28.220
the robotics folks that are like,
link |
01:37:30.180
no, this has got to stop, I will take the other pill
link |
01:37:34.500
in order to make a difference.
link |
01:37:36.300
So if you could hang out for a day with a robot,
link |
01:37:40.380
real or from science fiction, movies, books, safely,
link |
01:37:44.740
and get to pick his or her, their brain,
link |
01:37:48.780
who would you pick?
link |
01:37:55.980
Gotta say it's Data.
link |
01:37:57.620
Data.
link |
01:37:58.740
I was gonna say Rosie,
link |
01:38:00.460
but I'm not really interested in her brain.
link |
01:38:03.660
I'm interested in Data's brain.
link |
01:38:05.820
Data pre or post emotion chip?
link |
01:38:08.460
Pre.
link |
01:38:10.460
But don't you think it'd be a more interesting conversation
link |
01:38:15.100
post emotion chip?
link |
01:38:16.180
Yeah, it would be drama.
link |
01:38:17.740
And I'm human, I deal with drama all the time.
link |
01:38:22.860
But the reason why I wanna pick Data's brain
link |
01:38:24.860
is because I could have a conversation with him
link |
01:38:29.540
and ask, for example, how can we fix this ethics problem?
link |
01:38:34.540
And he could go through like the rational thinking
link |
01:38:38.300
and through that, he could also help me
link |
01:38:40.780
think through it as well.
link |
01:38:42.220
And so there's like these fundamental questions
link |
01:38:44.860
I think I could ask him
link |
01:38:46.420
that he would help me also learn from.
link |
01:38:49.980
And that fascinates me.
link |
01:38:52.860
I don't think there's a better place to end it.
link |
01:38:55.140
Ayanna, thank you so much for talking to us, it was an honor.
link |
01:38:57.300
Thank you, thank you.
link |
01:38:58.140
This was fun.
link |
01:39:00.300
Thanks for listening to this conversation
link |
01:39:02.420
and thank you to our presenting sponsor, Cash App.
link |
01:39:05.900
Download it, use code LexPodcast,
link |
01:39:08.540
you'll get $10 and $10 will go to FIRST,
link |
01:39:11.340
a STEM education nonprofit that inspires
link |
01:39:13.620
hundreds of thousands of young minds
link |
01:39:15.820
to become future leaders and innovators.
link |
01:39:18.740
If you enjoy this podcast, subscribe on YouTube,
link |
01:39:21.540
give it five stars on Apple Podcast,
link |
01:39:23.540
follow on Spotify, support on Patreon
link |
01:39:26.220
or simply connect with me on Twitter.
link |
01:39:29.300
And now let me leave you with some words of wisdom
link |
01:39:31.860
from Arthur C. Clarke.
link |
01:39:35.180
Whether we are based on carbon or on silicon
link |
01:39:38.580
makes no fundamental difference.
link |
01:39:40.620
We should each be treated with appropriate respect.
link |
01:39:43.660
Thank you for listening and hope to see you next time.