
Russ Tedrake: Underactuated Robotics, Control, Dynamics and Touch | Lex Fridman Podcast #114



link |
00:00:00.000
The following is a conversation with Russ Tedrake, a roboticist and professor at MIT and vice
link |
00:00:05.920
president of robotics research at Toyota Research Institute, or TRI. He works on control of robots
link |
00:00:13.840
in interesting, complicated, underactuated, stochastic, difficult-to-model situations.
link |
00:00:19.920
He's a great teacher and a great person, one of my favorites at MIT. We get into a lot of topics
link |
00:00:27.200
in this conversation from his time leading MIT's DARPA Robotics Challenge team to the awesome fact
link |
00:00:34.320
that he often runs close to a marathon a day to and from work barefoot. For a world class roboticist
link |
00:00:42.320
interested in elegant, efficient control of underactuated dynamical systems like the human body,
link |
00:00:49.200
this fact makes Russ one of the most fascinating people I know.
link |
00:00:52.880
Quick summary of the ads. Three sponsors. Magic Spoon cereal, BetterHelp, and ExpressVPN. Please
link |
00:01:00.880
consider supporting this podcast by going to magicspoon.com slash lex and using code lex at checkout,
link |
00:01:07.840
going to betterhelp.com slash lex and signing up at expressvpn.com slash lex pod. Click the links
link |
00:01:15.120
in the description, buy the stuff, get the discount, it really is the best way to support this podcast.
link |
00:01:20.960
If you enjoy this thing, subscribe on YouTube, review it with 5 stars on Apple Podcasts,
link |
00:01:26.080
support it on Patreon, or connect with me on Twitter at Lex Fridman.
link |
00:01:31.120
As usual, I'll do a few minutes of ads now and never any ads in the middle that can break
link |
00:01:35.280
the flow of the conversation. This episode is supported by Magic Spoon Low Carb Keto Friendly
link |
00:01:42.400
cereal. I've been on a mix of keto or carnivore diet for a very long time now. That means eating
link |
00:01:48.800
very little carbs. I used to love cereal. Obviously, most of it has crazy amounts of sugar,
link |
00:01:54.800
which is terrible for you. So I quit years ago. But Magic Spoon is a totally new thing. Zero sugar,
link |
00:02:01.360
11 grams of protein, and only three net grams of carbs. It tastes delicious. It has a bunch of
link |
00:02:07.840
flavors. They're all good. But if you know what's good for you, you'll go with cocoa, my favorite
link |
00:02:13.120
flavor, and the flavor of champions. Click the magic spoon dot com slash Lex link in the description,
link |
00:02:19.360
use code Lex at checkout to get the discount and to let them know I sent you. So buy all of their
link |
00:02:25.680
cereal. It's delicious and good for you. You won't regret it. The show is also sponsored by Better
link |
00:02:32.400
Help spelled H E L P help. Check it out at better help dot com slash Lex. They figure out what you
link |
00:02:40.080
need and match you with a licensed professional therapist in under 48 hours. It's not a crisis
link |
00:02:45.600
line. It's not self help. It is professional counseling done securely online. As you may know,
link |
00:02:51.760
I'm a bit from the David Goggins line of creatures and so have some demons to contend with,
link |
00:02:56.960
usually on long runs or all nighters full of self doubt. I think suffering is essential for creation.
link |
00:03:04.160
But you can suffer beautifully in a way that doesn't destroy you. For most people, I think a
link |
00:03:09.440
good therapist can help with this. So it's at least worth a try. Check out the reviews. They're all
link |
00:03:14.880
good. It's easy, private, affordable, available worldwide. You can communicate by text anytime
link |
00:03:21.520
and schedule weekly audio and video sessions. Check it out at better help dot com slash Lex.
link |
00:03:28.400
This show is also sponsored by Express VPN. Get it at express VPN dot com slash Lex pod to get a
link |
00:03:35.040
discount and to support this podcast. Have you ever watched the office? If you have, you probably
link |
00:03:41.040
know it's based on a UK series also called the office, not to stir up trouble. But I personally
link |
00:03:47.600
think the British version is actually more brilliant than the American one. But both are amazing.
link |
00:03:52.960
Anyway, there are actually nine other countries with their own version of the office. You can get
link |
00:03:58.720
access to them with no geo restriction when you use Express VPN. It lets you control where you want
link |
00:04:05.120
sites to think you're located. You can choose from nearly 100 different countries, giving you
link |
00:04:10.720
access to content that isn't available in your region. So again, get it on any device at express
link |
00:04:17.280
VPN dot com slash Lex pod to get an extra three months free and to support this podcast. And now
link |
00:04:25.440
here's my conversation with Russ Tedrake. What is the most beautiful motion of animal or robot
link |
00:04:33.280
that you've ever seen? I think the most beautiful motion of a robot has to be the passive dynamic
link |
00:04:39.680
walkers. I think there's just something fundamentally beautiful. The ones in particular
link |
00:04:44.160
that Steve Collins built with Andy Ruina at Cornell, a 3d walking machine. So it was not
link |
00:04:51.440
confined to a boom or a plane. You put it on top of a small ramp, give it a little push.
link |
00:04:58.960
It's powered only by gravity, no controllers, no batteries whatsoever. It just falls down the ramp.
link |
00:05:06.080
And at the time, it looked more natural, more graceful, more human like than any robot we'd
link |
00:05:12.160
seen to date, powered only by gravity. How does it work? Well, okay, the simplest model,
link |
00:05:18.320
it's kind of like a slinky, it's like an elaborate slinky. One of the simplest models we
link |
00:05:22.480
used to think about it is actually a rimless wheel. So imagine taking a bicycle wheel,
link |
00:05:28.560
but take the rim off. So it's now just got a bunch of spokes. If you give that a push,
link |
00:05:33.520
it still wants to roll down the ramp. But every time its foot, its spoke, comes around and hits
link |
00:05:38.320
the ground, it loses a little energy. Every time it takes a step forward, it gains a little energy.
link |
00:05:44.240
Those things can come into perfect balance. And actually, they want to, it's a stable phenomenon.
link |
00:05:49.680
If it's going too slow, it'll speed up. If it's going too fast, it'll slow down. And it comes into
link |
00:05:55.120
a stable periodic motion. Now, you can take that rimless wheel, which doesn't look very much like
link |
00:06:01.760
a human walking, take all the extra spokes away, put a hinge in the middle. Now it's two legs.
link |
00:06:07.680
That's called our compass gait walker. That can still, you give it a little push,
link |
00:06:12.400
starts falling down a ramp. It looks a little bit more like walking. At least it's a biped.
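For readers who want the underlying math, here is a minimal sketch of the standard rimless wheel analysis (the usual textbook simplification, with notation that is mine rather than anything stated in the conversation): a point mass at the hub, spokes of length l separated by an angle 2α, rolling down a slope of angle γ.

```latex
% Each stance phase is an inverted pendulum, so energy is conserved between impacts;
% each spoke impact scales the angular velocity by cos(2*alpha).
% Return map on the post-impact angular velocity (assuming enough speed to carry over the apex):
\[
  \omega_{n+1}^{2} = \cos^{2}(2\alpha)\left[\omega_{n}^{2} + \frac{4g}{l}\sin\alpha\,\sin\gamma\right]
\]
% Fixed point -- the stable periodic rolling described above:
\[
  \omega_{*}^{2} = \frac{\cos^{2}(2\alpha)\,\frac{4g}{l}\sin\alpha\,\sin\gamma}{1-\cos^{2}(2\alpha)}
\]
% The map on omega^2 is linear with slope cos^2(2*alpha) < 1, so the fixed point is stable:
% going too slow speeds the wheel up, going too fast slows it down.
```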
link |
00:06:19.520
Ted McGeer started the whole exercise, but what Steve and Andy did
link |
00:06:24.480
was they took it to this beautiful conclusion where they built something that had knees,
link |
00:06:30.640
arms, a torso, the arms swung naturally. Give it a little push and that looked like a stroll
link |
00:06:37.600
through the park. How do you design something like that? I mean, is that art or science?
link |
00:06:41.360
It's on the boundary. I think there's a science to getting close to the solution.
link |
00:06:47.440
I think there's certainly art in the way that they made a beautiful robot,
link |
00:06:51.920
but then the finesse, because they were working with a system that wasn't perfectly modeled,
link |
00:06:58.800
wasn't perfectly controlled, there's all these little tricks that you have to tune the suction
link |
00:07:04.320
cups at the knees, for instance, so that they stick, but then they release at just the right
link |
00:07:09.200
time or there's all these little tricks of the trade, which really are art, but it was a point.
link |
00:07:14.320
I mean, it made the point. At that time, the best walking robot in the world was Honda's ASIMO,
link |
00:07:21.760
an absolute marvel of modern engineering. This was in '97 when they first released it,
link |
00:07:27.280
they sort of announced P2 and then it went through. It was ASIMO by then in 2004.
link |
00:07:31.920
It looks like this very cautious walking. You're walking on hot coals or something like that.
link |
00:07:42.240
I think it gets a bad rap. ASIMO is a beautiful machine. It does walk with its knees bent.
link |
00:07:46.880
Our Atlas walking had its knees bent, but actually, ASIMO was pretty fantastic,
link |
00:07:52.160
but it wasn't energy efficient. Neither was Atlas when we worked on Atlas. None of our robots
link |
00:07:59.120
that have been that complicated have been very energy efficient, but there's a thing that happens
link |
00:08:07.760
when you do control, when you try to control a system of that complexity. You try to use your
link |
00:08:13.120
motors to basically counteract gravity. Take whatever the world's doing to you and push back,
link |
00:08:20.560
erase the dynamics of the world and impose the dynamics you want because you can make them
link |
00:08:25.760
simple and analyzable, mathematically simple. This was a very sort of beautiful example that
link |
00:08:34.400
you don't have to do that. You can just let physics do most of the work and you just have
link |
00:08:40.640
to give it a little bit of energy. This one only walked down a ramp. It would never walk on the flat.
link |
00:08:45.120
To walk on the flat, you have to give a little energy at some point,
link |
00:08:48.320
but maybe instead of trying to take the forces imparted to you by the world
link |
00:08:52.800
and replacing them, what we should be doing is letting the world push us around
link |
00:08:58.000
and we go with the flow. Very Zen, very Zen robot.
link |
00:09:00.880
Yeah, but okay, so that sounds very Zen, but I can also imagine how many
link |
00:09:07.600
like failed versions they had to go through. I would say it's probably, would you say it's in
link |
00:09:14.560
the thousands that they've had to have the system fall down before they figured out?
link |
00:09:18.800
I don't know if it's thousands, but it's a lot. It takes some patience. There's no question.
link |
00:09:24.960
So in that sense, control might help a little bit.
link |
00:09:29.280
I think everybody, even at the time, said that the answer is to do that with control,
link |
00:09:34.800
but it was just pointing out that maybe the way we're doing control right now
link |
00:09:39.040
isn't the way we should. Got it. So what about on the animal side, the ones that figured out
link |
00:09:45.040
how to move efficiently? Is there anything you find inspiring or beautiful in the movement of
link |
00:09:50.240
any particular animal? I do have a favorite example. Okay. So it sort of goes with the
link |
00:09:55.760
passive walking idea. So is there, how energy efficient are animals? Okay, there's a great
link |
00:10:02.080
series of experiments by George Lauder at Harvard and Mike Triantafyllou at MIT.
link |
00:10:07.280
They were studying fish swimming in a water tunnel. Okay. And one of these, the type of fish
link |
00:10:14.080
they were studying were these rainbow trout because there was a phenomenon well understood
link |
00:10:20.160
that rainbow trout, when they're swimming upstream at mating season, they kind of hang out behind
link |
00:10:24.320
the rocks. And it looks like, I mean, that's tiring work swimming upstream. They're hanging
link |
00:10:28.320
out behind the rocks. Maybe there's something energetically interesting there. So they tried
link |
00:10:32.240
to recreate that. They put in this water tunnel, a rock basically, a cylinder that had the same sort
link |
00:10:38.400
of vortex street, the eddies coming off the back of the rock that you would see in a stream. And
link |
00:10:44.160
they put a real fish behind this and watched how it swims. And the amazing thing is that if you
link |
00:10:50.080
watch from above how the fish swims when it's not behind a rock, it has a particular gait.
link |
00:10:55.920
You can identify the fish the same way you look at a human walking down the street. You
link |
00:10:59.920
sort of have a sense of how a human walks. The fish has a characteristic gait.
link |
00:11:03.680
You put that fish behind the rock, its gait changes. And what they saw was that it was
link |
00:11:11.280
actually resonating and kind of surfing between the vortices. Now, here was the experiment that
link |
00:11:19.040
really was the clincher because there was still, it wasn't clear how much of that was mechanics
link |
00:11:22.960
of the fish, how much of that is control the brain. So the clincher experiment and maybe
link |
00:11:28.640
one of my favorites to date, although there are many good experiments. This was now a dead fish.
link |
00:11:38.160
They took a dead fish. They put a string that tied the mouth of the fish to the rock so it
link |
00:11:44.160
couldn't go back and get caught in the grates. And then they asked, what would that dead fish do
link |
00:11:48.960
when it was hanging up behind the rock? And so what you'd expect is it sort of flops around like a
link |
00:11:52.960
dead fish in the vortex wake until something sort of amazing happens. And this video is worth putting
link |
00:12:00.480
in. What happens? The dead fish basically starts swimming upstream. It's completely dead, no brain,
link |
00:12:09.920
no motors, no control, but it somehow the mechanics of the fish resonate with the vortex
link |
00:12:15.520
street and it starts swimming upstream. It's one of the best examples ever.
link |
00:12:19.360
Who do you give credit for that to? Is that just evolution constantly just
link |
00:12:27.040
figuring out by killing a lot of generations of animals like the most efficient motion?
link |
00:12:33.200
Is that or maybe the physics of our world completely like, it's like evolution applied
link |
00:12:39.840
not only to animals, but just the entirety of it somehow drives to efficiency like nature likes
link |
00:12:46.080
efficiency. I don't know if that question even makes any sense. I understand the question.
link |
00:12:53.040
Do they co evolve? Yeah, somehow, yeah. I don't know if an environment can evolve, but
link |
00:12:59.920
I mean, there are experiments that people do, careful experiments that show that
link |
00:13:04.880
animals can adapt to unusual situations and recover efficiency. So there seems like at
link |
00:13:09.360
least in one direction, I think there is reason to believe that the animal's motor system
link |
00:13:14.160
and probably its mechanics adapt in order to be more efficient, but efficiency isn't the only
link |
00:13:20.960
goal of course. Sometimes it's too easy to think about only efficiency, but we have to do a lot
link |
00:13:26.880
of other things first, not get eaten, and then all other things being equal try to save energy.
link |
00:13:33.920
By the way, let's draw a distinction between control and mechanics. How would you define each?
link |
00:13:40.080
Yeah. I think part of the point is that we shouldn't draw a line as clearly as we tend to,
link |
00:13:47.680
but on a robot, we have motors and we have the links of the robot, let's say. If the motors
link |
00:13:55.200
are turned off, the robot has some passive dynamics. Gravity does the work. You can put
link |
00:14:01.680
springs, I would call that mechanics. If we have springs and dampers, which our muscles are springs
link |
00:14:05.840
and dampers and tendons, but then you have something that's doing active work, putting
link |
00:14:10.560
energy in which your motors on the robot, the controller's job is to send commands to the
link |
00:14:15.840
motor that add new energy into the system. So the mechanics and control interplay somewhere
link |
00:14:22.720
the divide is around, did you decide to send some commands to your motor or did you just
link |
00:14:28.000
leave the motors off and let them do their work? Would you say is most of nature on the
link |
00:14:37.280
dynamic side or the control side? If you look at biological systems,
link |
00:14:43.440
we're living in a pandemic now. Do you think a virus is a dynamic system or is there a lot of
link |
00:14:52.080
control, intelligence? I think it's both, but I think we maybe have underestimated how important
link |
00:14:57.600
the dynamics are. I mean, even our bodies, the mechanics of our bodies, certainly with exercise,
link |
00:15:05.360
they evolve, but so I actually, I lost a finger in the early 2000s and it's my fifth
link |
00:15:13.040
metacarpal. It turns out you use that a lot in ways you don't expect when you're opening jars,
link |
00:15:19.120
even when I'm just walking around, if I bump it on something, there's a bone there that was used
link |
00:15:23.520
to taking contact. My fourth metacarpal wasn't used to taking contact. It used to hurt. It still
link |
00:15:29.600
does a little bit, but actually my bone has remodeled. Over a couple of years, the geometry,
link |
00:15:39.360
the mechanics of that bone changed to address the new circumstances. So the idea that somehow
link |
00:15:45.760
it's only our brain that's adapting or evolving is not right.
link |
00:15:48.640
Right. Maybe sticking on evolution for a bit because it's tended to create some interesting
link |
00:15:55.280
things. Bipedal walking, why the heck did evolution give us, I think we're, are we the only mammals
link |
00:16:04.960
that walk on two feet? No. I mean, there's a bunch of animals that do it a bit. I think we are the
link |
00:16:12.400
most successful bipeds. I think I read somewhere that the reason the, you know, evolution made us
link |
00:16:22.240
walk on two feet is because there's an advantage to being able to carry food back to the tribe or
link |
00:16:28.560
something like that. So like you can carry, it's kind of this communal cooperative thing. So like
link |
00:16:34.480
to carry stuff back to a place of shelter and so on to share with others. Do you understand
link |
00:16:42.160
at all the value of walking on two feet from both the robotics and the human perspective?
link |
00:16:49.280
Yeah. There are some great books written about evolution of, walking evolution of the human
link |
00:16:54.960
body. I think it's easy though to make bad evolutionary arguments. Sure. Most of them
link |
00:17:02.720
are probably bad, but what else can we do? I mean, I think a lot of what dominated our evolution
link |
00:17:11.040
probably was not the things that worked well sort of in the steady state, you know, when things are,
link |
00:17:19.040
when things are good, but, but for instance, people talk about what we should eat now because
link |
00:17:25.360
our ancestors were meat eaters or, or whatever. Oh yeah, I love that. Yeah. But probably, you know,
link |
00:17:31.840
the reason that one pre-Homo sapiens species versus another survived was not because of
link |
00:17:40.560
whether they ate well when there was lots of food. But when the Ice Age came, you know,
link |
00:17:47.840
probably one of them happened to be in the wrong place. One of them happened to
link |
00:17:52.480
forage a food that was okay, even, even when the glaciers came or something like that. I mean,
link |
00:17:58.080
there's a million variables that contributed, and actually the amount of
link |
00:18:03.120
information we're working with in telling these stories, these evolutionary stories,
link |
00:18:08.080
is very little. So yeah, just like you said, it seems like if we, if we study history, it seems
link |
00:18:14.560
like history turns on like these little events that, that otherwise would seem meaningless, but
link |
00:18:22.480
in the grand scheme, like when you look in retrospect, are turning points. Absolutely. And then that's
link |
00:18:28.720
probably how like somebody got hit in the head with a rock, because somebody slept with the wrong
link |
00:18:33.920
person back in the cave days and somebody got angry and that turned, you know, warring tribes
link |
00:18:41.760
combined with the environment, all those millions of things. And the meat eating,
link |
00:18:46.240
which I get a lot of criticism because I don't know, I don't know what your dietary processes
link |
00:18:50.960
are like, but these days I've been eating only meat, and there's, there's a large community
link |
00:18:58.080
of people who, yeah, probably make evolutionary arguments and say, you're doing a great job.
link |
00:19:02.560
There's probably an even larger community of people, including my mom, who says it's deeply
link |
00:19:07.760
unhealthy, it's wrong, but I just feel good doing it. But you're right, these evolutionary
link |
00:19:12.480
arguments can be flawed. But is there anything interesting to pull out of that?
link |
00:19:17.120
There's a great book, by the way, well, a series of books by Nassim Taleb, Fooled by Randomness
link |
00:19:22.640
and The Black Swan, highly recommend them. But yeah, they make the point nicely that probably it was
link |
00:19:29.360
a few random events that, yes, maybe it was someone getting hit by a rock, as you say.
link |
00:19:39.440
That said, do you think, I don't know how to ask this question or how to talk about this,
link |
00:19:43.920
but there's something elegant and beautiful about moving on two feet, obviously biased,
link |
00:19:48.640
because I'm human. But from a robotics perspective to you work with robots on two feet,
link |
00:19:55.040
is it at all useful to build robots that are on two feet as opposed to four? Is there something useful
link |
00:20:01.840
about it? The reason I spent a long time working on bipedal walking was because it was hard.
link |
00:20:10.240
It challenged control theory in ways that I thought were important. I wouldn't have
link |
00:20:17.120
ever tried to convince you that you should start a company around bipeds or something like this.
link |
00:20:23.120
There are people that make pretty compelling arguments. I think the most compelling one
link |
00:20:27.520
is that the world is built for the human form. And if you want a robot to work in the world we
link |
00:20:33.520
have today, then having a human form is a pretty good way to go. There are places that a biped
link |
00:20:41.200
can go that would be hard for other form factors to go, even natural places. But at some point,
link |
00:20:50.400
in the long run, we'll be building our environments for our robots probably. And so maybe that argument
link |
00:20:55.120
falls aside. So you famously run barefoot. Do you still run barefoot? I still run barefoot.
link |
00:21:02.960
That's so awesome. Much to my wife's chagrin. Do you want to make an evolutionary argument for
link |
00:21:09.280
why running barefoot is advantageous? What have you learned about human and robot movement in
link |
00:21:17.600
general from running barefoot? Human or robot and or? Well, you know, it happened the other way.
link |
00:21:25.520
So I was studying walking robots and there's a great conference called the Dynamic Walking
link |
00:21:33.040
Conference where it brings together both the biomechanics community and the walking robots
link |
00:21:38.400
community. And so I've been going to this for years and hearing talks by people who study
link |
00:21:44.240
barefoot running and the mechanics of running. So I did eventually read Born to Run.
link |
00:21:50.160
Most people read Born to Run in the first day, right? The other thing I had going for me is
link |
00:21:55.360
actually that I wasn't a runner before and I learned to run after I had learned about
link |
00:22:02.080
barefoot running or I mean, started running longer distances. So I didn't have to unlearn.
link |
00:22:07.200
And I'm definitely, I'm a big fan of it for me, but I tend to not try to convince other
link |
00:22:14.080
people. There's people who run beautifully with shoes on and that's good. But here's why it makes
link |
00:22:20.720
sense for me. It's all about the long term game, right? So I think it's just too easy to run 10
link |
00:22:28.640
miles, feel pretty good. And then you get home at night and you realize my knees hurt. I did
link |
00:22:34.000
something wrong, right? If you take your shoes off, then if you hit hard with your foot at all,
link |
00:22:44.080
then it hurts. You don't like run 10 miles and then realize you've done something, some damage.
link |
00:22:50.640
You have immediate feedback telling you that you've done something that's maybe suboptimal
link |
00:22:55.280
and you change your gait. I mean, it's even subconscious. If I right now, having run many
link |
00:22:59.440
miles barefoot, if I put a shoe on my gait changes in a way that I think is not as good.
link |
00:23:05.760
So it makes me land softer. And I think my goals for running are to do it for as long as I can
link |
00:23:14.800
into old age, not to win any races. And so for me, this is a way to protect myself.
link |
00:23:22.480
Yeah, I think, first of all, I've tried running barefoot many years ago, probably the other way,
link |
00:23:30.400
just reading Born to Run. But just to understand, because I felt like I couldn't
link |
00:23:38.560
put in the miles that I wanted to. And it feels like running for me, and I think for a lot of
link |
00:23:45.040
people, was one of those activities that we do often and we never really try to learn to do
link |
00:23:51.280
correctly. Like it's funny, there's so many activities we do every day, like brushing our
link |
00:23:58.000
teeth. I think a lot of us, at least me, probably have never deeply studied how to properly brush
link |
00:24:05.280
my teeth or, as now with a pandemic, how to properly wash our hands, and we do it every day.
link |
00:24:11.680
But we haven't really studied, like, am I doing this correctly? But running felt like one of those
link |
00:24:16.640
things that it was absurd not to study how to do it correctly, because it's the source of so much
link |
00:24:21.440
pain and suffering. Like I hate running, but I do it. I do it because I hate it, but I feel
link |
00:24:27.760
good afterwards. But I think it feels like you need to learn how to do it properly. So that's
link |
00:24:32.000
where barefoot running came in. And then I quickly realized that my gait was completely wrong. I was
link |
00:24:38.400
taking huge steps and landing hard on the heel, all those elements. And so yeah, from that I
link |
00:24:46.960
actually learned to take really small steps. Look, I already forgot the number, but I feel like it was
link |
00:24:53.040
180 a minute or something like that. And I remember I actually just took songs that are 180 beats per
link |
00:25:02.320
minute and then tried to run at that beat. And just to teach myself, it took a long time. And I
link |
00:25:09.360
feel like after a while you learn to run properly, you adjust it properly without going all the way
link |
00:25:15.200
to barefoot. But I feel like barefoot is the legit way to do it. I mean, I think a lot of people
link |
00:25:21.520
would be really curious about it. If they're interested in trying, how would you recommend
link |
00:25:27.680
they start or try or explore? Slowly. That's the biggest thing. People are excellent
link |
00:25:35.280
runners and they're used to running long distances or running fast and they take their shoes off and
link |
00:25:39.200
they hurt themselves instantly trying to do something that they were used to doing. I think
link |
00:25:44.560
I lucked out in the sense that I couldn't run very far when I first started trying. And I run with
link |
00:25:50.800
minimal shoes too. I mean, I will bring along a pair of actually like aqua socks or something like
link |
00:25:56.000
this. I can just slip on or running sandals. I've tried all of them. What's the difference between
link |
00:26:01.680
a minimal shoe and nothing at all? What's like feeling wise? What does it feel like?
link |
00:26:06.880
There is a difference. I mean, I noticed my gait changing, right? So, I mean, your foot has as many muscles
link |
00:26:14.960
and sensors as your hand does, right? Sensors. Ooh, okay. And we do amazing things with our hands.
link |
00:26:22.160
And we stick our foot in a big, solid shoe, right? So, there's, I think, when you're barefoot,
link |
00:26:29.520
you're just giving yourself more proprioception. And that's why you're more aware of some of
link |
00:26:34.480
the gait flaws and stuff like this. Now, you have less protection too. So.
link |
00:26:40.640
Rocks and stuff. I mean, yeah. So, I think people who are afraid of barefoot running
link |
00:26:45.920
are worried about getting cuts or stepping on rocks. First of all, even if that
link |
00:26:50.720
was a concern, I think those are all very short term. If I get a scratch or something,
link |
00:26:55.200
it'll heal in a week. If I blow out my knees, I'm done running forever. So, I will trade
link |
00:26:58.960
the short term for the long term anytime. But even then, this, again, to my wife's chagrin,
link |
00:27:06.400
your feet get tough, right? And... Calluses. Okay. Yeah. I can run over almost anything now.
link |
00:27:11.200
I mean, what, maybe, can you talk about, is there, like, is there tips or tricks that you have
link |
00:27:23.440
suggestions about? Like, if I wanted to try it. You know, there is a good book, actually. There's
link |
00:27:29.600
probably more good books since I read them. But Ken Bob, Barefoot Ken Bob Saxton.
link |
00:27:35.360
He's an interesting guy. But I think his book captures the right way to describe running,
link |
00:27:43.520
barefoot running to somebody better than any other I've seen.
link |
00:27:48.560
So, you run pretty good distances and you bike. And is there, you know, if we talk about bucket
link |
00:27:56.800
list items, is there something crazy on your bucket list, athletically, that you hope to do one day?
link |
00:28:02.480
I mean, my commute is already a little crazy. What are we talking about here?
link |
00:28:08.960
What, what, what distance are we talking about? Well, I live about 12 miles from MIT.
link |
00:28:14.560
But you can find lots of different ways to get there. So, I mean, I've run there for a long,
link |
00:28:18.640
many years, and biked there. Both ways? Yeah. But normally, I would try to run in and then bike
link |
00:28:24.320
home, bike in, run home. But you have run there and back before? Sure. Barefoot? Yeah. Or with
link |
00:28:30.720
minimal shoes or whatever. That's 12 times two. Yeah. Okay. It became kind of a game of how can I get
link |
00:28:37.280
to work. I've rollerbladed. I've done all kinds of weird stuff. But my favorite one these days,
link |
00:28:42.560
I've been taking the Charles River to work. So, I can put in a little rowboat, not so far from my
link |
00:28:50.000
house. But the Charles River takes a long way to get to MIT. So, I can spend a long time getting
link |
00:28:55.600
there. And it's, you know, it's not about, I don't know, it's just about, I've had people
link |
00:29:02.000
ask me, how can you justify taking that time? But for me, it's just a magical time to think,
link |
00:29:10.000
to compress, decompress. You know, especially, I'll wake up, do a lot of work in the morning,
link |
00:29:16.160
and then I kind of have to just let that settle before I'm ready for all my meetings. And then
link |
00:29:21.200
on the way home, it's a great time to sort of let that settle. You lead a, like a large group of
link |
00:29:28.400
people. I mean, are there days where you're like, oh, shit, I got to get to work in an hour?
link |
00:29:39.680
I mean, is there, is there a tension there? And like, if we look at the grand scheme of things,
link |
00:29:47.760
just like you said, long term, that meeting probably doesn't matter. Like, you can always say,
link |
00:29:53.200
I'll just, I'll run and let the meeting happen, how it happens. Like, what, how do you,
link |
00:30:00.080
that Zen, how do you, what do you do with that tension between the real world saying urgently,
link |
00:30:05.440
you need to be there, this is important, everything is melting down, how we're going to fix this robot,
link |
00:30:11.600
there's this critical meeting, and then there's this, the Zen beauty of just running, the simplicity
link |
00:30:18.560
of it, you alone with nature. What do you do with that? I would say I'm not a fast runner,
link |
00:30:24.000
particularly. Probably my fastest splits ever was when I had to get to daycare on time, because
link |
00:30:29.280
they were going to charge me, you know, some, some dollar per minute that I was late. I've run some
link |
00:30:34.240
fast splits to daycare. But those times are past now. I think work, you can find a work life
link |
00:30:43.840
balance in that way. I think you just have to. I think I am better at work because I take time to
link |
00:30:50.320
think on the way in. So I plan my day around it. And I rarely feel that those are really at odds.
link |
00:30:59.920
So what's the bucket list item? If we're talking 12 times two, we're approaching a marathon.
link |
00:31:10.480
Have you run an ultra marathon before? Do you do races? Is there something you want
link |
00:31:17.920
to win? I'm not going to, like, take a dinghy across the Atlantic or something, if that's what you
link |
00:31:24.400
want. But, but if someone does and wants to write a book, I would totally read it because I'm a
link |
00:31:29.120
sucker for that kind of thing. No, I do have some fun things that I will try. I like to,
link |
00:31:34.560
when I travel, I almost always bike to Logan airport and fold up a little folding bike on
link |
00:31:38.640
and then take it with me and bike to wherever I'm going. And it's taken me or I'll take a
link |
00:31:43.280
stand up paddle board these days on the airplane. And then I'll try to paddle around where I'm going
link |
00:31:46.960
or whatever. And I've done some crazy things. But, but not for the, you know, I now talk,
link |
00:31:54.240
I don't know if you know who David Goggins is by any chance, but I talk to him now every day. So
link |
00:32:00.240
he's the person who made me do this stupid challenge. So he, he's insane. And he does things for the
link |
00:32:09.520
purpose in the best kind of way. He does things like for the explicit purpose of suffering.
link |
00:32:16.800
Like he picks the thing that like whatever he thinks he can do, he does more. So is that,
link |
00:32:23.680
do you have that thing in you or no? I think it's become the opposite.
link |
00:32:29.600
It's, uh,
link |
00:32:30.320
So you're like that dynamical system, the walker, the efficient, uh,
link |
00:32:34.240
Yeah, it's, uh, leave no pain, right? You should end feeling better than you started.
link |
00:32:41.360
But, um, it's mostly, I think, and COVID has tested this because I've lost my commute. I think
link |
00:32:48.080
I'm perfectly happy walking around, uh, around town with my wife and, uh, kids, if I could get
link |
00:32:54.400
them to go. Uh, and it's more about just getting outside and getting away from the keyboard for
link |
00:32:59.440
some time just to let things compress. Let's go into robotics a little bit. What to you is the most
link |
00:33:04.960
beautiful idea in robotics? Whether we're talking about control or whether we're talking about
link |
00:33:11.760
optimization and the math side of things or the engineering side of things or the philosophical
link |
00:33:17.280
side of things. I think I've been lucky to experience something that not so many roboticists
link |
00:33:26.160
have experienced, which is to hang out with some really amazing control theorists and,
link |
00:33:38.640
the clarity of thought that some of the more mathematical control theory can bring
link |
00:33:43.520
to even very complex, messy looking problems is really, it really had a big impact on me. And,
link |
00:33:53.360
and, uh, I had a day even, uh, just a couple of weeks ago where I had spent the day on a zoom
link |
00:33:59.920
robotics conference, having great conversations with lots of people.
link |
00:34:03.840
Felt really good, um, about the ideas that were flowing and, and the like. And then I had a,
link |
00:34:10.480
you know, late afternoon meeting with a, one of my favorite control theorists and,
link |
00:34:17.920
and we went from these, from these abstract discussions about maybes and what ifs and,
link |
00:34:23.200
and what a great idea, to these super precise statements about systems that are much,
link |
00:34:31.680
much more simple or, or abstract than the ones I care about deeply. And the contrast of that is,
link |
00:34:39.840
um, I don't know, it really gets me. I think people underestimate, um, maybe the power of
link |
00:34:51.120
clear thinking. Uh, and so for instance, deep learning is amazing. Um, I use it heavily in our
link |
00:35:02.000
work. I think it's changed the world, unquestionably. It makes it easy to get things to work without
link |
00:35:09.200
thinking as critically about it. So I think one of the challenges as an educator is to think about,
link |
00:35:14.400
um, how do we make sure people get a taste of the more rigorous thinking that I think goes
link |
00:35:21.600
along, uh, with, with some different approaches. Yeah. So that's really interesting. So
link |
00:35:27.200
understanding like the fundamentals, the first principles of the, of the, the, the problem
link |
00:35:33.680
which in this case is mechanics, like how a thing moves, how a thing behaves, like all the forces
link |
00:35:41.280
involved, like really getting a deep understanding of that. I mean, from physics, the first principle
link |
00:35:46.800
thing comes from physics. And here it's literally physics. Yeah. And this applies in deep learning.
link |
00:35:53.760
This applies to, um, not just, I mean, it applies so cleanly in robotics, but it also applies to
link |
00:36:00.560
just in any data set. I find this true. I mean, driving as well. There's a lot of folks in it
link |
00:36:08.800
that work on autonomous vehicles that don't study driving like deeply. I might be coming
link |
00:36:20.800
a little bit from the psychology side, but, um, I remember I spent a ridiculous number of hours
link |
00:36:28.240
at lunch, uh, at this like lawn chair and I would sit somewhere, um, somewhere in MIT's
link |
00:36:35.200
campus. There's a few interesting intersections and we just watch people cross. So we were studying,
link |
00:36:40.160
um, pedestrian behavior. And we would record a lot of video, and then there's
link |
00:36:46.960
the computer vision that extracts their movement, how they move their head and so on. But like every
link |
00:36:51.600
time I felt like I didn't understand enough, I just, I felt like I wasn't understanding what,
link |
00:36:59.040
how are people signaling to each other? What are they thinking? How cognizant are they
link |
00:37:05.280
of their fear of death? Like what are we, like what's the game, what's the underlying game theory
link |
00:37:11.440
here? What are, what are the, the incentives? And then I finally found a live stream of an
link |
00:37:16.640
intersection that's like high def that I just, I would watch so I wouldn't have to sit out there.
link |
00:37:21.520
But that's interesting. So like, I feel, that's tough. That's a tough example because
link |
00:37:25.360
I mean, there are learning humans involved, not just because they're human, but I think, um,
link |
00:37:31.600
the learning mantra is the, basically the statistics of the data will tell me things I
link |
00:37:36.480
need to know, right? And, uh, you know, for the example you gave of all the nuances of, um, you
link |
00:37:43.680
know, eye contact or hand gestures or whatever that are happening for these subtle interactions
link |
00:37:48.720
between pedestrians and traffic, right? Maybe the data will tell the, tell, tell that story.
link |
00:37:53.600
I may be even, uh, uh, one level more meta than, than what you're saying. Um, for a particular
link |
00:38:02.000
problem, I think it might be the case that data should tell us the story. But I think
link |
00:38:07.680
there's a rigorous thinking that is just an essential skill for a mathematician or an engineer
link |
00:38:14.560
that, um, I just don't want to lose it. There are, there are certainly super rigorous, um,
link |
00:38:21.120
rigorous control, or sorry, um, machine learning people. I just think deep learning makes it so
link |
00:38:26.720
easy to do some things that, um, our next generation are, um, not immediately rewarded
link |
00:38:35.680
for going through some of the more rigorous approaches. And then I wonder where that takes us.
link |
00:38:40.240
I just, well, I'm, I'm actually optimistic about it. I just want to, um, do my part to try to steer
link |
00:38:45.440
that rigorous thinking. So there's like two questions I want to ask. Do you have sort of a,
link |
00:38:53.360
a good example of rigorous thinking where it's easy to get lazy and not do the rigorous thinking?
link |
00:39:00.720
And the other question I have is like, do you have advice
link |
00:39:05.120
of, um, how to practice rigorous thinking in, um, you know, in all the computer science disciplines
link |
00:39:13.520
that we've mentioned? Yeah. I mean, there are times where problems that can be solved with well
link |
00:39:22.800
known, mature methods, um, could also be solved with, uh, with a deep learning approach. And,
link |
00:39:32.160
um, there's an argument that you must use learning even for the parts we already think we know,
link |
00:39:38.160
because if the human has touched it, then you've, you've, you've biased the system and you've
link |
00:39:42.560
suddenly put a bottleneck in there that is your own mental model, but something like inverting
link |
00:39:47.520
a matrix. You know, I, I think we know how to do that pretty well, even if it's a pretty big
link |
00:39:51.440
matrix and we understand that pretty well and you could train a deep network to do it, but
link |
00:39:55.520
you shouldn't probably. So, so in that sense, rigorous thinking is, uh, understanding the,
link |
00:40:03.440
the scope and limitations of the methods that we have, like how to use the tools
link |
00:40:08.640
of mathematics properly. Yeah. I think, you know, taking a class on analysis is all I'm sort of
link |
00:40:16.320
arguing is to take, take a chance to stop and force yourself to think rigorously about even,
link |
00:40:22.240
you know, the rational numbers or something, you know, it doesn't have to be the end all problem,
link |
00:40:27.600
but that exercise of clear thinking, I think, uh, goes a long way and I just want to make
link |
00:40:34.080
sure we, we keep preaching. We don't lose it. Yeah. But do you think, uh, when you're doing, like
link |
00:40:38.720
rigorous thinking or like maybe, uh, trying to write down equations or sort of explicitly,
link |
00:40:45.760
like formally describe a system, do you think we naturally simplify things too much? Is that a
link |
00:40:51.760
danger you run into? Like, uh, in order to be able to understand something about the system
link |
00:40:56.720
mathematically, we, uh, make it too much of a toy example. But I think that's the good stuff.
link |
00:41:03.200
Right? Um, that's how you understand the fundamentals. I think so. I think maybe even
link |
00:41:08.800
that's a key to intelligence or something, but I mean, okay, what if Newton and Galileo had deep
link |
00:41:14.000
learning and, and, and they had done a bunch of experiments and they told the world, here's your
link |
00:41:20.640
weights of your neural network. We've solved the problem. You know, where would we be
link |
00:41:24.640
today? I don't, I don't think we'd be as far as we, as we are. There's something to be said about
link |
00:41:29.120
having a, the simplest explanation for a phenomenon. So I don't doubt that we can train neural networks
link |
00:41:37.040
to predict even physical, um, you know, uh, F equals MA type equations. But, um, I maybe,
link |
00:41:50.000
I want another Newton to come along because I think there's more to do in terms of
link |
00:41:53.360
coming up with the simple models for more complicated tasks.
link |
00:41:59.040
Yeah. Uh, let's not offend the AI systems from 50 years from now that are listening to this
link |
00:42:05.600
that are probably better at, might be better coming up with F equals MA equations themselves.
link |
00:42:12.160
Oh, sorry. I actually think, um, learning is probably a route to, to achieving this.
link |
00:42:17.840
Um, but the representation matters, right? And I think, uh, having a function that takes my
link |
00:42:25.760
inputs to outputs that is arbitrarily complex may not be the end goal. I think, um, there's still,
link |
00:42:32.640
you know, the most simple or parsimonious explanation for the data, um, simple doesn't
link |
00:42:38.080
mean low dimensional. That's one thing I think that we've, a lesson that we've learned. So,
link |
00:42:42.080
you know, a standard way to do, um, model reduction or system identification in controls is to,
link |
00:42:48.480
the typical formulation is that you try to find the minimal state dimension realization
link |
00:42:53.280
of a system that hits some error bounds or something like that. And that's maybe not,
link |
00:42:58.800
I think we're, we're learning that, that was, the dimension, state dimension is not the right
link |
00:43:03.840
metric. Of complexity. Of complexity. But for me, I think a lot about contact,
link |
00:43:09.360
the mechanics of contact, the robot hand is picking up an object or something.
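For context, the equations of motion he describes next have a standard structure; here is a minimal sketch in manipulator-equation form (my notation, not something written out in the conversation):

```latex
% Rigid-body dynamics of the hand (and object), with each contact force lambda_i
% entering through its contact Jacobian J_i:
\[
  M(q)\,\dot{v} + C(q,v)\,v = \tau_g(q) + B\,u + \sum_{i \in \mathcal{A}} J_i^{T}(q)\,\lambda_i
\]
% The smooth terms are not the hard part. The difficulty is that each candidate contact i
% is either on or off (in the active set A or not), so with k candidate contact pairs there
% are up to 2^k hybrid modes -- 20 polygon pairs already gives over a million combinations.
```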
link |
00:43:14.400
And when I write down the equations of motion for that, they're, they look incredibly complex,
link |
00:43:19.040
not because, um, actually not so much because of the dynamics of the hand when it's moving,
link |
00:43:26.560
but it's just the interactions and when they turn on and off, right? So having a high dimensional,
link |
00:43:32.800
you know, but simple description of what's happening out here is fine. But if when I
link |
00:43:36.960
actually start touching, if I write down a different dynamical system for every polygon
link |
00:43:43.680
on my robot hand and every polygon on the object, whether it's in contact or not,
link |
00:43:48.800
with all the combinatorics that explodes there, then that's too complex. So I need to somehow
link |
00:43:55.040
summarize that with a more intuitive physics way of thinking. And, uh, yeah, I'm very optimistic
link |
00:44:03.360
that machine learning will get us there. First of all, I mean, I'll probably do it in the
link |
00:44:08.400
introduction, but you're, uh, one of the great robotics people at MIT. You're a professor at
link |
00:44:13.600
MIT. You teach a lot of amazing courses. You run a large group, uh, and you have an
link |
00:44:20.720
important history for MIT, I think, as, uh, being a part of the DARPA Robotics Challenge.
link |
00:44:26.160
Can you maybe first say what is the DARPA Robotics Challenge and then
link |
00:44:30.160
tell your story around it, your journey with it? Yeah, sure.
link |
00:44:39.120
So the DARPA Robotics Challenge, it came on the tails of the DARPA Grand Challenge and DARPA
link |
00:44:44.960
Urban Challenge, which were the challenges that brought us, um, put a spotlight on self driving
link |
00:44:51.200
cars. Um, Gil Pratt was at DARPA and pitched a new challenge that involved disaster response.
link |
00:45:04.800
It didn't explicitly require humanoids, although humanoids came into the picture.
link |
00:45:10.080
This happened shortly after the Fukushima disaster in Japan. And our challenge was
link |
00:45:16.000
motivated roughly by that because that was a case where if we had had robots that were ready to be
link |
00:45:21.760
sent in, there's a chance that we could have, um, averted disaster. And certainly after the, um,
link |
00:45:29.040
in the disaster response, there were times we would love, we would have loved to have sent
link |
00:45:32.720
robots in. So in practice, what we ended up with was, uh, a grand challenge, a DARPA Robotics
link |
00:45:40.400
Challenge, um, where Boston Dynamics was, uh, was to make humanoid robots. People like me
link |
00:45:49.600
and the amazing team at MIT, um, were competing first in a simulation challenge to try to be one
link |
00:45:57.440
of the ones that wins the right to work on one of the, uh, the Boston Dynamics humanoids in
link |
00:46:03.280
order to compete in the final challenge, which was a physical challenge. And at that point,
link |
00:46:09.200
it was already, so it was decided that it's humanoid robots early on.
link |
00:46:13.200
So there were, there were two tracks. You could enter as a hardware team where you brought
link |
00:46:17.120
your own robot, or you could enter through the virtual robotics challenge as a software team
link |
00:46:22.720
that would try to win the right to use one of the Boston Dynamics robots.
link |
00:46:25.760
Which are called Atlas.
link |
00:46:27.280
Atlas.
link |
00:46:27.760
Humanoid robots.
link |
00:46:28.480
Yeah. It was a 400 pound marvel, but a, you know, pretty big, scary looking robot.
link |
00:46:35.360
Expensive too.
link |
00:46:36.160
Expensive.
link |
00:46:36.560
At least at the time.
link |
00:46:37.360
Yeah.
link |
00:46:38.160
Okay. So, uh, I mean, how did you feel at the prospect of this kind of challenge?
link |
00:46:44.640
I mean, it seems, you know, autonomous vehicles, yeah, I guess that sounds hard,
link |
00:46:50.960
but, uh, not really from a robotics perspective. It's like, didn't they do it in the 80s?
link |
00:46:55.920
That's the kind of feeling I would have, uh, like when you first look at the problem. It's on
link |
00:47:01.120
wheels. But humanoid robots, that sounds really hard. Uh, so what, like, what, what are
link |
00:47:11.200
you, psychologically speaking, what were you feeling? Excited, scared? Why the heck did you
link |
00:47:16.800
get yourself involved in this kind of messy challenge?
link |
00:47:19.520
We didn't really know for sure what we were signing up for, um, in the sense that you could
link |
00:47:25.280
have had something that as it was described in the call for participation, um, that could have
link |
00:47:31.600
put a huge emphasis on the dynamics of walking and not falling down and walking over rough terrain,
link |
00:47:37.200
or the same description, because the robot had to go into this disaster area and turn valves and,
link |
00:47:42.720
and pick up a drill, cut a hole through a wall. It had to do some interesting things.
link |
00:47:48.240
The challenge could have really highlighted perception and autonomous planning,
link |
00:47:53.440
or it ended up that, you know, locomoting over a complex, uh, terrain played a pretty big role
link |
00:48:02.320
in the competition. So, um,
link |
00:48:05.360
And the degree of autonomy wasn't clear.
link |
00:48:08.160
The degree of autonomy was always a central part of the discussion. So, um, what wasn't clear was
link |
00:48:13.760
how we would be able, how far we'd be able to get with it. So the idea was always, uh, that you
link |
00:48:19.920
want semi autonomy, that you want the robot to have enough compute that you can have a degraded
link |
00:48:25.440
network link to a human. And so the same way you, we had degraded networks at, uh, at many
link |
00:48:31.360
natural disasters, you'd send your robot in, you'd be able to get a few bits back and forth,
link |
00:48:37.360
but you don't get to have enough, potentially to fully, uh, operate the robot in every joint of the
link |
00:48:42.720
robot. So, and then the question was, and the gamesmanship of the organizers was to figure out
link |
00:48:49.440
what we're capable of, push us as far as they could so that, um, it would differentiate the teams that
link |
00:48:55.680
put more autonomy on the robot and had a few clicks and just said, go there or do this, go there,
link |
00:49:00.480
do this versus someone who's picking every footstep or something like that.
link |
00:49:05.200
So what were some, uh, memories, painful, triumphant from the experience? Like, what was that
link |
00:49:14.480
journey? Maybe if you can dig in a little deeper, maybe even on the technical side, on the team side,
link |
00:49:20.960
that, that whole process of, um, from the early idea stages to actually competing.
link |
00:49:28.000
I mean, this was a defining experience for me. It was, it came at the right time for me in my
link |
00:49:33.360
career. I had gotten tenure before. I was due a sabbatical, and most people do something, you know,
link |
00:49:38.960
relaxing and restorative for sabbatical. So you got tenure before the, the, before this? Yeah.
link |
00:49:44.720
Yeah. Yeah. It was a good time for me. I had, I had, we had a bunch of algorithms that we
link |
00:49:49.920
were very happy with. We wanted to see how far we could push them. And this was a chance to
link |
00:49:53.280
really test our mettle, to do more proper software engineering. Um, the team, we all just worked
link |
00:50:00.240
our butts off. We, you know, we were in that lab almost all the time. Um, okay. So there, I mean,
link |
00:50:08.240
there were some, of course, high highs and low lows throughout that, uh, anytime you're, you know,
link |
00:50:12.960
not sleeping and devoting your life to a 400 pound humanoid. Um, I remember actually one funny
link |
00:50:20.240
moment where we were all super tired and so Atlas had to walk across cinder blocks. That was one of
link |
00:50:24.960
the obstacles. And I remember Atlas was powered down and hanging limp, you know, on the, on its
link |
00:50:30.160
harness and the humans were there like laying, you know, picking up and laying the brick down
link |
00:50:35.040
so that the robot could walk over it. And I thought, what is wrong with this? You know, we've got a
link |
00:50:40.160
robot just watching us do all the manual labor so that it can take its little, um, stroll across
link |
00:50:46.000
the terrain. But, um, I mean, even the, even the virtual robotics challenge was, was super nerve
link |
00:50:53.520
racking and dramatic. I remember, um, so, so we were using Gazebo as a simulator on the cloud.
link |
00:51:02.160
And there were all these interesting challenges. I think, um, the investment that, that OSRF, um,
link |
00:51:07.920
whatever they were called at that time, Brian Gerkey's team at Open Source Robotics, um,
link |
00:51:14.000
they were pushing on the capabilities of Gazebo in order to scale it to the complexity
link |
00:51:18.400
of these challenges. So, um, you know, up to the virtual competition. So the virtual competition
link |
00:51:25.600
was you will sign on at a certain time and we'll have a network connection to another machine on
link |
00:51:30.800
the cloud that is running the simulator of your robot. And your controller will run on this computer,
link |
00:51:36.640
and, and the physics will run on the other, and you have to,
link |
00:51:41.360
to connect. Now, um, the physics, they wanted it to run at real time rates because there was
link |
00:51:48.640
an element of human interaction. Um, and humans could, if you do want to tell you, it works
link |
00:51:53.600
way better if it's at frame rate. Oh, cool. But it was very hard to simulate these
link |
00:51:58.800
complex, these complex scenes at real time rate. So right up to like days before the competition,
link |
00:52:06.320
the, the simulator wasn't quite at real time rate. And that was great for me because my controller
link |
00:52:13.120
was solving a big, pretty big optimization problem and it wasn't quite at real time rate. So I was
link |
00:52:18.160
fine. I was keeping up with the simulator. We were both running at about 0.7. And I remember
link |
00:52:23.280
getting this email, and by the way, the perception folks on our team hated that they knew that if
link |
00:52:29.440
my controller was too slow, the robot was going to fall down. And, and you know, no matter how
link |
00:52:33.520
good their perception system was, if I can't make my controller fast, anyways, we get this email like
link |
00:52:38.000
three days before the virtual competition, you know, it's for all the marbles, we're going to
link |
00:52:41.600
either get a humanoid robot or we're not. And we get an email saying, good news, we made the
link |
00:52:47.200
simulator faster. It's now at 1.0. And I was just like, oh man, what are we
link |
00:52:54.160
going to do here? So that came in late at night for me. A few days ahead. A few days ahead. I went
link |
00:53:01.680
over. It happened that Frank Permenter, who's very, very sharp, was a student at the
link |
00:53:07.840
time working on optimization. He was still in lab. Frank, we need to make the quadratic programming
link |
00:53:16.080
solver faster, and not just a little faster. And we wrote a new solver for
link |
00:53:22.640
that QP together that night. And that's terrifying. So there's a really hard optimization problem
link |
00:53:31.680
that you're constantly solving. You didn't make the optimization problem simpler. You wrote a new
link |
00:53:37.520
solver. So, I mean, your observation is almost spot on. What we did was what everybody, I mean,
link |
00:53:44.560
people know how to do this, but we had not yet done this idea of warm starting. So we are solving
link |
00:53:49.840
a big optimization problem at every time step. But if you're running fast enough, the optimization
link |
00:53:54.800
problem you're solving on the last time step is pretty similar to the optimization you're going
link |
00:53:58.640
to solve at the next. We had of course told our commercial solver to use warm starting. But
link |
00:54:03.680
even the interface to that commercial solver was causing us these delays. So what we did was we
link |
00:54:11.040
basically wrote, we called it fast QP at the time, we wrote a very lightweight, very fast layer,
link |
00:54:18.320
which would basically check if nearby solutions to the quadratic program (which were very
link |
00:54:24.720
easily checked) could stabilize the robot. And if they couldn't, we would fall back to the solver.
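A minimal sketch of the warm-starting idea described here, assuming a QP of the form minimize 0.5 x'Qx + c'x subject to Ax <= b. The function names and the fallback_solver hook are hypothetical stand-ins, not the team's actual fastQP code:

```python
import numpy as np

def warm_start_qp(Q, c, A, b, active_guess, fallback_solver, tol=1e-9):
    """Try the previous active set first; fall back to the full solver if it fails."""
    Aa, ba = A[active_guess], b[active_guess]
    n, m = Q.shape[0], Aa.shape[0]
    # KKT system for the equality-constrained QP: [Q Aa^T; Aa 0] [x; lam] = [-c; ba]
    K = np.block([[Q, Aa.T], [Aa, np.zeros((m, m))]])
    rhs = np.concatenate([-c, ba])
    try:
        sol = np.linalg.solve(K, rhs)
    except np.linalg.LinAlgError:
        return fallback_solver(Q, c, A, b)
    x, lam = sol[:n], sol[n:]
    primal_ok = np.all(A @ x <= b + tol)   # no inequality constraint is violated
    dual_ok = np.all(lam >= -tol)          # multipliers consistent with optimality
    if primal_ok and dual_ok:
        return x                           # cheap path: a single linear solve
    return fallback_solver(Q, c, A, b)     # slow path: full QP solve
```

In a control loop running at a high rate, the active set rarely changes between consecutive ticks, so the cheap path is taken most of the time and the full solver only runs occasionally.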
link |
00:54:30.640
You couldn't really test this well, right?
link |
00:54:32.160
Right. So we always knew that if we fell back, it got to the point where if for some reason
link |
00:54:40.320
things slowed down and we fell back to the original solver, the robot would actually
link |
00:54:43.600
literally fall down. So it was a harrowing sort of ledge we're sort of on. But I mean,
link |
00:54:52.640
actually, like, the 400 pound humanoid could come crashing to the ground if your solver's not
link |
00:54:57.280
fast enough. But, you know, we have lots of good experiences. So can I ask a weird question I get
link |
00:55:06.560
about the idea of hard work? So actually people like students of yours that I've interacted with
link |
00:55:15.840
and just robotics people in general, they at moments work harder than
link |
00:55:25.520
most people I know in terms of if you look at different disciplines of how hard people work.
link |
00:55:32.240
But they're also like the happiest. Like just like, I don't know. It's the same thing with
link |
00:55:38.560
like running people that push themselves to like the limit. They also seem to be like the most like
link |
00:55:43.920
full of life somehow. And I get often criticized like, you're not getting enough sleep. What are
link |
00:55:50.640
you doing to your body, blah, blah, blah, like this kind of stuff. And I usually just kind of
link |
00:55:56.480
respond like I'm doing what I love. I'm passionate about it. I love it. I feel like it's it's
link |
00:56:03.840
invigorating. I actually don't think the lack of sleep is what hurts you. I think what
link |
00:56:09.200
hurts you is stress and lack of doing things that you're passionate about. But in this world,
link |
00:56:14.160
yeah, I mean, can you comment about why the heck robotics people are
link |
00:56:23.840
willing to push themselves to that degree? Is there value in that? And why are they so happy?
link |
00:56:30.240
I think you got it right. I mean, I think the causality is not that we work hard.
link |
00:56:36.240
And I think other disciplines work very hard too. But I don't think it's that we work hard and
link |
00:56:40.320
therefore we are happy. I think we found something that we're truly passionate about.
link |
00:56:47.920
It makes us very happy. And then we get a little involved with it and spend a lot of time on it.
link |
00:56:54.400
What a luxury to have something that you want to spend all your time on, right?
link |
00:56:59.040
We could talk about this for many hours, but maybe if we could pick,
link |
00:57:03.760
is there something on the technical side on the approach that you took that's interesting
link |
00:57:07.520
that turned out to be a terrible failure or a success that you carry into your work today
link |
00:57:13.680
about all the different ideas that were involved in making, whether in the simulation
link |
00:57:20.720
or in the real world, making the semi autonomous system work?
link |
00:57:26.720
I mean, it really did teach me something fundamental about what it's going to take to
link |
00:57:32.560
get robustness out of a system of this complexity. I would say the DARPA challenge really
link |
00:57:39.360
was foundational in my thinking. I think the autonomous driving community thinks about this.
link |
00:57:43.600
I think lots of people thinking about safety critical systems that might have machine learning
link |
00:57:47.760
in the loop are thinking about these questions. For me, the DARPA challenge was the moment where
link |
00:57:53.360
I realized we've spent every waking minute running this robot. And again, for the physical
link |
00:58:00.800
competition, days before the competition, we saw the robot fall down in a way it had never
link |
00:58:04.800
fallen down before. I thought, you know, how could we have found that? We only have one robot.
link |
00:58:11.440
It's running almost all the time. We just didn't have enough hours in the day to test that robot.
link |
00:58:17.040
Something has to change, right? And then I think that, I mean, I would say that the team that won
link |
00:58:23.600
was from KAIST, the team that had two robots, and was able to do not only incredible engineering,
link |
00:58:30.400
just absolutely top rate engineering, but also they were able to test at a rate and
link |
00:58:37.440
discipline that we didn't keep up with. What does testing look like? What are we talking about here?
link |
00:58:42.080
Like, what's a loop of tests? Like, from start to finish, what is a loop of testing?
link |
00:58:48.560
Yeah, I mean, I think there's a whole philosophy to testing. There's the unit tests. And you can
link |
00:58:53.200
do that on hardware. You can do that in a small piece of code. You write one function,
link |
00:58:56.960
you should write a test that checks that function's input outputs. You should also write an
link |
00:59:01.520
integration test at the other extreme of running the whole system together, you know,
link |
00:59:06.000
that try to turn on all the different functions that you think are correct. It's much harder to
link |
00:59:12.080
write the specifications for a system level test, especially if that system is as complicated as a
link |
00:59:17.280
humanoid robot. But the philosophy is sort of the same. On the real robot, it's no different,
link |
00:59:24.000
but on a real robot, it's impossible to run the same experiment twice. So if you see a failure,
link |
00:59:32.320
you hope you caught something in the logs that tell you what happened, but you'd probably never
link |
00:59:36.320
be able to run exactly that experiment again. And right now, I think our philosophy is just
link |
00:59:45.520
basically Monte Carlo estimation is just run as many experiments as we can, maybe try to set up
link |
00:59:51.600
the environment to make the things we are worried about happen as often as possible. But really,
link |
01:00:00.160
we're relying on somewhat random search in order to test. Maybe that's all we'll ever be able to,
link |
01:00:05.360
but I think, you know, because there's an argument that the things that'll get you are the things
link |
01:00:11.920
that are really nuanced in the world. And it'd be very hard to, for instance, put back in a simulation.
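A minimal sketch of that Monte Carlo testing philosophy: randomize the conditions you are worried about, run as many simulated episodes as the budget allows, and keep the failures for replay. The scenario fields and the simulate_episode hook are hypothetical stand-ins, not any particular team's harness:

```python
import random

def sample_scenario(rng):
    """Randomize the conditions we are worried about (all fields are made up)."""
    return {
        "terrain_roughness": rng.uniform(0.0, 0.05),   # meters
        "push_force": rng.uniform(0.0, 200.0),         # newtons, applied mid-task
        "sensor_noise": rng.uniform(0.0, 0.02),
        "seed": rng.randrange(2 ** 31),
    }

def run_monte_carlo(simulate_episode, n_trials=1000, seed=0):
    """Run as many randomized episodes as the budget allows; keep failures for replay."""
    rng = random.Random(seed)
    failures = []
    for _ in range(n_trials):
        scenario = sample_scenario(rng)
        ok, log = simulate_episode(scenario)           # hypothetical (success, log) interface
        if not ok:
            failures.append((scenario, log))           # enough information to reproduce
    print(len(failures), "failures out of", n_trials, "trials")
    return failures
```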
link |
01:00:16.720
Yeah, the edge cases, I guess. What was the hardest thing? Like, so you said walking over
link |
01:00:23.520
rough terrain, like that just taking footsteps. I mean, people, it's so dramatic and painful in
link |
01:00:31.280
a certain kind of way to watch these videos from the DRC of robots falling. Yeah, I just so
link |
01:00:38.560
heartbreaking. I don't know. Maybe it's because for me, at least we anthropomorphize the robot.
link |
01:00:44.800
Of course, it's funny for some reason, like, humans falling is funny, for some dark
link |
01:00:52.880
reason. I'm not sure why it is so, but it's also like tragic and painful. And so speaking of which,
link |
01:01:00.080
I mean, what made the robots fall and fail, in your view? So I can tell you exactly what
link |
01:01:06.240
happened. Our team contributed one of those spectacular falls.
link |
01:01:10.480
Every one of those falls has a complicated story. I mean, at one time, the power effectively
link |
01:01:17.760
went out on the robot. Because it had been sitting at the door waiting for a green light to be able
link |
01:01:23.440
to proceed, and its batteries ran down, and therefore it just fell backwards and smashed its head
link |
01:01:28.480
to the ground, and it was hilarious. But it wasn't because of bad software, right? But for ours,
link |
01:01:34.720
so the hardest part of the challenge, the hardest task, in my view, was getting out of the Polaris.
link |
01:01:40.240
It was actually relatively easy to drive the Polaris. Can you tell the story of the car?
link |
01:01:49.920
People should watch this video. I mean, the thing you've come up with is just brilliant. But
link |
01:01:54.240
anyway, sorry, what's, we kind of joke, we call it the big robot little car problem, because
link |
01:01:59.200
somehow the race organizers decided to give us a 400 pound humanoid. And then they also provided the
link |
01:02:05.360
vehicle, which was a little Polaris. And the robot didn't really fit in the car. So you couldn't
link |
01:02:11.360
drive the car with your feet under the steering column. We actually had to straddle the main
link |
01:02:17.440
column of the car and have basically one foot in the passenger seat, one foot in the driver's seat,
link |
01:02:23.920
and then drive with our left hand. But the hard part was we had to then
link |
01:02:30.240
park the car, get out of the car. It didn't have a door, that was okay. But it's just getting up
link |
01:02:36.880
from crouched from sitting when you're in this very constrained environment. First of all,
link |
01:02:42.560
I remember after watching those videos, I was much more cognizant of how hard
link |
01:02:46.960
it is for me to get in and out of the car, and out of the car especially. Like,
link |
01:02:51.600
it's actually a really difficult control problem. Yeah. I'm very cognizant of it when I'm like
link |
01:02:57.600
injured for whatever reason. It's really hard. Yeah. So how did you, how did you approach this
link |
01:03:03.360
problem? So we had a, you know, you think of NASA's operations, and they have these checklists,
link |
01:03:09.440
you know, prelaunch checklists and the like, we weren't far off from that, we had this big
link |
01:03:12.880
checklist. And on the first day of the competition, we were running down our checklist. And one of
link |
01:03:17.680
the things we had to do, we had to turn off the controller, the piece of software that was running
link |
01:03:23.200
that would drive the left foot of the robot in order to accelerate on the gas.
link |
01:03:28.000
And then we turned on our balancing controller. And in the nerves and jitters of the first day of the
link |
01:03:33.120
competition, someone forgot to check that box and turn that controller off. So we used a lot of
link |
01:03:39.360
motion planning to figure out a sort of configuration of the robot that would get it up and over. We
link |
01:03:47.360
relied heavily on our balancing controller. And, and basically there were, when the robot was in
link |
01:03:53.440
one of its most precarious, you know, sort of configurations trying to sneak its big leg out
link |
01:03:59.680
of the, out of the side, the other controller that thought it was still driving told its
link |
01:04:05.440
left foot to go like this. And, and that wasn't good. But, but it turned disastrous for us,
link |
01:04:13.280
because what happened was a little bit of a push there. Actually, we have videos of us, you know,
link |
01:04:20.160
running into the robot with a 10 foot pole, and it kind of will recover. But this is a case where
link |
01:04:26.320
there's no space to recover. So a lot of our secondary balancing mechanisms about,
link |
01:04:30.800
like take a step to recover, they were all disabled because we were in the car and there's no place
link |
01:04:34.320
to step. So we're relying on our just lowest level reflexes. And even then, I think just hitting
link |
01:04:40.400
the foot on the seat on the floor, we probably could have recovered from it. But the thing that
link |
01:04:45.440
was bad that happened is when we did that, and we jostled a little bit, the tailbone of our robot
link |
01:04:52.000
was only a little off the seat, it hit the seat. And the other foot came off the ground just a
link |
01:04:56.880
little bit. And nothing in our plans had ever told us what to do if your butt's on the seat
link |
01:05:03.680
and your feet are in the air. And then the thing is, once you get off the script,
link |
01:05:09.280
things can go very wrong. Because even our state estimation, our system that was trying to
link |
01:05:14.240
collect all the data from the sensors and understand what's happening with the robot,
link |
01:05:18.400
it didn't know about this situation. So it was predicting things that were just wrong.
link |
01:05:22.720
And then we did a violent shake and fell face first out of the vehicle.
link |
01:05:29.120
But like into the destination.
link |
01:05:32.400
That's true. We fell in and we got our point for egress.
link |
01:05:34.960
But so is there any hope for, that's interesting, is there any hope for
link |
01:05:41.840
Atlas to be able to do something when it's just on its butt and feet in the air?
link |
01:05:46.240
Absolutely.
link |
01:05:46.960
So you can, what do you?
link |
01:05:48.320
No, so that's, that is one of the big challenges. And I think it's still true.
link |
01:05:53.440
You know, Boston Dynamics and others, there's this incredible work
link |
01:05:59.440
on legged robots happening around the world.
link |
01:06:01.680
Most of them still are, are very good at the case where you're making contact with the world at
link |
01:06:09.360
your feet. And they typically have point feet, relatively, like balls on their feet, for instance.
link |
01:06:14.400
If that, if those robots get in a situation where the elbow hits the wall or something like this,
link |
01:06:19.760
that's a pretty different situation. Now they have layers of mechanisms that will make,
link |
01:06:24.080
I think the more mature solutions have, have ways in which the controller won't do stupid things.
link |
01:06:30.160
But a human, for instance, is able to leverage incidental contact in order to accomplish a goal.
link |
01:06:35.440
In fact, I might, if you push me, I might actually put my hand out and make a new, brand new contact.
link |
01:06:40.800
The feet of the robot are doing this on quadrupeds, but we mostly in robotics are afraid of contact
link |
01:06:47.840
on the rest of our body, which is crazy. There's this whole field of motion planning,
link |
01:06:54.880
collision free motion planning. And we write very complex algorithms so that the
link |
01:07:00.160
robot can dance around and make sure it doesn't touch the world. So people are just afraid of
link |
01:07:07.040
contact, because contact is seen as difficult. It's still a difficult control problem and sensing
link |
01:07:12.800
problem. Now you're a serious person. I'm a little bit of an idiot and I'm going to ask
link |
01:07:21.760
you some dumb questions. So I do, I do martial arts. So like Jiu Jitsu, I wrestled my whole life.
link |
01:07:30.240
So let me, let me ask the question, you know, like whenever people learn that I do any kind
link |
01:07:36.160
of AI or like I mentioned robots and things like that, they say, when are we going to have robots
link |
01:07:41.760
that, you know, they can win in a wrestling match or in a fight against a human. So we just mentioned
link |
01:07:50.640
sitting on your butt, feet in the air, that's a common position in Jiu Jitsu. When you're on the
link |
01:07:54.800
ground, you or your opponent is down. Like, how difficult do you think the problem is? And when
link |
01:08:04.080
will we have a robot that can defeat a human in a wrestling match? And we're talking about a lot,
link |
01:08:09.840
like, I don't know if you're familiar with wrestling, but essentially,
link |
01:08:13.520
not very, it's basically the art of contact. It's like, it's because you're, you're, you're
link |
01:08:22.400
picking contact points, and then using like leverage, like to off balance to, to trick people,
link |
01:08:31.280
like you make them feel like you're doing one thing, and then they, they change their balance,
link |
01:08:38.720
and then you switch what you're doing, and then results in a throw or whatever. So like, it's
link |
01:08:44.640
basically the art of multiple contacts. So awesome. It's a nice description of it. So there's also an
link |
01:08:51.520
opponent in there, right? So it's very dynamic, right? If you are wrestling a human, and are in
link |
01:09:00.320
a game theoretic situation with a human, that's still hard. But just to speak to the, you know,
link |
01:09:08.160
quickly reasoning about contact part of it, for instance,
link |
01:09:11.200
yeah, maybe even throwing the game theory out of it, almost like,
link |
01:09:15.040
yeah, almost like a non dynamic opponent, right? There's reasons to be optimistic,
link |
01:09:19.920
but I think our best understanding of those problems are still pretty hard. I have been
link |
01:09:26.640
increasingly focused on manipulation, partly because that's a case where the contact has to be
link |
01:09:32.000
much more rich. And there are some really impressive examples of deep learning policies,
link |
01:09:40.880
controllers, that, that can appear to do good things through contact. We've even got new examples of,
link |
01:09:50.240
of, you know, deep learning models of predicting what's going to happen to objects as they go
link |
01:09:54.560
through contact. But I think the challenge you just offered there still eludes us,
link |
01:10:00.800
right? The ability to make a decision based on those models quickly.
link |
01:10:07.360
You know, I have to think though, it's hard for humans to, when you get that complicated. I think
link |
01:10:11.520
probably you had maybe a slow motion version of where you learn the basic skills, and you've
link |
01:10:19.360
probably gotten better at it. And there's, there's much more subtlety, but it might still be hard
link |
01:10:25.760
to actually, you know, really on the fly, take a, you know, model of your humanoid and figure out
link |
01:10:32.560
how to, how to plan the optimal sequence that might be a problem we never solve.
link |
01:10:36.480
Well, the, I mean, one of the most amazing things to me about the, we can talk about
link |
01:10:42.320
martial arts. We could also talk about dancing. It doesn't really matter. To me,
link |
01:10:48.880
I think it's the most interesting study of contact. It's not even the dynamic element of it. It's the,
link |
01:10:53.520
the, like when you get good at it, it's so effortless. Like I can just, I'm very cognizant
link |
01:11:00.800
of the entirety of the learning process being essentially like learning how to move my body
link |
01:11:07.600
in a way that I could throw very large weights around effortlessly. Like, and I can feel the
link |
01:11:17.600
learning, like I'm a huge believer in drilling of techniques. And you can just like feel your,
link |
01:11:23.120
I don't, you're not feeling, you're feeling, um, sorry, you're learning it intellectually a little
link |
01:11:29.040
bit, but a lot of it is the body learning it somehow, like instinctually. And whatever that
link |
01:11:34.640
learning is, that's really, I'm not even sure if that's equivalent to a, like a deep learning,
link |
01:11:42.960
learning a controller. I think it's something more, it feels like there's a lot of distributed
link |
01:11:49.200
learning going on. Yeah, I think there's hierarchy and composition, probably in the systems that
link |
01:11:58.000
we don't capture very well yet. You have layers of control systems. You have reflexes at the
link |
01:12:03.200
bottom layer and you have a, you know, a system that's capable of planning a vacation to
link |
01:12:10.320
some distant country, which is probably, you probably don't have a controller, a policy for
link |
01:12:14.960
every possible destination you'll ever pick, right? But there's something magical in the
link |
01:12:21.600
in between. And how do you go from these low level feedback loops to something that feels
link |
01:12:27.440
like a pretty complex set of outcomes? You know, my guess is, I think, I think there's
link |
01:12:33.840
evidence that you can plan at some of these levels, right? So Josh Tenenbaum just showed
link |
01:12:39.280
it in his talk the other day. He's got a game he likes to talk about. I think he calls it the
link |
01:12:43.840
Pick 3 game or something, where he puts a bunch of clutter down in front of a person and he says,
link |
01:12:51.120
okay, pick three objects and it might be a telephone or a shoe or a Kleenex box or whatever.
link |
01:12:59.680
And apparently you pick three items and then you pick, he says, okay, pick the first one
link |
01:13:03.120
up with your right hand, the second one up with your left hand. Now using those objects,
link |
01:13:07.440
those now as tools pick up the third object, right? So that's down at the level of physics
link |
01:13:15.600
and mechanics and contact mechanics that I think we do learning for, or we do have policies for, we do
link |
01:13:22.080
almost feedback control for. But somehow we're able to still, I mean, I've never picked up a
link |
01:13:27.840
telephone with a shoe and a water bottle before and somehow, and it takes me a little longer to
link |
01:13:32.480
do that the first time. But most of the time we can sort of figure that out. So yeah, I think the
link |
01:13:40.480
amazing thing is this ability to be flexible with our models, plan when we need to, use our
link |
01:13:47.040
well oiled controllers when we don't, when we're in familiar territory. Having models,
link |
01:13:53.840
I think the other thing you just said was something about, I think your awareness of what's
link |
01:13:58.320
happening is even changing as you improve your expertise, right? So maybe you have a very
link |
01:14:03.680
approximate model of the mechanics to begin with. And as you gain expertise, you get a more refined
link |
01:14:10.800
version of that model. You're aware of muscles or balanced components that you just weren't
link |
01:14:17.520
even aware of before. So how do you scaffold that? Yeah, plus the fear of injury, the ambition of goals
link |
01:14:26.560
of excelling and fear of mortality. Let's see what else is in there as the motivations,
link |
01:14:35.680
overinflated ego in the beginning, like, and then a crash of confidence in the middle,
link |
01:14:42.800
all of those seem to be essential for the learning process. And if all that's good,
link |
01:14:48.000
then you're probably optimizing energy efficiency. Yeah, right. So we have to get that right. So
link |
01:14:53.200
you know, there was this idea that you would have robots play soccer better
link |
01:15:01.120
than human players by 2050. That was the goal. Basically, the goal was to beat the world champion
link |
01:15:09.280
team, to be like a World Cup level team. So are we going to see that first?
link |
01:15:15.040
Or a robot, if you're familiar, there's an organization called UFC for mixed martial arts.
link |
01:15:23.280
Are we going to see a world cup championship soccer team that are robots? Or a UFC champion
link |
01:15:30.560
mixed martial artist as a robot? I mean, it's very hard to say one thing is the harder one.
link |
01:15:37.040
Some problems are harder than others. What probably matters is who started the organization.
link |
01:15:44.320
I mean, I think RoboCup has a pretty serious following and there is a history now of people
link |
01:15:49.840
playing that game, learning about that game, building robots to play that game, building
link |
01:15:53.920
increasingly more humanlike robots. It's got momentum. So if you want to have mixed martial arts competitions,
link |
01:16:00.720
you better start your organization now, right? I think almost independent of which problem is
link |
01:16:07.680
technically harder because they're both hard and they're both different. That's a good point.
link |
01:16:11.840
I mean, those videos are just hilarious, like, especially the humanoid robots trying
link |
01:16:19.840
to play soccer. I mean, they're kind of terrible right now. I mean, I guess there is Robo
link |
01:16:24.800
Sumo wrestling. There's like the Robo one competitions where they do have these robots
link |
01:16:30.080
that go on the table and basically fight. So maybe I'm wrong. Maybe first of all,
link |
01:16:34.080
do you have a year in mind for RoboCup? Just from a robotics perspective,
link |
01:16:38.880
seems like a super exciting possibility that like in the physical space, this is what's
link |
01:16:46.720
interesting. I think the world is captivated. I think it's really exciting. It's it inspires
link |
01:16:54.560
just a huge number of people when a machine beats a human at a game that humans are really damn
link |
01:17:02.720
good at. So you're talking about chess and go, but that's in the world of digital. I don't think
link |
01:17:11.680
machines have beat humans at a game in the physical space yet, but that would be just...
link |
01:17:17.520
You have to make the rules very carefully, right? I mean, if Atlas kicked me in the shins,
link |
01:17:22.160
I'm down and game over. So it's very subtle on what's fair.
link |
01:17:31.040
I think the fighting one is a weird one. Yeah, because you're talking about a machine that's
link |
01:17:35.200
much stronger than you. But yeah, in terms of soccer, basketball, all those kinds.
link |
01:17:39.440
Even soccer, right? I mean, as soon as there's contact or whatever. And there's,
link |
01:17:44.080
there are some things that the robot will do better. I think if you really set yourself up to
link |
01:17:50.160
try to see could robots win the game of soccer as the rules were written. The right thing for
link |
01:17:56.240
the robot to do is to play very differently than a human would play. You're not going to get the
link |
01:18:01.840
perfect soccer player robot. You're going to get something that exploits the rules, exploits its
link |
01:18:08.640
super actuators, its super low latency feedback loops, or whatever. And it's going to play the
link |
01:18:15.040
game differently than you want it to play. And I bet there's ways, I bet there's loopholes.
link |
01:18:21.200
We saw that in the DARPA challenge that it's very hard to write a set of rules that someone can't
link |
01:18:28.800
find a way to exploit. Let me ask another ridiculous question. I think this might be
link |
01:18:36.560
the last ridiculous question, but I doubt it. I aspire to ask as many ridiculous questions
link |
01:18:44.400
of a brilliant MIT professor. Okay. I don't know if you've seen Black Mirror.
link |
01:18:53.440
It's funny. I never watched the episode. I know when it happened though, because I gave a talk
link |
01:19:00.400
to some MIT faculty one day, on a Monday or whatever, I'm assuming, and I was telling them
link |
01:19:06.080
about the state of robotics. And I showed some video from Boston Dynamics of the quadruped
link |
01:19:11.440
spot at the time. It was the early version of spot. And there was a look of horror that went
link |
01:19:17.680
across the room. And I said, what, you know, I've shown videos like this a lot of times. What
link |
01:19:23.280
happened? And it turns out that, yeah, this Black Mirror episode had changed
link |
01:19:28.160
the way people watched the videos I was putting out, the way they see these kinds of
link |
01:19:34.160
robots. So I talked to so many people who are just terrified because of that episode probably
link |
01:19:39.120
of these kinds of robots. They, I almost want to say that they almost kind of, like, enjoy being
link |
01:19:43.600
terrified. I don't even know what it is about human psychology that kind of imagines doomsday,
link |
01:19:49.120
the destruction of the universe or our society and kind of like enjoy being afraid. I don't want
link |
01:19:57.520
to simplify it, but it feels like they talk about it so often. It almost, there does seem to be an
link |
01:20:04.000
addictive quality to it. I talked to a guy, a guy named Joe Rogan, who's kind of the flag bearer
link |
01:20:11.440
for being terrified of these robots. So, two questions. One, do you have an understanding
link |
01:20:18.480
of why people are afraid of robots? And the second question is, in black mirror, just to tell you
link |
01:20:25.440
the episode, I don't even remember it that much anymore, but these robots, I think they can shoot
link |
01:20:30.960
like a pellet or something. They basically, it's basically a spot with a gun. And how far are we
link |
01:20:38.560
away from having robots that go rogue like that, you know, basically spot that goes rogue for some
link |
01:20:47.440
reason and somehow finds a gun. Right. So I mean, I'm not a psychologist. I think I don't know exactly
link |
01:20:57.920
why people react the way they do. I think, I think we have to be careful about the way robots influence
link |
01:21:08.480
our society and the like. I think that's something that's a responsibility that roboticists need to
link |
01:21:13.280
embrace. I don't think robots are going to come after me with a kitchen knife or a
link |
01:21:18.720
pellet gun right away, I mean, unless they were programmed in such a way. But I used to joke with
link |
01:21:24.880
Atlas that all I had to do was run for five minutes and its battery would run out. But actually,
link |
01:21:31.280
they've got a very big battery in there by the end. So it was over an hour.
link |
01:21:37.040
I think the fear is a bit cultural though, because I mean, you notice that like, I think
link |
01:21:43.440
at my age, in the US, we grew up watching Terminator. If I'd grown up at the same time in
link |
01:21:49.840
Japan, I probably would have been watching Astro Boy. And there's a very different reaction to
link |
01:21:54.800
robots in different countries, right? So I don't know if it's a human innate fear of
link |
01:22:01.280
metal marvels, or if it's something that we've done to ourselves with our sci fi.
link |
01:22:09.600
Yeah, the stories we tell ourselves through, through movies, through just
link |
01:22:15.360
through popular media. But if, if I were to tell, you know, if, if you were my therapist,
link |
01:22:20.880
and I said, I'm really terrified that we're going to have these robots
link |
01:22:27.200
very soon that will hurt us. Like, how do you approach making me feel better?
link |
01:22:36.400
Like, why shouldn't people be afraid? There's a, I think there's a video that went viral
link |
01:22:43.840
recently. Everything with Spot and Boston Dynamics goes viral in general.
link |
01:22:48.160
But usually it's like really cool stuff, like they're doing flips and stuff, or like sad stuff.
link |
01:22:54.400
Atlas being hit with a broomstick or something like that. But there's a video where I think
link |
01:23:00.880
one of the new production Spot robots, which are awesome. It was, like, patrolling
link |
01:23:05.920
somewhere in like, in some country. And like, people immediately were like, saying, like,
link |
01:23:11.760
this is like the dystopian future, like the surveillance state. For some reason, like,
link |
01:23:17.200
you could just have a camera, but something about Spot being able to walk on four feet
link |
01:23:23.280
just really terrified people. So what do you say to those people?
link |
01:23:30.880
I think there is a legitimate fear there, because so much of our future is uncertain.
link |
01:23:37.680
But at the same time, technically speaking, it seems like we're not there yet. So what do you
link |
01:23:42.240
say? I mean, I think technology is complicated. It can be used in many ways. I think there are purely
link |
01:23:51.680
software attacks somebody could use to do great damage. Maybe they have already. You know, I think
link |
01:24:01.600
wheeled robots could be used in bad ways too, drones, right? I don't think that... Let's see. I don't want
link |
01:24:16.800
to be building technology just because I'm compelled to build technology, and not think
link |
01:24:22.240
about it. But I would consider myself a technological optimist, I guess, in the sense that
link |
01:24:29.280
I think we should continue to create and evolve, and our world will change. And we will introduce
link |
01:24:38.160
new challenges, we'll screw something up maybe. But I think also we'll invent ourselves out of
link |
01:24:45.280
those challenges and life will go on. So it's interesting because you didn't mention, like,
link |
01:24:50.560
this is technically too hard. I don't think robots are... I think people perceive a robot that looks
link |
01:24:57.040
like an animal as maybe having a level of self awareness or consciousness or something that
link |
01:25:02.720
they don't have yet, right? So it's not, I think our ability to anthropomorphize those robots is
link |
01:25:10.480
probably, we're assuming that they have a level of intelligence that they don't yet have. And that
link |
01:25:16.960
might be part of the fear. So in that sense, it's too hard. But, you know, there are many scary
link |
01:25:23.600
things in the world, right? So I think we're right to ask those questions. We're right to
link |
01:25:30.800
think about the implications of our work.
link |
01:25:34.000
Right. In the short term, as we're working on it for sure, is there something long term
link |
01:25:41.280
that scares you about our future with AI and robots? A lot of folks from Elon Musk to Sam
link |
01:25:50.880
Harris to a lot of folks talk about the, you know, existential threats about artificial
link |
01:25:57.120
intelligence. Oftentimes robots kind of inspire that the most because of the anthropomorphism.
link |
01:26:05.520
Do you have any fears?
link |
01:26:07.040
It's an important question. I actually, I think I like Rod Brooks answer maybe the best on this,
link |
01:26:16.320
I think, and it's not the only answer he's given over the years, but maybe one of my favorites is,
link |
01:26:23.040
he says, it's not going to be, he's got a book Flesh and Machines, I believe.
link |
01:26:29.200
It's not going to be the robots versus the people. We're all going to be robot people
link |
01:26:34.080
because, you know, we already have smartphones, some of us have serious technology implanted
link |
01:26:40.960
in our bodies already, whether we have a hearing aid or a pacemaker or anything like this. People
link |
01:26:48.400
with amputations might have prosthetics. That's a trend, I think, that is likely to continue. I
link |
01:26:57.280
mean, this is now wild speculation. But I mean, when do we get to cognitive implants and the like?
link |
01:27:06.080
And yeah, with Neuralink, brain computer interfaces. That's interesting. So there's a dance
link |
01:27:11.120
between humans and robots that's going to be, it's going to be impossible to be scared of
link |
01:27:20.560
the other out there, the robot, because the robot will be part of us, essentially. It'd be so
link |
01:27:26.400
intricately sort of part of our society that it might not even be implanted part of us,
link |
01:27:32.880
but just it's so much a part of our society. So in that sense, the smartphone is already the
link |
01:27:38.800
robot we should be afraid of. Yeah. I mean, yeah. And then all the usual fears arise,
link |
01:27:46.160
the misinformation, the manipulation, all those kinds of things that,
link |
01:27:55.360
the problems are all the same. They're human problems, essentially, it feels like.
link |
01:28:00.560
Yeah. I mean, I think the way we interact with each other online is changing the value we put on
link |
01:28:07.600
personal interaction. And that's a crazy big change that's going to happen and rip through
link |
01:28:11.760
our, has already been ripping through our society, right? And that has implications that are
link |
01:28:17.440
massive. I don't know if they should be scared of it or go with the flow, but
link |
01:28:22.480
I don't see some battle lines between humans and robots being the first thing to worry about.
link |
01:28:29.440
I mean, I do want to just, as a kind of comment, maybe you can comment about your just feelings
link |
01:28:35.360
about Boston Dynamics in general, but you know, I love science. I love engineering. I think there's
link |
01:28:40.640
so many beautiful ideas in it. And when I look at Boston Dynamics or legged robots in general,
link |
01:28:47.520
I think they inspire curiosity in people and, in general, excitement about engineering
link |
01:28:57.360
more than almost anything else in popular culture. And I think that's such an exciting
link |
01:29:03.680
like responsibility and possibility for robotics. And Boston Dynamics is riding that wave pretty
link |
01:29:09.840
damn well. Like they found it, they've discovered that hunger and curiosity in the people and they're
link |
01:29:15.680
doing magic with it. I don't care if, I mean, I guess the company has to make money,
link |
01:29:20.560
right? But they're already doing incredible work and inspiring the world about technology.
link |
01:29:26.800
I mean, do you have thoughts about Boston Dynamics and maybe others, your own work
link |
01:29:33.760
in robotics and inspiring the world in that way?
link |
01:29:37.600
I completely agree. I think Boston Dynamics is absolutely awesome. I think I show my kids those
link |
01:29:45.120
videos, you know, and the best thing that happens is sometimes they've already seen them, you know,
link |
01:29:49.520
right. I think, I just think it's a pinnacle of success in robotics that is just one of the
link |
01:29:58.240
best things that's happened. I absolutely completely agree. One of the heartbreaking things to me
link |
01:30:05.120
is how many robotics companies fail. How hard it is to make money with a robotics company.
link |
01:30:12.320
Like iRobot went through hell just to arrive at a Roomba to figure out one product. And then
link |
01:30:20.000
there's so many home robotics companies, like Jibo and Anki (Anki, the cutest toy, a great
link |
01:30:32.080
robot I thought) went down. I'm forgetting a bunch of them, but a bunch of robotics companies failed,
link |
01:30:37.840
Rod's company, Rethink Robotics. Do you have anything hopeful to say about the possibility
link |
01:30:48.240
of making money with robots? Oh, I think you can't just look at the failures. I mean,
link |
01:30:54.320
Boston Dynamics is a success. There's lots of companies that are still doing amazingly good
link |
01:30:59.200
work in robotics. I mean, this is the capitalist ecology or something, right? I think you have
link |
01:31:05.920
many companies. You have many startups and they push each other forward and many of them fail and
link |
01:31:11.360
some of them get through and that's sort of the natural way of those things. I don't know that
link |
01:31:18.720
robotics is really that much worse. I feel the pain that you feel, too. Every time I read one of
link |
01:31:23.920
these, sometimes it's friends, and I definitely wish it had gone differently. But I think it's
link |
01:31:33.920
healthy and good to have bursts of ideas, bursts of activity. If they are really aggressive,
link |
01:31:41.680
they should fail sometimes. Certainly, that's the research mantra, right? If you're
link |
01:31:48.080
succeeding at every problem you attempt, then you're not choosing aggressively enough.
link |
01:31:53.200
Is it exciting to you, the new spot? Oh, it's so good.
link |
01:31:57.440
When are you getting them as a pet? Yeah, I mean, I have to dig up 75k right now. It's so cool
link |
01:32:04.560
that there's a price tag. You can go and actually buy it. I have a Skydio R1. Love it.
link |
01:32:13.440
No, I would absolutely be a customer. I wonder what your kids would think about it. I actually,
link |
01:32:20.960
Zach from Boston Dynamics let my kid drive it in one of their demos one time,
link |
01:32:27.120
and that was just so good. I'll forever be grateful for that.
link |
01:32:34.160
And there's something magical about the anthropomorphization of that arm. It has another
link |
01:32:39.680
level of human connection. I'm not sure we understand from a control aspect the value of
link |
01:32:48.080
anthropomorphization. I think that's an understudied and under understood
link |
01:32:55.680
engineering problem. Psychologists have been studying it. I think it's, in part... manipulating
link |
01:33:02.320
our minds to believe things is valuable engineering. This is another degree of
link |
01:33:08.400
freedom that can be controlled. I like that. Yeah, I think that's right. I think,
link |
01:33:11.600
there's something that humans seem to do, or maybe this is my dangerous introspection:
link |
01:33:20.320
I think we are able to make very simple models that assume a lot about the world very quickly.
link |
01:33:27.760
And then it takes us a lot more time, like your wrestling. You probably thought you knew what
link |
01:33:32.640
you were doing with wrestling, and you were fairly functional as a complete wrestler,
link |
01:33:36.800
and then you slowly got more expertise. So maybe it's natural that our first
link |
01:33:44.800
level of defense against seeing a new robot is to think of it in our existing models of how
link |
01:33:50.480
humans and animals behave. And it's just, as you spend more time with it,
link |
01:33:54.880
then you'll develop more sophisticated models that will appreciate the differences.
link |
01:33:59.120
Exactly. Can you say, what does it take to control a robot? Like, what is the control
link |
01:34:06.960
problem of a robot? And in general, what is a robot in your view? Like, how do you think of this
link |
01:34:13.200
system? What is a robot? What is a robot? I think, I told you ridiculous questions.
link |
01:34:19.920
No, no, it's good. I mean, there's standard definitions of combining computation with
link |
01:34:26.320
some ability to do mechanical work. I think that gets us pretty close. But I think
link |
01:34:32.720
robotics has this problem that once things really work, we don't call them robots anymore.
link |
01:34:40.320
My dishwasher at home is pretty sophisticated. Beautiful mechanisms. There's actually a pretty
link |
01:34:46.240
good computer, probably a couple of chips in there doing amazing things. We don't think of
link |
01:34:49.920
that as a robot anymore, which isn't fair. Because then, roughly, it means that robotics
link |
01:34:55.280
always has to solve the next problem and doesn't get to celebrate its past successes.
link |
01:35:00.480
I mean, even factory floor robots are super successful. They're amazing. But those are not the
link |
01:35:09.040
ones. I mean, people think of them as robots, but if you ask what are the successes of
link |
01:35:13.280
robotics, somehow it doesn't come to your mind immediately. So the definition of robot is a
link |
01:35:20.080
system with some level of automation that fails frequently. Something like: it's the computation
link |
01:35:25.840
plus mechanical work and unsolved problems. Yeah. So from a perspective of control and mechanics,
link |
01:35:36.880
dynamics, what is a robot? So there are many different types of robots. The control that you
link |
01:35:43.120
need for a Jibo robot, some robot that's sitting on your countertop and interacting with you,
link |
01:35:52.240
but not touching you, for instance, is very different than what you need for an autonomous car
link |
01:35:56.640
or an autonomous drone. It's very different than what you need for a robot that's going to walk or
link |
01:36:02.480
pick things up with its hands. My passion has always been for the places where you're
link |
01:36:09.520
interacting or doing more dynamic interactions with the world. So walking, now manipulation.
link |
01:36:18.560
And the control problems there are beautiful. I think contact is one thing that differentiates
link |
01:36:25.600
them from many of the control problems we've solved classically. The modern control grew up
link |
01:36:31.440
stabilizing fighter jets that were passively unstable. And there's amazing success stories
link |
01:36:36.240
from control all over the place. Power grid, I mean, there's all kinds of, it's everywhere
link |
01:36:43.360
that we don't even realize, just like AI is now. You mentioned contact. Like, what's contact?
link |
01:36:51.440
So an airplane is an extremely complex system or a spacecraft landing or whatever. But at least
link |
01:36:57.840
it has the luxury that things change relatively continuously. That's an oversimplification.
link |
01:37:04.800
But if I make a small change in the command I send to my actuator, then the path that the robot will
link |
01:37:11.920
take tends to change only by a small amount. And there's a feedback mechanism here.
link |
01:37:18.640
That's what we're talking about. And there's a feedback mechanism. And thinking about this as
link |
01:37:22.880
locally like a linear system, for instance, I can use more linear algebra tools to study
link |
01:37:29.600
systems like that, generalizations of linear algebra to these smooth systems.
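A minimal sketch of that locally linear view, assuming some smooth, contact-free dynamics x_next = f(x, u): finite-difference Jacobians give the A and B matrices that classical linear tools operate on. The dynamics function f here is a hypothetical stand-in.

```python
import numpy as np

def linearize(f, x0, u0, eps=1e-6):
    """Finite-difference Jacobians A = df/dx and B = df/du of x_next = f(x, u) at (x0, u0)."""
    x0, u0 = np.asarray(x0, float), np.asarray(u0, float)
    f0 = np.asarray(f(x0, u0), float)
    A = np.zeros((f0.size, x0.size))
    B = np.zeros((f0.size, u0.size))
    for i in range(x0.size):
        dx = np.zeros_like(x0)
        dx[i] = eps
        A[:, i] = (np.asarray(f(x0 + dx, u0)) - f0) / eps   # small state change, small response
    for j in range(u0.size):
        du = np.zeros_like(u0)
        du[j] = eps
        B[:, j] = (np.asarray(f(x0, u0 + du)) - f0) / eps   # small input change, small response
    return A, B
```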
link |
01:37:36.240
What is contact? The robot has something very discontinuous that happens when it
link |
01:37:42.400
makes or breaks, when it starts touching the world. And even the way it touches or the order of
link |
01:37:47.360
contacts can change the outcome in potentially unpredictable ways. Not unpredictable, but
link |
01:37:54.720
complex ways. I do think there's a little bit of... A lot of people will say that contact is hard
link |
01:38:03.280
in robotics, even to simulate. And I think there's a little bit of a... There's truth to that, but
link |
01:38:10.000
maybe a misunderstanding around that. So what is limiting is that when we think about our robots
link |
01:38:19.440
and we write our simulators, we often make an assumption that objects are rigid. And when it
link |
01:38:27.200
comes down to it, their mass stays in a constant configuration relative to
link |
01:38:33.040
itself. And that leads to some paradoxes when you go to try to talk about rigid body mechanics
link |
01:38:41.520
and contact. And so, for instance, if I have a three legged stool, imagine each leg comes
link |
01:38:50.320
to a point at the end. So it's only touching the world at a point. If I draw my physics...
link |
01:38:56.800
My high school physics diagram of this system, then there's a couple of things that I'm given
link |
01:39:02.160
by elementary physics. I know if the system... If the table is at rest, if it's not moving,
link |
01:39:07.280
zero velocities. That means that the normal force... All the forces are in balance. So the
link |
01:39:14.640
force of gravity is being countered by the forces that the ground is pushing on my table legs.
link |
01:39:21.200
I also know, since it's not rotating, that the moments have to balance. And since it's a three
link |
01:39:28.160
dimensional table, it could fall in any direction, it actually tells me uniquely what those three
link |
01:39:34.000
normal forces have to be. If I have four legs on my table, four legged table,
link |
01:39:41.760
and they were perfectly machined to be exactly the same height, and they're set down and
link |
01:39:46.080
the table's not moving, then the basic conservation laws don't tell me... There are many solutions for
link |
01:39:53.520
the forces that the ground could be putting on my legs that would still result in the table not
link |
01:39:58.560
moving. Now, that seems fine, I could just pick one. But it gets funny now, because if
link |
01:40:05.120
you think about friction, what we think about with friction is our standard model says the amount
link |
01:40:12.720
of force that the table will push back if I were to now try to push my table sideways,
link |
01:40:17.920
I guess I have a table here, is proportional to the normal force. So if I have... If I'm
link |
01:40:25.280
barely touching and I push, I'll slide, but if I'm pressing down more and I push, I will slide less.
link |
01:40:30.320
It's called Coulomb friction; that's our standard model. Now, if you don't know what the normal
link |
01:40:35.040
force is on the four legs and you push the table, then you don't know what the friction forces are
link |
01:40:41.760
going to be. And so you can't actually tell, the laws just aren't explicit yet about which
link |
01:40:48.400
way the table's going to go. It could veer off to the left, it could veer off to the right,
link |
01:40:52.640
it could go straight. So the rigid body assumption of contact leaves us with some paradoxes,
link |
01:40:59.760
which are annoying for writing simulators and for writing controllers.
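A small worked version of the stool-versus-table example in NumPy (the geometry and weight are made up): with the object at rest, force balance plus moment balance about the two horizontal axes give three equations, so three point feet yield unique normal forces while four leave a whole family of solutions, and since Coulomb friction is proportional to those normals, the friction forces are indeterminate too.

```python
import numpy as np

def balance_matrix(leg_xy):
    """Rows: vertical force balance and moment balance about the two horizontal axes."""
    xs, ys = leg_xy[:, 0], leg_xy[:, 1]
    return np.vstack([np.ones_like(xs), xs, ys])

W = 100.0                                   # table weight (N), acting at the origin
rhs = np.array([W, 0.0, 0.0])

three = np.array([[1.0, 0.0], [-0.5, 0.87], [-0.5, -0.87]])              # triangle of legs
four = np.array([[1.0, 1.0], [1.0, -1.0], [-1.0, 1.0], [-1.0, -1.0]])    # square of legs

A3 = balance_matrix(three)                  # 3 equations, 3 unknowns: unique normals
print(np.linalg.solve(A3, rhs))             # each leg carries about W/3

A4 = balance_matrix(four)                   # 3 equations, 4 unknowns: underdetermined
print("rank", np.linalg.matrix_rank(A4), "vs", A4.shape[1], "unknowns")
null_dir = np.linalg.svd(A4)[2][-1]         # a direction in the null space
print("any solution plus t *", np.round(null_dir, 3), "also balances the table")
```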
link |
01:41:06.080
We still do that sometimes because soft contact is potentially harder numerically or whatever,
link |
01:41:13.040
and the best simulators do both or do some combination of the two. But anyways, because
link |
01:41:17.920
of these kind of paradoxes, there's all kinds of paradoxes in contact, mostly due to these
link |
01:41:23.520
rigid body assumptions. It becomes very hard to write the same kind of control laws that we've
link |
01:41:30.000
been able to be successful with for fighter jets. We haven't been as successful writing those
link |
01:41:35.280
controllers for manipulation. And so you don't know what's going to happen at the point of
link |
01:41:40.080
contact, at the moment of contact. There are situations absolutely where you... Where our
link |
01:41:44.240
laws don't tell us. So the standard approach, that's okay. I mean, instead of having a differential
link |
01:41:50.320
equation, you end up with a differential inclusion, it's called. It's a set valued
link |
01:41:55.440
equation. It says that I'm in this configuration, I have these forces applied on me,
link |
01:42:01.520
and there's a set of things that could happen.
link |
01:42:05.280
And those aren't continuous, I mean, so when you're saying non smooth, they're not only not
link |
01:42:12.960
smooth, but this is discontinuous. The non smooth comes in when I make or break a new contact first,
link |
01:42:20.320
or when I transition from stick to slip. So you typically have static friction,
link |
01:42:25.200
and then you'll start sliding, and that'll be a discontinuous change in velocity, for instance.
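A minimal sketch of that stick-to-slip discontinuity for a single block pushed along the ground under Coulomb friction; the simple time stepping and the parameters are illustrative only, not any robot's contact model.

```python
import numpy as np

def step(v, f_applied, m=1.0, mu=0.5, g=9.81, dt=1e-3, v_eps=1e-9):
    """Advance the block's velocity by one time step under Coulomb friction."""
    f_max = mu * m * g                         # friction force limit
    if abs(v) < v_eps:                         # stick mode: friction can cancel the push
        if abs(f_applied) <= f_max:
            return 0.0                         # stays stuck
        f_fric = -np.sign(f_applied) * f_max   # breaks loose, slipping begins
    else:
        f_fric = -np.sign(v) * f_max           # slip mode: friction opposes the motion
    v_new = v + dt * (f_applied + f_fric) / m
    if v * v_new < 0.0:                        # crossed zero velocity this step
        return 0.0                             # discontinuous switch back to sticking
    return v_new

v = 0.0
for k in range(2000):
    push = 6.0 if k < 1000 else 2.0            # a strong push, then a weak one
    v = step(v, push)
print(v)  # the block slides under the strong push, then decelerates and sticks at 0.0
```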
link |
01:42:31.200
Especially if you come to rest or... That's so fascinating.
link |
01:42:34.400
Okay, so what do you do? Sorry, I interrupted you. What's the hope under so much uncertainty about
link |
01:42:44.320
what's going to happen? What are you supposed to do? I mean, control has an answer for this.
link |
01:42:48.400
Robust control is one approach, but roughly, you can write controllers which try to still
link |
01:42:54.080
perform the right task, despite all the things that could possibly happen. The world might want
link |
01:42:58.880
the table to go this way or that way, but if I write a controller that pushes a little bit more
link |
01:43:03.520
here and a little bit there, I can certainly make the table go in the direction I want.
link |
01:43:07.840
It just puts a little bit more of a burden on the control system. And these discontinuities
link |
01:43:13.760
do change the control system, because the way we write it down right now,
link |
01:43:21.120
every different control configuration, including sticking or sliding or parts of my body that
link |
01:43:27.680
are in contact or not, looks like a different system. And I think of them, I reason about them
link |
01:43:32.880
separately or differently, and the combinatorics of that blow up. So I just don't have enough time
link |
01:43:39.760
to compute all the possible contact configurations of my humanoid. Interestingly, I mean, I'm a
link |
01:43:48.080
humanoid. I have lots of degrees of freedom, lots of joints. I've only been around for a
link |
01:43:54.160
handful of years, it's getting up there, but I haven't had time in my life to visit all of the
link |
01:44:00.000
states in my system, certainly all the contact configurations. So if step one is to consider
link |
01:44:08.240
every possible contact configuration that I'll ever be in, that's probably not a problem I need to
link |
01:44:14.960
solve, right? Just as a small tangent, what's the contact configuration? Just so we can
link |
01:44:22.640
enumerate what are we talking about? How many are there? The simplest example maybe would be
link |
01:44:29.920
imagine a robot with a flat foot. And we think about the phases of gait, where the heel strikes,
link |
01:44:37.280
and then the front toe strikes, and then you can heel up, toe off. Those are each different
link |
01:44:44.960
contact configurations. I only had two different contacts, but I ended up with four different
link |
01:44:49.840
contact configurations. Now, of course, my robot might actually have bumps on it or other things,
link |
01:44:57.840
so it could be much more subtle than that, right? But it's just even with one sort of box
link |
01:45:03.040
interacting with the ground already in the plane has that many, right? And if I was just even a
link |
01:45:08.000
3D foot, then probably my left toe might touch just before my right toe and things get subtle.
link |
01:45:13.360
Now, if I'm a dexterous hand, and I go to talk about just grabbing a water bottle,
link |
01:45:22.240
if I have to enumerate every possible order that my hand came into contact with the bottle,
link |
01:45:30.800
then I'm dead in the water. We were able to get away with that in walking,
link |
01:45:36.960
because we mostly touch the ground with a small number of points, for instance,
link |
01:45:40.160
and we haven't been able to get dexterous hands that way.
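A small sketch of why that enumeration blows up: with each candidate contact point either active or inactive (ignoring stick versus slip, which only makes it worse), k points give 2^k contact configurations, and the possible orderings of first contact grow factorially. The list of hand contact points here is made up.

```python
from itertools import product
from math import factorial

def contact_modes(points):
    """Every on/off assignment of the given candidate contact points."""
    return [dict(zip(points, combo)) for combo in product([False, True], repeat=len(points))]

foot = ["heel", "toe"]
print(len(contact_modes(foot)), "contact configurations for a flat foot:", contact_modes(foot))

hand = ["pad_%d" % i for i in range(16)]   # a made-up set of candidate hand contact points
print(2 ** len(hand), "on/off configurations for the hand")
print(factorial(len(hand)), "possible orderings of first contact")
```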
link |
01:45:45.040
So, you've mentioned that people think that contact is really hard, and that's the reason
link |
01:45:55.600
that the robotic manipulation problem is really hard. Are there any flaws in that thinking?
link |
01:46:05.520
So, I think simulating contact is one aspect, and people often say that one of the reasons
link |
01:46:14.080
that we have a limit in robotics is because we do not simulate contact accurately in our
link |
01:46:19.200
simulators. And I think the extent to which that's true is partly because our
link |
01:46:27.200
simulators, we haven't got mature enough simulators. There are some things that are still hard,
link |
01:46:33.120
that we should change. But we actually know what the governing equations are. They have
link |
01:46:41.760
some foibles like this indeterminacy, but we should be able to simulate them accurately.
link |
01:46:48.480
We have an incredible open source community in robotics, but it actually just takes a professional
link |
01:46:53.040
engineering team a lot of work to write a very good simulator like that.
link |
01:46:57.440
Now, I believe you've written Drake?
link |
01:47:03.120
There's a team of people. I certainly spent a lot of hours on it myself.
link |
01:47:07.840
What is Drake? What does it take to create a simulation environment for the kind of
link |
01:47:17.200
difficult control problems we're talking about?
link |
01:47:20.560
Right. So, Drake is the simulator that I've been working on. There are other good simulators out
link |
01:47:25.840
there. I don't like to think of Drake as just a simulator because we write our controllers in
link |
01:47:31.360
Drake. We write our perception systems a little bit in Drake, but we write all of our low level
link |
01:47:36.560
control and even planning and optimization capabilities. Drake is three things roughly.
link |
01:47:45.840
It's an optimization library, which provides a layer of abstraction in C++ and Python
link |
01:47:53.600
for commercial solvers. You can write linear programs, quadratic programs,
link |
01:48:00.640
semidefinite programs, sums-of-squares programs, the ones we've used, mixed integer programs,
link |
01:48:05.600
and it will do the work to curate those and send them to whatever the right solver is,
link |
01:48:09.680
for instance, and it provides a level of abstraction.
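A minimal sketch of what that layer of abstraction looks like from Python, assuming a recent Drake release (the import path for the solver module has moved between versions):

```python
from pydrake.solvers import MathematicalProgram, Solve  # older releases: pydrake.solvers.mathematicalprogram

prog = MathematicalProgram()
x = prog.NewContinuousVariables(2, "x")             # decision variables
prog.AddLinearConstraint(x[0] + x[1] == 1)          # a linear equality constraint
prog.AddBoundingBoxConstraint(0, 10, x)             # simple bounds
prog.AddQuadraticCost((x[0] - 2) ** 2 + x[1] ** 2)  # a quadratic objective

result = Solve(prog)  # Drake routes the program to an appropriate solver
print(result.is_success(), result.GetSolution(x))
```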
link |
01:48:16.240
The second thing is a system modeling language, a bit like LabVIEW or Simulink, where you can make block diagrams out of complex
link |
01:48:23.440
systems, or it's like ROS in that sense, where you might have lots of ROS nodes that are each
link |
01:48:29.680
doing some part of your system. But to contrast it with ROS, if you write a Drake
link |
01:48:37.200
system, then it asks you to describe a little bit more about the system. If you have any
link |
01:48:44.400
state, for instance, in the system, there are any variables that are going to persist, you have to
link |
01:48:47.840
declare them, parameters can be declared and the like, but the advantage of doing that is that you
link |
01:48:53.680
can, if you like, run things all on one process, but you can also do control design against it,
link |
01:49:00.080
you can do, I mean, simple things like rewinding and playing back your simulations. For instance,
link |
01:49:07.760
these things, you get some rewards for spending a little bit more upfront cost in describing each
link |
01:49:12.000
system.
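A rough sketch of what declaring that state looks like in pydrake's systems framework (names and signatures vary a bit across Drake versions; this is illustrative, not the production code):

```python
from pydrake.systems.framework import LeafSystem, DiagramBuilder, BasicVector
from pydrake.systems.analysis import Simulator

class FirstOrderDecay(LeafSystem):
    """A tiny system with one declared continuous state, xdot = -x."""
    def __init__(self):
        super().__init__()
        self.DeclareContinuousState(1)  # state must be declared up front
        self.DeclareVectorOutputPort("y", BasicVector(1), self.CalcOutput)

    def DoCalcTimeDerivatives(self, context, derivatives):
        x = context.get_continuous_state_vector().GetAtIndex(0)
        derivatives.get_mutable_vector().SetAtIndex(0, -x)

    def CalcOutput(self, context, output):
        output.SetAtIndex(0, context.get_continuous_state_vector().GetAtIndex(0))

builder = DiagramBuilder()
builder.AddSystem(FirstOrderDecay())   # block diagrams get wired up here
diagram = builder.Build()
simulator = Simulator(diagram)
simulator.get_mutable_context().SetContinuousState([1.0])
simulator.AdvanceTo(1.0)               # deterministic, replayable simulation
```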
link |
01:49:21.200
And I was inspired to do that because I think the complexity of Atlas, for instance, is just so great. And I think, although, I mean, ROS has been incredible,
link |
01:49:26.080
absolute huge fan of what it's done for the robotics community, but the ability to rapidly
link |
01:49:34.080
put different pieces together and have a functioning thing is very good. But I do think that it's
link |
01:49:41.040
hard to think clearly about a bag of disparate parts, Mr. Potato Head kind of software stack.
link |
01:49:48.080
And if you can, you know, ask a little bit more out of each of those parts, then you can understand
link |
01:49:54.880
there the way they work better, you can try to verify them and the like, or you can do learning
link |
01:50:00.960
against them. And then, I said the first two things that Drake
link |
01:50:06.240
is, but the last thing is that there is a set of multi body equations, rigid body equations,
link |
01:50:12.400
that is trying to provide a system that simulates physics. And we also have renderers and other
link |
01:50:19.680
things, but I think the physics component of Drake is special in the sense that we have done
link |
01:50:26.800
an excessive amount of engineering to make sure that we've written the equations correctly.
link |
01:50:31.440
Every possible tumbling satellite or spinning top or anything that we could possibly write as a test
link |
01:50:35.760
is tested. We are making some, you know, I think fundamental improvements on the way you simulate
link |
01:50:42.880
contact. Just what does it take to simulate contact? I mean, it just seems, I mean, there's
link |
01:50:51.200
something just beautiful the way you're like explaining contact and you're like tapping
link |
01:50:55.920
your fingers on the table while you're doing it, just easily,
link |
01:51:02.080
like it was helping you think, I guess. You have this
link |
01:51:11.360
awesome demo of loading or unloading a dishwasher, just picking up a plate, grasping it like for the
link |
01:51:22.880
first time. That just seems like so difficult. How do you simulate any of that? So it was really
link |
01:51:34.480
interesting that what happened was that we started getting more professional about our software
link |
01:51:39.840
development during the DARPA Robotics Challenge. I learned the value of software engineering and
link |
01:51:46.000
how to bridle complexity. I guess that's what I want to somehow fight against and bring some
link |
01:51:53.120
of the clear thinking of controls into these complex systems we're building for robots.
link |
01:52:00.400
Shortly after the DARPA Robotics Challenge, Toyota opened a research institute,
link |
01:52:04.480
TRI, Toyota Research Institute. There are three locations; one of them is
link |
01:52:11.120
just down the street from MIT, and I helped ramp that up as a part of the end of my sabbatical,
link |
01:52:20.320
I guess. So TRI, the TRI robotics effort, has made this investment in simulation
link |
01:52:31.600
in Drake, and Michael Sherman leads a team there of just absolutely top notch dynamics experts
link |
01:52:37.120
that are trying to write those simulators that can pick up the dishes. And there's also a team
link |
01:52:42.800
working on manipulation there that is taking problems like loading the dishwasher,
link |
01:52:48.880
and we're using that to study these really hard corner cases kind of problems in manipulation.
link |
01:52:55.200
So for me, this simulating the dishes, we could actually write a controller, if we just cared
link |
01:53:02.080
about picking up dishes in the sink once, we could write a controller without any simulation
link |
01:53:06.720
whatsoever, and we could call it done. But we want to understand what is the path you take
link |
01:53:14.160
to actually get to a robot that could perform that for any dish in anybody's kitchen
link |
01:53:22.000
with enough confidence that it could be a commercial product, right? And it has deep
link |
01:53:27.920
learning perception in the loop, it has complex dynamics in the loop, it has controller, it has
link |
01:53:31.760
a planner, and how do you take all of that complexity and put it through this engineering
link |
01:53:38.160
discipline and verification and validation process to actually get enough confidence to deploy?
link |
01:53:46.240
I mean, the DARPA challenge made me realize that that's not something you throw over the fence
link |
01:53:51.840
and hope that somebody will harden it for you, that there are really fundamental challenges
link |
01:53:57.280
in closing that last gap. During the validation and the testing?
link |
01:54:03.360
I think it might even change the way we have to think about the way we write systems.
link |
01:54:09.680
What happens if you have the robot running lots of tests and it screws up, it breaks a dish, right?
link |
01:54:18.880
How do you capture that? As I said, you can't run the same simulation or the same experiment twice
link |
01:54:24.880
on a real robot. Do we have to be able to bring that one off failure back into simulation in
link |
01:54:32.640
order to change our controllers, study it, make sure it won't happen again? Is it enough to just try
link |
01:54:39.360
to add that to our distribution and understand that on average, we're going to cover that situation
link |
01:54:44.880
again? There's really subtle questions at the corner cases that I think we don't yet have
link |
01:54:51.760
satisfying answers for. How do you find the corner cases? That's one kind of... Do you think
link |
01:54:57.440
it's possible to create a systematized way of discovering corner cases efficiently
link |
01:55:05.120
in whatever the problem is? Yes. I mean, I think we have to get better at that.
link |
01:55:10.640
I mean, control theory has, for decades, talked about active experiment design.
link |
01:55:16.560
What's that? People call it curiosity these days. It's roughly this idea of
link |
01:55:24.000
exploration versus exploitation, but active experiment design is even more specific. You
link |
01:55:29.600
could try to understand the uncertainty in your system, design the experiment that will
link |
01:55:36.000
provide the maximum information to reduce that uncertainty. If there's a parameter you want
link |
01:55:41.200
to learn about, what is the optimal trajectory I could execute to learn about that parameter, for instance?
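As a hedged sketch of the classical formulation (textbook optimal experiment design, not any specific project mentioned here): with unknown parameter $\theta$ and a candidate experiment or trajectory $\xi$, a D-optimal design maximizes the information the measurements carry about $\theta$:

```latex
\xi^\star = \arg\max_{\xi} \; \log\det I(\theta;\xi),
\qquad
I(\theta;\xi) = \mathbb{E}\!\left[\nabla_\theta \log p(y \mid \theta,\xi)\,
                                  \nabla_\theta \log p(y \mid \theta,\xi)^{\top}\right],
```

i.e., pick the trajectory whose data shrinks the parameter uncertainty the most.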
link |
01:55:46.640
Scaling that up to something that has a deep network in the loop and
link |
01:55:52.880
planning in the loop is tough. We've done some work with Matt O'Kelly and Aman Sinha. We've worked
link |
01:56:00.880
on some falsification algorithms that are trying to do rare event simulation that try to just
link |
01:56:06.160
hammer on your simulator. If your simulator is good enough, you can write good algorithms
link |
01:56:15.600
that try to spend most of their time in the corner cases. You basically imagine you're building
link |
01:56:24.880
an autonomous car and you want to put it in, I don't know, downtown New Delhi all the time,
link |
01:56:29.360
accelerated testing. If you can write sampling strategies which figure out where your controller
link |
01:56:35.200
is performing badly in simulation and start generating lots of examples around that.
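A generic, hedged sketch of that kind of sampling strategy (a simple cross-entropy-style search, not the specific falsification algorithms from the work mentioned): bias the sampling distribution toward scenarios where the simulated controller scores worst, so most of the compute lands near the corner cases.

```python
import numpy as np

def find_corner_cases(simulate_cost, dim, iters=20, samples=200, elite_frac=0.1):
    """simulate_cost(scenario) -> scalar; higher means closer to failure."""
    mean, std = np.zeros(dim), np.ones(dim)
    for _ in range(iters):
        scenarios = np.random.randn(samples, dim) * std + mean
        costs = np.array([simulate_cost(s) for s in scenarios])
        elite = scenarios[np.argsort(costs)[-int(samples * elite_frac):]]
        mean, std = elite.mean(axis=0), elite.std(axis=0) + 1e-6
    return mean  # a scenario concentrated in the failure region

# Toy stand-in for "run the simulator and score how badly the controller did".
toy_cost = lambda s: -np.linalg.norm(s - np.array([3.0, -2.0]))
print(find_corner_cases(toy_cost, dim=2))
```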
link |
01:56:40.720
It's just the space of possible places where things can go wrong is very big,
link |
01:56:47.840
so it's hard to write those algorithms. Rare event simulation is just a really compelling
link |
01:56:52.880
notion if it's possible. We joke and we call it the black swan generator.
link |
01:56:59.920
Because you don't just want the rare events, you want the ones that are highly impactful.
link |
01:57:03.600
I mean, those are the most profound questions we ask of our world. What's the worst that
link |
01:57:14.800
can happen? What we're really asking isn't some kind of computer science worst case analysis.
link |
01:57:22.400
We're asking what are the millions of ways this can go wrong? That's our curiosity.
link |
01:57:28.960
We humans, I think are pretty bad at, we just run into it. I think there's a distributed
link |
01:57:38.160
sense because there's now like 7.5 billion of us. There's a lot of them and then a lot of them write
link |
01:57:44.320
blog posts about the stupid thing they've done, so we learn in a distributed way.
link |
01:57:50.720
I think that's going to be important for robots too. That's another massive theme
link |
01:57:55.840
at Toyota Research for Robotics is this fleet learning concept. The idea that I as a human,
link |
01:58:04.640
I don't have enough time to visit all of my states. It's very hard for one robot to experience
link |
01:58:10.640
all the things, but that's not actually the problem we have to solve. We're going to have
link |
01:58:16.800
fleets of robots that can have very similar appendages. At some point, maybe collectively,
link |
01:58:24.080
they have enough data that their computational processes should be set up differently than ours.
link |
01:58:34.320
All these dishwasher unloading robots, that robot dropping a plate and a human looking
link |
01:58:43.840
at the robot probably pissed off, but that's a special moment to record. I think one thing
link |
01:58:52.240
in terms of fleet learning, and I've seen that because I've talked to a lot of folks just like
link |
01:58:58.960
Tesla users or Tesla drivers. They're another company that's using this kind of fleet learning
link |
01:59:04.480
idea. One hopeful thing I have about humans is they really enjoy when a system improves
link |
01:59:12.160
and learns, so they enjoy fleet learning. The reason it's hopeful for me is they're willing to put up
link |
01:59:18.800
with something that's kind of dumb right now. If it's improving, they almost enjoy being part
link |
01:59:27.920
of the teaching it, almost like if you have kids, you're teaching them something. I think that's
link |
01:59:33.840
a beautiful thing because that gives me hope that we can put dumb robots out there. The problem
link |
01:59:41.120
on the Tesla side with cars, cars can kill you. That makes the problem so much harder.
link |
01:59:46.960
Dishwasher unloading is a little safer. That's why Home Robotics is really exciting. Just to
link |
01:59:54.800
clarify, for people who might not know, I mean, TRI, Toyota Research Institute, they're pretty
link |
02:00:03.360
well known for autonomous vehicle research, but they're also interested in Home Robotics.
link |
02:00:11.120
There are multiple groups working on Home Robotics. It's a major part
link |
02:00:15.040
of the portfolio. There's also a couple of other projects, in advanced materials discovery using
link |
02:00:21.840
AI and machine learning to discover new materials for car batteries and the like,
link |
02:00:27.200
for instance. That's been actually an incredibly successful team. There's new projects starting
link |
02:00:32.320
up too. Do you see a future of where robots are in our home and robots that have actuators that
link |
02:00:44.080
look like arms in our home or more like humanoid type robots? We're going to do the same thing
link |
02:00:51.600
that you just mentioned that dishwasher is no longer a robot. We're going to just not even
link |
02:00:57.200
see them as robots. What's your vision of the home of the future, 10, 20 years from now,
link |
02:01:03.680
50 years if you get crazy? I think we already have Roombas cruising around. We have
link |
02:01:10.960
Alexas or Google Homes on our kitchen counter. It's only a matter of time till they sprout arms
link |
02:01:17.920
and start doing something useful like that. I do think it's coming. I think lots of people
link |
02:01:26.000
have lots of motivations for doing it. It's been super interesting actually learning about
link |
02:01:32.320
Toyota's vision for it, which is about helping people age in place,
link |
02:01:36.000
because I think that's not necessarily the first entry, the most lucrative entry point,
link |
02:01:44.160
but it's the problem maybe that we really need to solve no matter what.
link |
02:01:52.480
I think there's a real opportunity. It's a delicate problem. How do you work with people,
link |
02:01:57.360
help people, keep them active, engaged, but improve the quality of life and help them age
link |
02:02:07.040
in place, for instance? It's interesting because older folks are also, I mean, there's a contrast
link |
02:02:13.360
there because they're not always the folks who are the most comfortable with technology, for
link |
02:02:19.600
example. There's a division that's interesting there that you can do so much good with a robot
link |
02:02:28.160
for older folks, but there's a gap of understanding to fill. I mean, it's actually kind of
link |
02:02:37.360
beautiful. The robot is learning about the human and the human is kind of learning about this
link |
02:02:42.960
new robot thing. Also, when I talk to my parents about robots, there's a little bit of a blank slate
link |
02:02:53.200
there too. I mean, they don't know anything about robotics, so it's completely wide open.
link |
02:03:03.920
My parents haven't seen Black Mirror, so it's a blank slate. Here's a cool thing,
link |
02:03:10.160
like what can you do for me? It's an exciting space.
link |
02:03:14.240
I think it's a really important space. I do feel like a few years ago, drones were successful enough
link |
02:03:22.000
in academia. They kind of broke out and started an industry and autonomous cars have been happening.
link |
02:03:28.960
It does feel like manipulation in logistics, of course, first, but in the home shortly after,
link |
02:03:35.600
seems like one of the next big things that's going to really pop. I don't think we talked about it,
link |
02:03:41.920
but what's soft robotics? We talked about rigid bodies. If we can just linger on this
link |
02:03:50.720
whole touch thing. What's soft robotics? I told you that I really dislike the fact that robots
link |
02:04:01.360
are afraid of touching the world all over their body. There's a couple reasons for that. If you
link |
02:04:07.280
look carefully at all the places that robots actually do touch the world, they're almost
link |
02:04:11.600
always soft. They have some sort of pad on their fingers or a rubber sole on their foot,
link |
02:04:17.680
but if you look up and down the arm, we're just pure aluminum or something. That makes it hard,
link |
02:04:26.240
actually. In fact, hitting the table with your rigid arm or nearly rigid arm has some of the
link |
02:04:34.160
problems that we talked about in terms of simulation. I think it fundamentally changes
link |
02:04:38.640
the mechanics of contact when you're soft. You turn point contacts into patch contacts,
link |
02:04:44.880
which can have torsional friction. You can have distributed load. If I want to pick up an egg,
link |
02:04:50.560
right? If I pick it up with two points, then in order to put enough force to sustain the
link |
02:04:56.640
weight of the egg, I might have to put so much force that I break the egg. If I envelop it with
link |
02:05:02.400
contact all around, then I can distribute my force across the shell of the egg and have a
link |
02:05:08.320
better chance of not breaking it. Soft robotics is for me a lot about changing the mechanics
link |
02:05:13.920
of contact. Does it make the problem a lot harder? Quite the opposite.
link |
02:05:23.920
It changes the computational problem. I think our world and our mathematics has
link |
02:05:32.320
biased us towards rigid, but it really should make things better in some ways.
link |
02:05:36.560
Right? I think the future is unwritten there, but the other thing is...
link |
02:05:45.120
I think ultimately, sorry to interrupt, but I think ultimately it will make things simpler
link |
02:05:49.360
if we embrace the softness of the world. It makes things smoother, right? The result of
link |
02:05:58.720
small actions is less discontinuous, but it also means potentially less
link |
02:06:04.160
instantaneously bad, for instance. I won't necessarily contact something and send it flying off.
link |
02:06:13.200
The other aspect of it that just happens to dovetail really well is that soft robotics tends
link |
02:06:17.680
to be a place where we can embed a lot of sensors, too. If you change your hardware and make it more
link |
02:06:24.240
soft, then you can potentially have a tactile sensor, which is measuring the deformation.
link |
02:06:28.560
There's a team at TRI that's working on soft hands, and you get so much more information.
link |
02:06:38.080
If you can put a camera behind the skin roughly and get fantastic tactile information, which is
link |
02:06:47.920
super important. In manipulation, one of the things that really
link |
02:06:51.600
is frustrating is if you work super hard on your perception system for your
link |
02:06:56.160
head mounted cameras, and then you've identified an object, you reach down to touch it, and the
link |
02:07:01.040
last thing that happens, right before the most important time, is you stick your hand in and you're
link |
02:07:04.800
occluding your head mounted sensors. So in the part that really matters, all of your offboard
link |
02:07:11.120
sensors are occluded. Really, if you don't have tactile information, then you're blind in an
link |
02:07:17.680
important way. It happens that soft robotics and tactile sensing tend to go hand in hand.
link |
02:07:24.160
I think we've kind of talked about it, but you taught a course on underactuated robotics.
link |
02:07:30.800
I believe that was the name of it, actually. Can you talk about it in that context? What is
link |
02:07:38.000
underactuated robotics? Underactuated robotics is my graduate course. It's online mostly now,
link |
02:07:46.480
in the sense that the lectures are online. Several versions of it, I think.
link |
02:07:48.560
Right. It's really great. I recommend it highly. Look on YouTube for the 2020 versions
link |
02:07:55.280
until March, and then you have to go back to 2019, thanks to COVID.
link |
02:08:00.880
No, I've poured my heart into that class. Lecture one is basically explaining what the
link |
02:08:06.960
word underactuated means. People are very kind to show up and then maybe have to learn
link |
02:08:12.320
what the title of the course means over the course of the first lecture.
link |
02:08:14.880
That first lecture is really good. You should watch it.
link |
02:08:20.000
It's a strange name, but I thought it captured the essence of what control was good at doing
link |
02:08:28.080
and what control was bad at doing. What do I mean by underactuated? A mechanical system
link |
02:08:36.640
has many degrees of freedom, for instance. I think of a joint as a degree of freedom,
link |
02:08:41.360
and it has some number of actuators, motors. If you have a robot that's bolted to the table
link |
02:08:49.360
that has five degrees of freedom and five motors, then you have a fully actuated robot.
link |
02:08:58.480
If you take away one of those motors, then you have an underactuated robot.
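Roughly, in the notation the course uses (a sketch of the standard definition, hedging the details): the manipulator equations are

```latex
M(q)\,\ddot{q} + C(q,\dot{q})\,\dot{q} = \tau_g(q) + B\,u,
```

and the system is fully actuated at a state when the actuation matrix $B$ has full row rank, so any instantaneous acceleration $\ddot{q}$ can be commanded; it is underactuated when $\operatorname{rank}(B) < \dim(q)$.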
link |
02:09:03.520
Why on earth? I have a good friend who likes to tease me. He said,
link |
02:09:07.920
Russ, if you had more research funding, would you work on fully actuated robots?
link |
02:09:14.000
The answer is no. The world gives us underactuated robots, whether we like it or not. I'm a human.
link |
02:09:19.920
I'm an underactuated robot, even though I have more muscles than degrees of freedom,
link |
02:09:25.360
because I have in some places multiple muscles attached to the same joint.
link |
02:09:30.960
But still, there's a really important degree of freedom that I have, which is the location of my
link |
02:09:36.160
center of mass in space, for instance. I can jump into the air, and there's no motor that connects
link |
02:09:44.560
my center of mass to the ground, in that case. I have to think about the implications of not
link |
02:09:50.080
having control over everything. The passive dynamic walkers are the extreme view of that,
link |
02:09:56.640
where you've taken away all the motors, and you have to let physics do the work.
link |
02:10:00.080
But it shows up in all of the walking robots, where you have to use some of the actuators
link |
02:10:04.640
to push and pull even the degrees of freedom that you don't have an actuator on.
link |
02:10:10.160
That's referring to walking if you're falling forward. Is there a way to walk that's fully
link |
02:10:15.200
actuated? It's a subtle point. When you're in contact and you have your feet on the ground,
link |
02:10:24.080
there are still limits to what you can do. Unless I have suction cups on my feet,
link |
02:10:29.280
I cannot accelerate my center of mass towards the ground faster than gravity,
link |
02:10:33.920
because I can't get a force pushing me down. But I can still do most of the things that I want to.
link |
02:10:39.600
So you can get away with basically thinking of the system as fully actuated, unless you
link |
02:10:44.000
suddenly needed to accelerate down super fast. But as soon as I take a step, I get into more
link |
02:10:50.880
nuanced territory. And to get to really dynamic robots, or airplanes, or other things, I think
link |
02:10:59.360
you have to embrace the underactuated dynamics. What about manipulation? Is manipulation
link |
02:11:05.120
underactuated? Even if my arm is fully actuated, I have a motor, if my goal is to control
link |
02:11:12.640
the position and orientation of this cup, then I don't have an actuator for that directly. So I
link |
02:11:19.280
have to use my actuators over here to control this thing. Now it gets even worse, like what if I have
link |
02:11:25.040
to button my shirt? What are the degrees of freedom of my shirt? That's a hard question
link |
02:11:33.600
to think about. It kind of makes me queasy as thinking about my state space control ideas.
link |
02:11:40.480
But actually, those are the problems that make me so excited about manipulation right now,
link |
02:11:44.400
is that it breaks a lot of the foundational control stuff that I've been thinking about.
link |
02:11:50.880
What are some interesting insights you can share about trying to solve
link |
02:11:58.000
control in an underactuated system? So I think the philosophy there is let
link |
02:12:05.040
physics do more of the work. The technical approach has been optimization. So you typically
link |
02:12:12.720
formulate your decision making for control as an optimization problem. And you use the
link |
02:12:17.600
language of optimal control and sometimes often numerical optimal control in order to make those
link |
02:12:23.920
decisions and balance these complicated equations and in order to control. You don't have to use
link |
02:12:31.520
optimal control to do underactuated systems, but that has been the technical approach that has
link |
02:12:36.880
borne the most fruit at least in our line of work. So in underactuated systems, when you say
link |
02:12:44.640
let physics do some of the work, so there's a kind of feedback loop that observes the state
link |
02:12:52.000
that the physics brought you to. So there's a perception there. There's a feedback
link |
02:12:59.440
somehow. Do you ever loop in complicated perception systems into this whole picture?
link |
02:13:06.720
Right. Right around the time of the DARPA challenge, we had a complicated perception
link |
02:13:10.880
system in the DARPA challenge. We also started to embrace perception for our flying vehicles at
link |
02:13:16.800
the time. We had a really good project on trying to make airplanes fly at high speeds through
link |
02:13:22.480
forests. Sertac Karaman was on that project, and it was a really fun team to work on. He's
link |
02:13:30.800
carried it much farther forward since then. And that's using cameras for perception?
link |
02:13:35.840
So that was using cameras. At the time, we felt like LiDAR was too heavy and too power
link |
02:13:44.400
heavy to be carried on a light UAV, and we were using cameras. And that was a big part of it,
link |
02:13:50.240
was just how do you do even stereo matching at a fast enough rate with a small camera,
link |
02:13:56.080
a small onboard compute. Since then, the deep learning revolution has unquestionably
link |
02:14:03.280
changed what we can do with perception for robotics and control. So in manipulation,
link |
02:14:10.000
we can use perception, you know, in a much deeper way. And we get into not
link |
02:14:16.720
only, I think the first use of it naturally would be to ask your deep learning system to
link |
02:14:23.360
look at the cameras and produce the state, which is like the pose of my thing, for instance.
link |
02:14:28.160
But I think we've quickly found out that that's not always the right thing to do.
link |
02:14:34.320
Why is that? Because what's the state of my shirt? Imagine, I'm very noisy, I mean,
link |
02:14:42.640
if the first step of me trying to button my shirt is estimate the full state of my shirt,
link |
02:14:48.480
including like what's happening in the back, you know, whatever, whatever.
link |
02:14:50.880
Yeah. That's just not the right specification. There are aspects of the state that are very
link |
02:14:57.920
important to the task. There are many that are unobservable and not important to the task.
link |
02:15:05.680
So you really need, it begs new questions about state representation. Another example that we've
link |
02:15:11.920
been playing with in lab has been just the idea of chopping onions, okay? Or carrots,
link |
02:15:18.400
turns out to be better. So onions stink up the lab and they're hard to see in a camera.
link |
02:15:27.440
The details matter, yeah. Details matter, you know. So,
link |
02:15:32.800
if I'm moving around a particular object, right, then I think about, oh, it's got a position
link |
02:15:36.640
and orientation in space, that's the description I want. Now, when I'm chopping an onion, okay,
link |
02:15:42.640
the first chop comes down. I have now a hundred pieces of onion. Does my control system really
link |
02:15:49.520
need to understand the position and orientation and even the shape of the hundred pieces of onion
link |
02:15:53.840
in order to make a decision? Probably not, you know, and like, if I keep going, I'm just getting,
link |
02:15:58.800
more and more, is my state space getting bigger as I cut? That's not right. So,
link |
02:16:06.160
somehow there's, I think there's a richer idea of state. It's not the state that is given to us
link |
02:16:15.520
by Lagrangian mechanics. There is a, there is a proper Lagrangian state of the system,
link |
02:16:21.120
but the relevant state for this is some latent state is what we call it in machine learning.
link |
02:16:28.400
But, you know, there's some, some different state representation.
link |
02:16:32.000
Some compressed representation. And that's what I worry about saying compressed, because
link |
02:16:38.080
I don't mind that it's low dimensional or not, but it has to be something that's easier to think
link |
02:16:44.800
about. By us humans. Or my algorithms. Or the algorithms, like optimal control. So, for
link |
02:16:54.000
instance, if the contact mechanics of all of those onion pieces and all the permutations
link |
02:16:59.520
of possible touches between those onion pieces, you know, you can give me a high dimensional
link |
02:17:04.160
state representation, I'm okay if it's linear. But if I have to think about all the possible
link |
02:17:08.640
shattering combinatorics of that, then my robot's going to sit there thinking and
link |
02:17:15.360
the soup's going to get cold or something. So, since you taught the course, it kind of entered my
link |
02:17:21.280
mind, the idea of underactuated as really compelling, to see the world in this
link |
02:17:27.840
kind of way. Do you ever, you know, if we talk about onions or you talk about the world with
link |
02:17:34.160
people in it, in general, do you see the world as basically an underactuated system? Do you
link |
02:17:40.080
like often look at the world in this way? Or is this overreach?
link |
02:17:46.880
Underactuated as a way of life, man. Exactly. I guess that's what I'm asking.
link |
02:17:51.120
I do think it's everywhere. I think some, in some places,
link |
02:17:58.640
we already have natural tools to deal with it. You know, it rears its head. I mean,
link |
02:18:02.480
in linear systems, it's not a problem. An underactuated linear system is really
link |
02:18:07.600
not sufficiently distinct from a fully actuated linear system. It's a subtle point about when
link |
02:18:13.440
that becomes a bottleneck in what we know how to do with control. It happens to be a bottleneck.
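One standard way to make that linear-systems point precise (a textbook fact, stated here as a sketch): for $\dot{x} = Ax + Bu$ with $x \in \mathbb{R}^n$, the system is controllable iff

```latex
\operatorname{rank}\,\begin{bmatrix} B & AB & A^2B & \cdots & A^{n-1}B \end{bmatrix} = n,
```

which can hold even when $u$ has far fewer dimensions than $x$, so an underactuated linear system can still be steered anywhere it needs to go, just not instantaneously.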
link |
02:18:18.400
We've gotten incredibly good solutions now, but for a long time I felt that that was
link |
02:18:24.400
the key bottleneck in legged robots. And roughly now, the underactuated course is,
link |
02:18:30.240
you know, me trying to tell people everything I can about how to make Atlas do a backflip, right?
link |
02:18:38.240
I have a second course now in that I teach in the other semesters, which is on manipulation.
link |
02:18:43.440
And that's where we get into now more of the, that's a newer class. I'm hoping to put it online
link |
02:18:48.000
this fall completely. And that's going to have much more aspects about these perception problems
link |
02:18:55.280
and the state representation questions. And then how do you do control? And the,
link |
02:19:00.640
the thing that's a little bit sad, for me at least, is there's a lot of manipulation tasks
link |
02:19:07.280
that people want to do and should want to do. They could start a company with it and be very
link |
02:19:11.440
successful, that don't actually require you to think that much about dynamics at all,
link |
02:19:17.040
even, and certainly not underactuated dynamics. If I reach out and grab something,
link |
02:19:22.880
if I can sort of assume it's rigidly attached to my hand, then I can do a lot of
link |
02:19:26.320
interesting meaningful things with it without really ever thinking about the dynamics of that
link |
02:19:31.200
object. So we've built systems that kind of reduce the need for that,
link |
02:19:39.040
enveloping grasps and the like. But I think there are really good problems in manipulation. So,
link |
02:19:44.400
manipulation, by the way, is more than just pick and place. A lot of people think
link |
02:19:50.960
of it as just grasping. I don't mean that. I mean, buttoning my shirt. I mean, tying shoelaces.
link |
02:19:57.600
How do you program a robot to tie shoelaces and not just one shoe, but every shoe, right?
link |
02:20:04.000
That's a really good problem. It's tempting to write down like the infinite dimensional
link |
02:20:09.360
state of the laces. That's probably not needed to write a good controller. I know we
link |
02:20:16.640
could hand design a controller that would do it, but I don't want that. I want to understand the
link |
02:20:21.040
principles that would allow me to solve another problem that's kind of like that. But I think
link |
02:20:27.840
if we can stay pure in our approach, then the challenge of tying anybody's shoes is a great
link |
02:20:35.440
challenge. That's a great challenge. I mean, and the soft touch comes into play there. That's
link |
02:20:40.960
really interesting. Let me ask another ridiculous question on this topic. How important is
link |
02:20:49.280
touch? We haven't talked much about humans, but I have this argument with my dad,
link |
02:20:56.320
where like, I think you can fall in love with a robot based on language alone. And he believes
link |
02:21:03.600
that touch is essential. Touch and smell, he says, but so in terms of robots, you know, connecting
link |
02:21:14.240
with humans and we can go philosophical in terms of like a deep meaningful connection,
link |
02:21:20.560
like love, but even just like collaborating in an interesting way. How important is touch? Like
link |
02:21:26.960
from an engineering perspective and a philosophical one?
link |
02:21:32.640
I think it's super important. Even just in a practical sense, if we forget about the emotional
link |
02:21:38.480
part of it, but for robots to interact safely while they're doing meaningful mechanical work
link |
02:21:47.120
in the close contact with or vicinity of people that need help, I think we have to have them,
link |
02:21:54.800
we have to build them differently. They have to be not afraid of touching the world. So
link |
02:22:01.200
I think Baymax is just awesome. That's the robot from the movie Big Hero 6, and the concept of Baymax,
link |
02:22:07.680
that's just awesome. I think we should, and we have some folks at Toyota that are trying to,
link |
02:22:13.600
Toyota Research that are trying to build Baymax roughly. And I think it's just a fantastically
link |
02:22:20.160
good project. I think it will change the way people physically interact. The same way,
link |
02:22:26.080
I mean, you gave a couple of examples earlier, but if the robot that was walking around my home
link |
02:22:31.840
looked more like a teddy bear and a little less like the Terminator, that could change completely
link |
02:22:37.440
the way people perceive it and interact with it. And maybe they'll even want to teach it, like you
link |
02:22:42.720
said, right? You could not quite gamify it, but somehow instead of people judging it and looking
link |
02:22:50.480
at it as if it's not doing as well as a human, they're going to try to help out the cute teddy
link |
02:22:55.680
bear, right? Who knows? But I think we're building robots wrong and being more soft and more
link |
02:23:04.000
contact is important, right? Yeah, like all the magical moments I can remember with robots,
link |
02:23:11.120
well, first of all, just visiting your lab and seeing Atlas, but also Spotmini. When I first
link |
02:23:18.960
saw SpotMini in person and hung out with him, her, it, I don't have trouble gendering robots.
link |
02:23:28.240
I feel robotics people really say, oh, is it it? I kind of like the idea that it's a her or a him.
link |
02:23:34.160
There's a magical moment, but there's no touching. I guess the question I have, have you ever been,
link |
02:23:41.520
like, have you had a human robot experience where like a robot touched you? And like it was like,
link |
02:23:50.720
wait, like, was there a moment that you've forgotten that a robot is a robot? And like the
link |
02:23:57.920
anthropomorphization stepped in and for a second you forgot that it's not human? I mean, I think
link |
02:24:05.200
when you're in on the details, then we, of course, anthropomorphized our work with Atlas, but in,
link |
02:24:13.680
you know, in verbal communication and the like, I think we were pretty aware of it as a machine
link |
02:24:20.240
that needed to be respected. I actually, I worry more about the smaller robots that could still,
link |
02:24:27.600
you know, move quickly if programmed wrong. And we have to be careful, actually, about safety
link |
02:24:32.080
and the like right now. And that if we build our robots correctly, I think then those, a lot of
link |
02:24:38.000
those concerns could go away. And we're seeing that trend. We're seeing the lower cost, lighter
link |
02:24:42.640
weight arms now that could be fundamentally safe. I mean, I do think touch is so fundamental. Ted
link |
02:24:52.240
Adelson is great. He's a perceptual scientist at MIT. And he studied vision most of his life.
link |
02:25:01.280
And he said, when I had kids, I expected to be fascinated by their perceptual development.
link |
02:25:09.600
But what he noticed, what felt more impressive, more dominant, was the way that they
link |
02:25:15.040
would touch everything and lick everything and pick things up to get on their tongue and whatever.
link |
02:25:19.280
And he said, watching his daughter convinced him that actually he needed to study tactile
link |
02:25:27.040
sensing more. So there's something very important. I think it's a little bit also of the passive
link |
02:25:35.600
versus active part of the world, right? You can passively perceive the world.
link |
02:25:43.840
But it's fundamentally different if you can do an experiment, right? And if you can change the
link |
02:25:47.840
world, you can learn a lot more than a passive observer. So you can in dialogue, that was your
link |
02:25:56.080
initial example, you could have an active experiment exchange. But I think if you're just a camera
link |
02:26:01.760
watching YouTube, I think that's a very different problem than if you're a robot that can apply
link |
02:26:08.320
force and touch. I think it's important. Yeah, I think it's just an exciting area of
link |
02:26:17.360
research. I think you're probably right that this has been under researched. It's, to me,
link |
02:26:24.000
as a person who's captivated by the idea of human robot interaction, it feels like
link |
02:26:30.240
such a rich opportunity to explore touch, not even from a safety perspective, but like you said,
link |
02:26:36.400
the emotional too. I mean, safety comes first. But the next step is like,
link |
02:26:42.480
like, you know, like a real human connection, even in the world, like even in the industrial
link |
02:26:50.400
setting, it just feels like it's nice for the robot. I don't know, you might disagree with this,
link |
02:26:58.000
but because I think it's important to see robots as tools often. But I don't know. I think they're
link |
02:27:06.560
just always going to be more effective once you humanize them. Like, it's convenient now to think
link |
02:27:13.520
of them as tools because we want to focus on the safety. But I think ultimately, to create
link |
02:27:19.840
like a good experience for the worker, for the person, there has to be a human element. I don't
link |
02:27:28.080
know, for me, it feels like an industrial robotic arm would be better if it has a human element.
link |
02:27:34.720
I think Rethink Robotics had that idea with Baxter, having eyes and so on.
link |
02:27:41.120
I don't know, I'm a big believer in that. It's not my area, but I am also a big believer.
link |
02:27:49.040
Do you have an emotional connection to Atlas? Do you miss them?
link |
02:27:54.800
I mean, yes, I don't know if I'd more so than if I had a different science project that I'd
link |
02:28:04.800
worked on super hard, right? But yeah, I mean, the robot, we basically had to do heart surgery
link |
02:28:14.160
on the robot in the final competition because we melted the core. And yeah, there was something
link |
02:28:21.920
about watching that robot hanging there. We knew we had to compete with it in an hour and it was
link |
02:28:25.920
getting its guts ripped out. Those are all historic moments. I think if you look back like 100 years
link |
02:28:32.240
from now, yeah, I think those are important moments in robotics. I mean, these are the early
link |
02:28:39.440
day, you look at like the early days of a lot of scientific disciplines, they look ridiculous,
link |
02:28:43.600
it's full of failure. But it feels like robotics will be important in the coming 100 years. And
link |
02:28:52.560
these are the early days. So a lot of people look at a brilliant person such as
link |
02:29:00.400
yourself and are curious about the intellectual journey they took. Are there maybe three books,
link |
02:29:08.240
technical fiction, philosophical, that had a big impact on your life that you would recommend,
link |
02:29:15.280
perhaps others reading? Yeah, so I actually didn't read that much as a kid, but I read
link |
02:29:22.800
fairly voraciously now. There are some recent books that if you're interested in this kind of
link |
02:29:30.080
topic, like AI Superpowers by Kai-Fu Lee is just a fantastic read. You must read that. Yuval Harari
link |
02:29:40.960
is just, I think that can open your mind. Sapiens. Sapiens is the first one, Homo Deus is the
link |
02:29:48.960
second. We mentioned The Black Swan by Taleb. I think that's a good sort of mind opener.
link |
02:29:55.760
I actually, so there's maybe a more controversial recommendation I could give.
link |
02:30:05.440
Great. I would love that first.
link |
02:30:09.040
In some sense, it's so classical, it might surprise you. But I actually recently read
link |
02:30:14.560
Mortimer Adler's How to Read a Book not so long ago. It was a while ago, but
link |
02:30:19.120
some people hate that book. I loved it. I think we're in this time right now where,
link |
02:30:30.720
boy, we're just inundated with research papers that you could read on arXiv with
link |
02:30:37.360
limited peer review and just this wealth of information.
link |
02:30:40.560
I don't know. I think the passion of what you can get out of a book, a really good book or
link |
02:30:51.360
a really good paper if you find it, the attitude, the realization that you're only going to find
link |
02:30:56.000
a few that really are worth all your time. But then once you find them, you should just dig in
link |
02:31:02.480
and understand it very deeply and it's worth marking it up and having the hard copy,
link |
02:31:11.200
writing in the side notes, side margins. I read it at the right time where I was just
link |
02:31:21.040
feeling just overwhelmed with really low quality stuff, I guess. And similarly,
link |
02:31:28.400
I'm just giving more than three now. I'm sorry if I've exceeded my quota.
link |
02:31:35.200
But on that topic just real quick is so basically finding a few companions to keep for the rest of
link |
02:31:42.480
your life in terms of papers and books and so on. And those are the ones like not doing
link |
02:31:50.800
what is it, FOMO, fear of missing out, constantly trying to update yourself,
link |
02:31:54.640
but really deeply making a life journey of studying a particular paper essentially,
link |
02:31:59.920
set of papers. Yeah, I think when you really find something, a book that resonates with you
link |
02:32:07.600
might not be the same book that resonates with me. But when you really find one that resonates
link |
02:32:12.720
with you, I think the dialogue that happens and that's what I love that Adler was saying,
link |
02:32:17.280
you know, I think Socrates and Plato say the written word is never going to capture the beauty
link |
02:32:26.160
of dialogue, right? But Adler says, no, no, a really good book is a dialogue between you and
link |
02:32:34.480
the author and it crosses time and space. And I don't know, I think it's a very romantic,
link |
02:32:40.560
there's a bunch of like specific advice, which you can just gloss over, but the romantic view of
link |
02:32:46.240
how to read and really appreciate it is so good. And similarly, teaching, I
link |
02:32:57.280
thought a lot about teaching. And so Isaac Asimov, great science fiction writer,
link |
02:33:03.040
has also actually spent a lot of his career writing nonfiction, right? His memoir is fantastic.
link |
02:33:09.840
He was passionate about explaining things, right? He wrote all kinds of books on all
link |
02:33:13.760
kinds of topics in science. He was known as the great explainer. And I do really resonate with
link |
02:33:21.280
his style and his view that communicating and explaining something
link |
02:33:30.400
is really the way that you learn something. I think about problems very differently because
link |
02:33:36.400
of the way I've been given the opportunity to teach them at MIT. And we have questions asked,
link |
02:33:43.280
the fear of the lecture, the experience of the lecture and the questions I get and the interactions
link |
02:33:50.080
just force me to be rock solid on these ideas in a way that, without it, I don't know,
link |
02:33:56.160
I would be in a different intellectual space. Also video, does that scare you that your
link |
02:34:00.640
lectures are online? And people like me in sweatpants can sit sipping coffee and watch
link |
02:34:05.760
you give lectures? I think it's great. I do think that something's changed right now,
link |
02:34:12.640
which is, you know, right now we're giving lectures over Zoom, I mean, giving seminars
link |
02:34:18.000
over Zoom and everything. I'm trying to figure out, I think it's a new medium.
link |
02:34:24.160
Do you think it's possible? Yeah, I've been, I've been quite cynical
link |
02:34:34.400
about the human to human connection over that medium. But I think that's because it
link |
02:34:41.840
hasn't been explored fully. And teaching is a different thing.
link |
02:34:45.680
Every lecture is a, I'm sorry, every seminar even, I think every talk I give,
link |
02:34:52.240
you know, it's an opportunity to give that differently. I can deliver content directly
link |
02:34:57.120
into your browser. You have a WebGL engine right there. I could, I can throw 3D content into your
link |
02:35:04.400
browser while you're listening to me, right? Yeah. And I can assume that you have a, you know,
link |
02:35:09.600
at least a powerful enough laptop or something to watch Zoom while I'm doing that while I'm
link |
02:35:13.840
giving a lecture. That's a new communication tool that I didn't have last year, right? And
link |
02:35:21.360
I think robotics can potentially benefit a lot from teaching that way.
link |
02:35:24.960
Okay. We'll see. It's going to be an experiment this fall.
link |
02:35:28.080
It's interesting. I'm thinking a lot about it.
link |
02:35:30.320
Yeah. And also like the, the length of lectures or the length of like, there's something, so like,
link |
02:35:39.680
I guarantee you, you know, it's like 80% of people who started listening to our conversation
link |
02:35:44.880
are still listening to now, which is crazy to me. But so there's a, there's a patience and
link |
02:35:50.800
interest in long form content, but at the same time, there's a magic to forcing yourself to
link |
02:35:56.560
condense an idea to the shortest possible, uh, shortest possible like clip. It can be a part
link |
02:36:05.200
of a longer thing, but like just a really beautifully condensed an idea. There's a lot of
link |
02:36:10.080
an opportunity there that's easier to do remotely, with, I don't know, with editing too. Editing
link |
02:36:19.360
is an interesting thing. Like what, uh, you know, when most professors don't get, when they give a
link |
02:36:25.520
lecture, you don't get to go back and edit out parts like Chris, like Chris bit up a little bit.
link |
02:36:31.600
That's also, it can do magic. Like if you remove like five to 10 minutes from an hour lecture,
link |
02:36:39.520
it can, it can actually, it can make something special of a lecture. I've, uh, I've seen that
link |
02:36:44.560
of myself and in others too, because I've edited other people's lectures to extract clips.
link |
02:36:50.480
There are certain tangents that lose people, they're not interesting. They're
link |
02:36:55.120
mumbling, they're not clarifying, they're not helpful
link |
02:36:59.200
at all. And once you remove them, it's just, I don't know, editing can be magic. It takes a
link |
02:37:04.960
lot of time. Yeah. It depends, like, what is teaching, you have to ask. Yeah.
link |
02:37:10.080
Because I find the editing process is also beneficial for teaching, but also
link |
02:37:20.160
for your own learning. I don't know if, have you watched yourself on the other day? Have you watched
link |
02:37:25.120
those videos? It's, I mean, not all of them. It could be, it could be painful and to see like how
link |
02:37:30.960
to improve. So do you find that, uh, I know you segment your, um, your podcast. Do you think that
link |
02:37:37.920
it helps people with the, the attention span aspect of it? Or is it segment like sections? Like,
link |
02:37:44.160
yeah, we're talking about this topic, whatever. Nope. Nope. That just helps me. It's actually bad.
link |
02:37:49.360
So, uh, and you've been incredible. Um, so I'm, I'm learning, like I'm afraid of conversation.
link |
02:37:56.320
This is even today. I'm terrified of talking to you. I mean, it's something I'm, um, trying to
link |
02:38:02.000
remove for myself. There's a guy, I mean, I've learned from a lot of people, but
link |
02:38:08.000
really, there have been a few people who've been inspirational to me in terms of conversation.
link |
02:38:14.000
Whatever people think of him, Joe Rogan has been inspirational to me, because comedians
link |
02:38:18.960
have been too, being able to just have fun and enjoy themselves and lose themselves in conversation
link |
02:38:25.440
that requires you to be a great storyteller, to be able to, uh, pull a lot of different pieces of
link |
02:38:31.520
information together, but mostly just to enjoy yourself in conversations. I'm trying to learn
link |
02:38:37.440
that these notes are, you see me looking down. That's like a safety blanket that I'm trying to
link |
02:38:43.600
let go of more and more. Cool. Um, so that's that people love just regular conversation.
link |
02:38:49.280
That's, that's what they, the structure is like, whatever. Uh, I would say, I would say maybe
link |
02:38:56.000
like 10 to like, so there's a bunch of, you know, there's, uh, probably a couple of thousand
link |
02:39:02.880
PhD students listening to this right now, right? And they might know what we're talking about,
link |
02:39:09.360
but there is somebody I guarantee you right now in Russia, some kid who's just like,
link |
02:39:16.400
who's just smoked some weed is sitting back and just enjoying the hell out of this conversation,
link |
02:39:22.480
not really understanding. You kind of watch some Boston Dynamics videos. He's just enjoying it.
link |
02:39:27.600
Um, and I salute you, sir. Uh, no, but just like there's a, so much variety of people,
link |
02:39:33.600
uh, that just have curiosity about engineering, about sciences, about mathematics and, um, and
link |
02:39:40.080
also like I should, I mean, um, enjoying it is one thing, but also often notice it in inspires
link |
02:39:48.880
people to, there's a lot of people who are like in their undergraduate studies, trying to figure
link |
02:39:53.680
out what, uh, trying to figure out what to pursue. And these conversations can really spark the
link |
02:40:00.000
direction of their life. And in terms of robotics, I hope it does, because I'm excited
link |
02:40:06.320
about the possibilities robotics brings. On that topic, do you have advice? What advice
link |
02:40:14.160
would you give to a young person about life? A young person about life or a young person about
link |
02:40:21.680
life in robotics? Uh, it could be in robotics. It could be in life in general. It could be career.
link |
02:40:28.400
It could be, uh, relationship advice. It could be running advice, just like
link |
02:40:34.000
they're, um, that's one of the things I see, like we talked like 20 year olds. They're, they're like,
link |
02:40:39.680
how do I, how do I do this thing? What, what do I do? Um, if they come up to you, what would you
link |
02:40:46.960
tell them? I think it's an interesting time to be a kid these days. Everything points to this being
link |
02:40:57.120
sort of a winner take all economy and the like, I think the people that will really excel in my
link |
02:41:03.760
opinion are going to be the ones that can think deeply about problems. Um, you have to be able to
link |
02:41:12.080
ask questions, agilely and use the internet for everything it's good for and stuff like this.
link |
02:41:16.400
And I think a lot of people will develop those skills. I think the leaders, thought leaders,
link |
02:41:24.320
you know, robotics leaders, whatever, are going to be the ones that can do more and they can think
link |
02:41:29.520
very deeply and critically. Um, and that's a harder thing to learn. I think one, one path to
link |
02:41:36.240
learning that is through mathematics, through engineering. Um, I would encourage people to
link |
02:41:42.960
start math early. I mean, I didn't really start early. I mean, I was always in the better math
link |
02:41:49.920
classes that I could take, but I wasn't pursuing super advanced mathematics or anything like that
link |
02:41:55.600
until I got to MIT. I think MIT lit me up and, uh, really started the life that I'm living now.
link |
02:42:05.520
But, uh, yeah, I really want kids to, to dig deep, really understand things, building things too.
link |
02:42:12.400
I mean, pull things apart, put them back together. Like that's just such a good way to really
link |
02:42:17.520
understand things and expect it to be a long journey, right? It's, uh, you don't have to
link |
02:42:26.240
know everything. You're never going to know everything.
link |
02:42:29.280
So think deeply and stick with it.
link |
02:42:32.720
Enjoy the ride, but just make sure you're not, um, yeah, just, just make sure you're, you're,
link |
02:42:39.760
you're stopping to think about why things work.
link |
02:42:42.480
And it's true. It's, uh, it's easy to lose yourself in the, in the, in the distractions of the world.
link |
02:42:50.960
We're overwhelmed with content right now, but
link |
02:42:54.560
you have to stop and pick some of it and, and really understand it.
link |
02:42:58.640
Yeah. On the book point, I've read, um, Animal Farm by George Orwell, a ridiculous number of times.
link |
02:43:05.920
So for me, like that book, I don't know if it's a good book in general, but for me,
link |
02:43:10.320
it connects deeply somehow. Uh, it somehow connects. So I was born in the Soviet Union,
link |
02:43:18.080
so it connects to me to the entirety of the history of the Soviet Union and to World War II,
link |
02:43:23.040
and to the love and hatred and suffering that went on there and the, uh, the corrupting nature of
link |
02:43:32.320
power and greed. And just somehow I just, that, that, that book has taught me more about life
link |
02:43:37.920
than like anything else, even though it's just like a silly, like childlike book about pigs.
link |
02:43:46.240
I don't know why it just connects and inspires. And the same, there's a few, um, yeah, there's a
link |
02:43:52.320
few technical books too, and algorithms, that, yeah, you return to often. Right. I'm
link |
02:43:58.880
with you. Um, yeah, and I've been losing that because of the internet.
link |
02:44:05.200
I've been, uh, going on arXiv and blog posts and GitHub and the
link |
02:44:11.680
new thing and, uh, you lose your ability to really master an idea. Right. Well, exactly right.
link |
02:44:20.960
What's a fond memory from childhood, of baby Russ Tedrake? Well, I guess I just said that, um,
link |
02:44:31.280
at least my current life began when I got to MIT. If I have to go farther back than that...
link |
02:44:38.720
Yeah. Was there a life before MIT? Oh, absolutely. But let me actually tell
link |
02:44:47.040
you what happened when I first got to MIT, cause that, I think, might be relevant here. But I,
link |
02:44:53.760
you know, I had taken a computer engineering degree at Michigan. I enjoyed it immensely,
link |
02:44:58.640
learned a bunch of stuff. I liked computers. I liked programming. Um, but when I did get to
link |
02:45:06.000
MIT and started working with Sebastian Seung, theoretical physicist, computational neuroscientist,
link |
02:45:14.240
the culture here was just different. Um, it demanded more of me, certainly mathematically
link |
02:45:20.080
and in the critical thinking. And I remember the day that I, uh, borrowed one of the books from
link |
02:45:27.920
my advisor's office and walked down to the Charles River and was like, I'm getting my butt kicked,
link |
02:45:33.200
you know? Um, and I think that's going to happen to everybody who's doing this kind of stuff.
link |
02:45:39.840
Right. I think, uh, I expected you to ask me the meaning of life. You know, I think that the, uh,
link |
02:45:48.640
somehow I think that's got to be part of it. This...
link |
02:45:51.680
Doing hard things. Yeah. Did you, uh, did you consider quitting at any point?
link |
02:45:58.000
Did you consider this isn't for me? No, never that. I mean, I was,
link |
02:46:03.600
I was working hard, but I was loving it. Right. I mean, I think
link |
02:46:07.600
there's this magical thing where you, uh, you know, I'm lucky to surround myself with people that
link |
02:46:11.920
basically almost every day I'll see something, I'll be told something, or something
link |
02:46:18.960
that I realize, wow, I don't understand that. And if I could just understand that,
link |
02:46:24.000
there's something else to learn that if I could just learn that thing, I would connect another
link |
02:46:28.960
piece of the puzzle. And, uh, you know, I think that is just such an important aspect, and
link |
02:46:36.400
being willing to understand what you can and can't do, and loving the journey of going
link |
02:46:43.360
and learning those other things. I think that's the best part.
link |
02:46:45.680
I don't think there's a better way to end it, Russ. You've been an inspiration to me
link |
02:46:52.880
since I showed up at MIT. Uh, your work has been an inspiration to the world. This conversation
link |
02:46:58.320
was amazing. I can't wait to see what you do next with robotics, home robots.
link |
02:47:03.600
I hope to see you work in my home one day. So thanks so much for talking today. It's been awesome.
link |
02:47:08.000
Cheers. Thanks for listening to this conversation with Russ Tedrake, and thank you to our sponsors,
link |
02:47:14.000
Magic Spoon Cereal, BetterHelp, and ExpressVPN. Please consider supporting this podcast by going
link |
02:47:20.480
to magicspoon.com slash Lex and using code Lex at checkout, going to betterhelp.com slash Lex
link |
02:47:27.760
and signing up at expressvpn.com slash Lex pod. Click the links, buy the stuff, get the discount.
link |
02:47:36.080
It really is the best way to support this podcast. If you enjoy this thing, subscribe on YouTube,
link |
02:47:41.360
review it with five stars on Apple Podcasts, support it on Patreon, or connect with me on Twitter
link |
02:47:46.400
at Lex Fridman, spelled somehow without the E, just F R I D M A N. And now let me leave you
link |
02:47:54.320
with some words from Neil deGrasse Tyson talking about robots in space and the emphasis we humans
link |
02:47:59.920
put on human-based space exploration. Robots are important. If I don my pure scientist hat,
link |
02:48:07.760
I would say just send robots. I'll stay down here and get the data. But nobody's ever given a parade
link |
02:48:13.840
for a robot. Nobody's ever named a high school after a robot. So when I don my public educator hat,
link |
02:48:20.000
I have to recognize the elements of exploration that excite people. It's not only the discoveries
link |
02:48:25.600
and the beautiful photos that come down from the heavens. It's the vicarious participation
link |
02:48:31.280
in discovery itself. Thank you for listening and hope to see you next time.