
Rodney Brooks: Robotics | Lex Fridman Podcast #217



link |
00:00:00.000
The following is a conversation with Rodney Brooks, one of the greatest roboticists in history.
link |
00:00:06.400
He led the Computer Science and Artificial Intelligence Laboratory at MIT,
link |
00:00:10.640
then cofounded iRobot, which is one of the most successful robotics companies ever.
link |
00:00:16.560
Then he cofounded Rethink Robotics that created some amazing collaborative robots like Baxter
link |
00:00:22.560
and Sawyer. Finally, he cofounded Robust.ai, whose mission is to teach robots common sense,
link |
00:00:30.640
which is a lot harder than it sounds. To support this podcast,
link |
00:00:35.120
please check out our sponsors in the description.
link |
00:00:38.160
As a side note, let me say that Rodney is someone I've looked up to for many years in my now over
link |
00:00:43.920
two-decade journey in robotics because, one, he's a legit great engineer of real-world systems,
link |
00:00:52.080
and two, he's not afraid to state controversial opinions that challenge the way we see the AI
link |
00:00:57.600
world. But of course, while I agree with him on some of his critical views of AI, I don't agree
link |
00:01:04.240
with some others, and he's fully supportive of such disagreement. Nobody ever built anything great
link |
00:01:10.640
by being fully agreeable. There's always respect and love behind our interactions, and when a
link |
00:01:16.960
conversation is recorded like it was for this podcast, I think a little bit of disagreement is
link |
00:01:22.560
fun. This is the Lex Fridman Podcast, and here is my conversation with Rodney Brooks.
link |
00:01:31.760
What is the most amazing or beautiful robot that you've ever had the chance to work with?
link |
00:01:37.600
I think it was Domo, which was made by one of my grad students, Aaron Edsinger. It now sits in
link |
00:01:43.760
Daniela Rus's office, the director of CSAIL, and it was just a beautiful robot. Aaron was really
link |
00:01:50.720
clever. He didn't give me a budget ahead of time. He didn't tell me what he was going to do.
link |
00:01:56.240
He just started spending money. He spent a lot of money. He and Jeff Weber, who is a mechanical
link |
00:02:02.960
engineer who Aaron insisted he bring with him when he became a grad student, built this beautiful,
link |
00:02:08.640
gorgeous robot, Domo, which is an upper-torso humanoid, two arms with three-fingered
link |
00:02:17.040
hands, and a face with eyeballs. Not the eyeballs, but everything else uses series elastic actuators.
link |
00:02:26.880
You can interact with it. Cable driven. All the motors are inside, and it's just gorgeous.
link |
00:02:33.760
The eyeballs are actuated too, or no?
link |
00:02:35.680
Oh yeah, the eyeballs are actuated with cameras, so it had a visual attention mechanism,
link |
00:02:41.280
looking when people came in and looking in their face and talking with them.
link |
00:02:46.240
Wow, was it amazing?
link |
00:02:48.000
The beauty of it.
link |
00:02:49.600
You said what was the most beautiful?
link |
00:02:51.040
What is the most beautiful?
link |
00:02:52.160
It's just mechanically gorgeous. Like everything Aaron builds,
link |
00:02:55.600
it's always mechanically gorgeous. It's just exquisite in the detail.
link |
00:03:00.400
We're talking about mechanically, like literally the amount of actuators.
link |
00:03:04.400
The actuators, the cables, he anodizes different parts, different colors,
link |
00:03:10.080
and it just looks like a work of art.
link |
00:03:13.200
What about the face? Do you find the face beautiful in robots?
link |
00:03:17.760
When you make a robot, it's making a promise for how well it will be able to interact,
link |
00:03:23.120
so I always encourage my students not to overpromise.
link |
00:03:27.680
Even with its essence, like the thing it presents, it should not overpromise.
link |
00:03:31.840
Yeah, so the joke I make, which I think you'll get, is if your robot looks like Albert Einstein,
link |
00:03:37.200
it should be as smart as Albert Einstein.
link |
00:03:39.440
So the only thing in Domo's face is the eyeballs, because that's all it can do.
link |
00:03:47.520
It can look at you and pay attention.
link |
00:03:52.640
It's not like one of those Japanese robots that looks exactly like a person at all.
link |
00:03:58.240
But see, the thing is, us humans and dogs, too, don't just use eyes as attentional mechanisms.
link |
00:04:06.160
They also use it to communicate, as part of the communication.
link |
00:04:09.440
Like a dog can look at you, look at another thing, and look back at you,
link |
00:04:12.880
and that designates that we're going to be looking at that thing together.
link |
00:04:15.840
Yeah, or intent, you know, on both Baxter and Sawyer at Rethink Robotics,
link |
00:04:21.200
they had a screen with, you know, graphic eyes,
link |
00:04:25.440
so it wasn't actually where the cameras were pointing, but the eyes would look in the direction
link |
00:04:31.200
it was about to move its arm, so people in the factory nearby were not surprised by its motions,
link |
00:04:36.160
because it gave that intent away.
link |
00:04:39.840
Before we talk about Baxter, which I think is a beautiful robot, let's go back to the beginning.
link |
00:04:45.120
When did you first fall in love with robotics?
link |
00:04:48.560
We're talking about beauty and love to open the conversation.
link |
00:04:50.880
This is great.
link |
00:04:51.440
I was born in the end of 1954, and I grew up in Adelaide, South Australia,
link |
00:04:57.520
and I have these two books that are dated 1961, so I'm guessing my mother found them in a store
link |
00:05:05.120
in 62 or 63, How and Why Wonder Books.
link |
00:05:09.600
How and Why Wonder Book of Electricity, and a How and Why Wonder Book of Giant Brains and Robots.
link |
00:05:15.680
And I learned how to build circuits, you know, when I was eight or nine, simple circuits,
link |
00:05:23.200
and I read, you know, learned the binary system, and saw all these drawings, mostly, of robots,
link |
00:05:31.680
and then I tried to build them for the rest of my childhood.
link |
00:05:36.080
Wait, 61, you said?
link |
00:05:38.400
That's when the two books are from. I've still got them at home.
link |
00:05:41.200
What does the robot mean in that context?
link |
00:05:43.520
Some of the robots that they had were arms, you know, big arms to move nuclear material around,
link |
00:05:51.600
but they had pictures of welding robots that looked like humans under the sea, welding stuff
link |
00:05:57.600
underwater.
link |
00:05:59.040
So they weren't real robots, but they were, you know, what people were thinking about for robots.
link |
00:06:05.200
What were you thinking about?
link |
00:06:06.560
Were you thinking about humanoids?
link |
00:06:07.920
Were you thinking about arms with fingers?
link |
00:06:09.760
Were you thinking about faces or colors?
link |
00:06:14.000
No, actually, to be honest, I realized my limitation on building mechanical stuff.
link |
00:06:19.360
So I just built the brains, mostly, out of different technologies as I got older.
link |
00:06:28.320
I built a learning system which was chemical based, and I had this ice cube tray.
link |
00:06:35.040
Each well was a cell, and by applying voltage to the two electrodes, it would build up a
link |
00:06:42.400
copper bridge.
link |
00:06:43.040
So over time, it would learn a simple network so I could teach it stuff.
link |
00:06:50.000
And mostly, things were driven by my budget, and nails as electrodes and an ice cube tray
link |
00:07:00.160
was about my budget at that stage.
link |
00:07:02.160
Later, I managed to buy transistors, and I could build gates and flip flops and stuff.
link |
00:07:07.520
So one of your first robots was an ice cube tray?
link |
00:07:11.040
Yeah, it was very cerebral because it learned to add.
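A toy model of that copper-bridge learner might look like this in Python. This is my own guess at the mechanism from the description here, not Brooks's actual circuit: each well is a connection whose conductance grows a little with every voltage pulse, so frequently exercised connections end up stronger.

```python
def train_wells(conductance, pulses, rate=0.1):
    """Each voltage pulse deposits a little more copper in a well,
    so its bridge conducts better: connections that are exercised
    more often end up stronger, a crude form of learning."""
    for well in pulses:
        conductance[well] = conductance.get(well, 0.0) + rate
    return conductance

# Pulse well "a" three times and well "b" once.
weights = train_wells({}, ["a", "a", "a", "b"])
print(weights)  # "a" is now three times as conductive as "b"
```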
link |
00:07:16.720
Very nice.
link |
00:07:17.920
Well, just a decade or so before, in 1950, Alan Turing wrote a paper that formulated
link |
00:07:26.080
the Turing Test, and he opened that paper with the question, can machines think?
link |
00:07:32.400
So let me ask you this question.
link |
00:07:34.160
Can machines think?
link |
00:07:36.160
Can your ice cube tray one day think?
link |
00:07:40.800
Certainly, machines can think because I believe you're a machine, and I'm a machine, and I
link |
00:07:44.720
believe we both think.
link |
00:07:46.640
I think any other philosophical position is sort of a little ludicrous.
link |
00:07:51.360
What does think mean if it's not something that we do?
link |
00:07:53.760
And we are machines.
link |
00:07:56.160
So yes, machines can, but do we have a clue how to build such machines?
link |
00:08:00.880
That's a very different question.
link |
00:08:02.480
Are we capable of building such machines?
link |
00:08:05.680
Are we smart enough?
link |
00:08:06.720
We think we're smart enough to do anything, but maybe we're not.
link |
00:08:10.000
Maybe we're just not smart enough to build stuff like us.
link |
00:08:14.160
The kind of computer that Alan Turing was thinking about, do you think there is something
link |
00:08:18.720
fundamentally or significantly different between the computer between our ears, the biological
link |
00:08:25.040
computer that humans use, and the computer that he was thinking about from a sort of
link |
00:08:31.200
high level philosophical?
link |
00:08:33.280
Yeah, I believe that it's very wrong.
link |
00:08:36.480
In fact, I'm halfway through a, I think it'll be about a 480 page book, the working title
link |
00:08:44.160
is Not Even Wrong.
link |
00:08:45.440
And if I may, I'll tell you a bit about that book.
link |
00:08:48.080
Yes, please.
link |
00:08:48.720
So there's two, well, three thrusts to it.
link |
00:08:52.720
One is the history of computation, what we call computation.
link |
00:08:56.160
It goes all the way back to some manuscripts in Latin from 1614 and 1620 by Napier and
link |
00:09:03.760
Kepler through Babbage and Lovelace.
link |
00:09:06.640
And then Turing's 1936 paper is what we think of as the invention of modern computation.
link |
00:09:17.360
And that paper, by the way, did not set out to invent computation.
link |
00:09:23.120
It set out to negatively answer one of a later set of three problems from Hilbert.
link |
00:09:29.680
He called for an effective way of getting answers.
link |
00:09:38.560
And Hilbert really worked with rewriting rules, as did Church, who also, at the same time,
link |
00:09:49.360
a month earlier than Turing, disproved one of these three hypotheses of Hilbert's.
link |
00:09:54.880
The other two had already been disproved by Gödel.
link |
00:09:57.360
Turing set out to disprove it, because it's always easier to disprove these things than
link |
00:10:01.680
to prove that there is an answer.
link |
00:10:04.160
And so he needed, and it really came from his professor while he was an undergrad at
link |
00:10:12.880
Cambridge, who turned it into, is there a mechanical process?
link |
00:10:16.400
So he wanted to show a mechanical process that could calculate numbers, because that
link |
00:10:23.840
was a mechanical process that people used to generate tables.
link |
00:10:27.760
They were called computers, the people at the time.
link |
00:10:30.800
And they followed a set of rules where they had paper, and they would write numbers down,
link |
00:10:35.360
and based on the numbers, they'd keep writing other numbers.
link |
00:10:39.040
And they would produce numbers for these tables, engineering tables, that the more iterations
link |
00:10:46.800
they did, the more significant digits came out.
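As an illustration of that kind of rule-following (the example is mine, not one from the conversation), Newton's rule for square roots is exactly the sort of procedure a human computer could execute with pencil and paper, gaining significant digits with each iteration:

```python
from decimal import Decimal, getcontext

def sqrt_by_iteration(n, steps):
    """Newton's rule x -> (x + n/x) / 2 for sqrt(n): a purely
    mechanical pencil-and-paper procedure where each pass roughly
    doubles the number of correct significant digits."""
    getcontext().prec = 50
    x = Decimal(n)
    for _ in range(steps):
        x = (x + Decimal(n) / x) / 2
    return x

for steps in (1, 2, 3, 4):
    print(steps, sqrt_by_iteration(2, steps))  # digits of sqrt(2) accumulate
```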
link |
00:10:48.960
And so Turing, in that paper, set out to define what sort of machine could do that, mechanical
link |
00:10:56.960
machine, where it could produce an arbitrary number of digits in the same way a human computer
link |
00:11:04.320
did.
link |
00:11:06.720
And he came up with a very simple set of constraints where there was an infinite supply
link |
00:11:13.600
of paper.
link |
00:11:14.320
This is the tape of the Turing machine, and each Turing machine came with a set of instructions
link |
00:11:22.320
that, as a person, could do with pencil and paper, write down things on the tape and erase
link |
00:11:27.920
them and put new things there.
link |
00:11:30.000
And he was able to show that that system was not able to do something that Hilbert had
link |
00:11:36.560
hypothesized, so he disproved it.
link |
00:11:38.800
But he had to show that this system was good enough to do whatever could be done, but couldn't
link |
00:11:47.120
do this other thing.
link |
00:11:48.400
And there he said, and he says in the paper, I don't have any real arguments for this,
link |
00:11:53.840
but based on intuition.
link |
00:11:55.840
So that's how he defined computation.
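That setup, a rule table, a head, and an unbounded tape, can be sketched in a few lines of Python. The rule table below is Turing's own first example machine from the 1936 paper, which prints 0 and 1 on alternate squares forever; the encoding of the rules as a dictionary is my choice.

```python
def run_turing_machine(rules, state, steps):
    """Minimal Turing-style machine: an unbounded tape (a dict of
    square -> symbol), a head position, and a rule table mapping
    (state, scanned symbol) to (symbol to write, move, next state)."""
    tape, head = {}, 0
    for _ in range(steps):
        symbol = tape.get(head, " ")
        write, move, state = rules[(state, symbol)]
        tape[head] = write
        head += move
    return "".join(tape.get(i, " ") for i in range(min(tape), max(tape) + 1))

# Turing's first example machine: prints 0 and 1 on alternate squares.
rules = {
    ("b", " "): ("0", +1, "c"),
    ("c", " "): (" ", +1, "e"),
    ("e", " "): ("1", +1, "f"),
    ("f", " "): (" ", +1, "b"),
}
print(run_turing_machine(rules, "b", 8).strip())  # → 0 1 0 1
```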
link |
00:11:58.080
And then if you look over the next, from 1936 up until really around 1975, you see people
link |
00:12:05.440
struggling with, is this really what computation is?
link |
00:12:10.000
And so Marvin Minsky, very well known in AI, but also a fantastic mathematician, in his
link |
00:12:17.200
book Computation: Finite and Infinite Machines from the mid-'60s, which is a beautiful, beautiful mathematical
link |
00:12:22.400
book, says at the start of the book, well, what is computation?
link |
00:12:26.720
Turing says it's this, and yeah, I sort of think it's that.
link |
00:12:29.520
It doesn't really matter whether the stuff's made of wood or plastic.
link |
00:12:32.240
It's just that relatively cheap stuff can do this stuff.
link |
00:12:36.320
And so yeah, seems like computation.
link |
00:12:40.160
And Donald Knuth, in his first volume of his Art of Computer Programming in around 1968,
link |
00:12:49.440
says, well, what's computation?
link |
00:12:52.320
It's this stuff, like Turing says, that a person could do each step without too much
link |
00:12:57.200
trouble.
link |
00:12:57.600
And so one of his examples of what would be too much trouble was a step which required
link |
00:13:03.600
knowing whether Fermat's Last Theorem was true or not, because it was not known at the
link |
00:13:08.160
time.
link |
00:13:08.800
And that's too much trouble for a person to do as a step.
link |
00:13:12.160
And Hopcroft and Ullman sort of said a similar thing later that year.
link |
00:13:18.080
And by 1975, in the Aho,
link |
00:13:20.960
Hopcroft, and Ullman book, they're saying, well, you know, we don't really know what
link |
00:13:24.880
computation is, but intuition says this is sort of about right, and this is what it is.
link |
00:13:31.280
That's computation.
link |
00:13:32.400
It's a sort of agreed upon thing which happens to be really easy to implement in silicon.
link |
00:13:39.280
And then we had Moore's Law, which took off, and it's been an incredibly powerful tool.
link |
00:13:44.640
I certainly wouldn't argue with that.
link |
00:13:46.080
The version we have of computation, incredibly powerful.
link |
00:13:49.440
Can we just take a pause?
link |
00:13:51.440
So what we're talking about is there's an infinite tape with some simple rules of how
link |
00:13:55.440
to write on that tape, and that's what we're kind of thinking about.
link |
00:13:59.120
This is computation.
link |
00:14:00.080
Yeah, and it's modeled after humans, how humans do stuff.
link |
00:14:03.200
And I think it's, Turing says in the '36 paper, one of the critical facts here is that
link |
00:14:09.040
a human has a limited amount of memory.
link |
00:14:11.920
So that's what we're going to put onto our mechanical computers.
link |
00:14:15.280
So, you know, it's not like mass.
link |
00:14:19.680
It's not like mass or charge or, you know, it's not given by the universe.
link |
00:14:26.240
It was, this is what we're going to call computation.
link |
00:14:29.200
And then it has this really, you know, it had this really good implementation, which
link |
00:14:33.600
has completely changed our technological world.
link |
00:14:36.800
That's computation.
link |
00:14:40.400
Second part of the book, or second argument in the book: I have this two-by-two matrix, with science
link |
00:14:48.880
in the top row, engineering in the bottom row; the left column is intelligence, the right column
link |
00:14:56.080
is life.
link |
00:14:58.000
So in the bottom row, the engineering, there's artificial intelligence and artificial life.
link |
00:15:03.440
In the top row, there's neuroscience and abiogenesis.
link |
00:15:07.520
How does living matter turn into...
link |
00:15:09.920
How does nonliving matter become living matter?
link |
00:15:12.720
Four disciplines.
link |
00:15:14.000
These four disciplines all came into the current form in the period 1945 to 1965.
link |
00:15:24.000
That's interesting.
link |
00:15:24.880
There was neuroscience before, but it wasn't effective neuroscience.
link |
00:15:28.480
It was, you know, there were these ganglia and there's electrical charges, but no one
link |
00:15:32.160
knows what to do with it.
link |
00:15:33.680
And furthermore, there are a lot of players who are common across them.
link |
00:15:38.000
I've identified common players except for artificial intelligence and abiogenesis.
link |
00:15:43.360
I don't have one there, but for any other pair, I can point to people who worked in both.
link |
00:15:47.200
And a whole bunch of them, by the way, were at the research lab for electronics at MIT
link |
00:15:53.200
where Warren McCulloch held forth.
link |
00:15:58.240
In fact, McCulloch, Pitts, Lettvin, and Maturana wrote the first paper on functional neuroscience
link |
00:16:06.400
called What the Frog's Eye Tells the Frog's Brain, where instead of it just being this
link |
00:16:10.480
bunch of nerves, they sort of showed what different anatomical components were doing
link |
00:16:17.680
and telling other anatomical components and, you know, generating behavior in the frog.
link |
00:16:23.920
Would you put them as basically the fathers or one of the early pioneers of what are now
link |
00:16:29.840
called artificial neural networks?
link |
00:16:33.120
Yeah, I mean, McCulloch and Pitts.
link |
00:16:36.560
Pitts was much younger than him.
link |
00:16:38.880
In 1943, they had written a paper, inspired by Bertrand Russell, on a calculus for the ideas immanent
link |
00:16:48.240
in nervous activity, where they had tried to, without any real proof, they had tried to
link |
00:16:56.080
give a formalism for neurons basically in terms of logic, AND gates, OR gates, and NOT
link |
00:17:03.280
gates, with no real evidence that that was what was going on, but they talked about it
link |
00:17:09.120
and that was picked up by Minsky for his 1954 dissertation, which was on what we'd
link |
00:17:16.160
call today a neural network.
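A McCulloch-Pitts unit of that kind is easy to sketch. This is the textbook reading of their formalism, not code from the 1943 paper: a neuron fires when the weighted sum of its binary inputs reaches a threshold, and with the right weights it behaves like an AND, OR, or NOT gate.

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style unit: outputs 1 iff the weighted sum
    of its binary inputs reaches the threshold, else 0."""
    total = sum(i * w for i, w in zip(inputs, weights))
    return 1 if total >= threshold else 0

# With the right weights and thresholds the unit acts as a logic gate.
AND = lambda a, b: mp_neuron((a, b), (1, 1), 2)
OR  = lambda a, b: mp_neuron((a, b), (1, 1), 1)
NOT = lambda a: mp_neuron((a,), (-1,), 0)

print(AND(1, 1), OR(0, 1), NOT(1))  # → 1 1 0
```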
link |
00:17:18.400
It was picked up by John von Neumann when he was designing the EDVAC computer in 1945.
link |
00:17:26.640
He talked about its components being neurons, and in the references, he's only got
link |
00:17:31.680
three references, and one of them is the McCulloch-Pitts paper.
link |
00:17:35.600
So all these people and then the AI people and the artificial life people, which was
link |
00:17:40.000
John von Neumann originally, there's like overlap between all, they're all going around
link |
00:17:44.560
the same time.
link |
00:17:45.440
And three of these four disciplines turned to computation as their primary metaphor.
link |
00:17:51.760
So I've got a couple of chapters in the book.
link |
00:17:54.480
One is titled, wait, computers are people?
link |
00:17:58.480
Because that's where our computers came from.
link |
00:18:00.800
Yeah.
link |
00:18:01.920
And, you know, from people who were computing stuff.
link |
00:18:05.280
And then I've got another chapter, wait, people are computers?
link |
00:18:08.960
Which is about computational neuroscience.
link |
00:18:10.880
Yeah.
link |
00:18:11.360
So there's this whole circle here.
link |
00:18:14.160
And that computation is it.
link |
00:18:16.560
And, you know, I have talked to people about, well, maybe it's not computation that goes
link |
00:18:21.760
on in the head.
link |
00:18:22.960
Of course it is.
link |
00:18:24.160
Yeah.
link |
00:18:24.480
Okay, well, when Elon Musk's rocket goes up, is it computing?
link |
00:18:31.520
Is that how it gets into orbit?
link |
00:18:32.800
By computing?
link |
00:18:34.080
But we've got this idea, if you want to build an AI system, you write a computer program.
link |
00:18:39.840
Yeah, so the word computation very quickly starts doing a lot of work that it was not
link |
00:18:46.480
initially intended to do.
link |
00:18:48.640
It's the same if you talk about the universe as essentially performing a
link |
00:18:53.280
computation.
link |
00:18:53.760
Yeah, right.
link |
00:18:54.320
Wolfram does this.
link |
00:18:55.280
He turns it into computation.
link |
00:18:57.200
You don't turn rockets into computation.
link |
00:18:59.360
Yeah.
link |
00:18:59.840
By the way, when you say computation in our conversation, do you tend to think of computation
link |
00:19:04.640
narrowly in the way Turing thought of computation?
link |
00:19:08.000
It's gotten very, you know, squishy.
link |
00:19:14.080
Yeah.
link |
00:19:14.400
Squishy.
link |
00:19:17.680
But computation in the way Turing thinks about it and the way most people think about it
link |
00:19:22.640
actually fits very well with thinking like a hunter-gatherer.
link |
00:19:29.440
There are places and there can be stuff in places and the stuff in places can change
link |
00:19:34.000
and it stays there until someone changes it.
link |
00:19:37.120
And it's this metaphor of place and container, which, you know, is a combination of our place
link |
00:19:44.880
cells in our hippocampus and our cortex.
link |
00:19:48.160
But this is how we mostly use metaphors to think about things.
link |
00:19:52.240
And when we get outside of our metaphor range, we have to invent tools which we can sort
link |
00:19:57.120
of switch on to use.
link |
00:19:58.960
So calculus is an example of a tool.
link |
00:20:01.360
It can do stuff that our raw reasoning can't do, and we've got conventions of when you
link |
00:20:06.640
can use it or not.
link |
00:20:08.480
But sometimes, you know, people try to all the time, we always try to get physical metaphors
link |
00:20:15.280
for things, which is why quantum mechanics has been such a problem for a hundred years.
link |
00:20:21.040
Because it's a particle.
link |
00:20:22.080
No, it's a wave.
link |
00:20:22.880
It's got to be something we understand.
link |
00:20:24.640
And I say, no, it's some weird mathematical logic that's different from those, but we
link |
00:20:29.040
want that metaphor.
link |
00:20:30.720
Well, you know, I suspect that, you know, a hundred years or 200 years from now, neither
link |
00:20:35.680
quantum mechanics nor dark matter will be talked about in the same terms, you know,
link |
00:20:39.920
in the same way that the phlogiston theory eventually went away.
link |
00:20:44.320
Because it just wasn't an adequate explanatory metaphor, you know.
link |
00:20:49.440
That metaphor was stuff: there is stuff in the burning, the burning is in the matter.
link |
00:20:56.000
As it turns out, the burning was outside the matter, it was the oxygen.
link |
00:20:59.840
So our desire for metaphor and combined with our limited cognitive capabilities gets us
link |
00:21:05.440
into trouble.
link |
00:21:06.400
That's my argument in this book.
link |
00:21:08.320
Now, and people say, well, what is it then?
link |
00:21:10.080
And I say, well, I wish I knew that; I'd write the book about that.
link |
00:21:12.720
But I, you know, I give some ideas.
link |
00:21:14.640
But so there's the three things.
link |
00:21:17.440
Computation is sort of a particular thing we use.
link |
00:21:22.880
Oh, can I tell you one beautiful thing, one beautiful thing I found?
link |
00:21:26.320
So, you know, I used an example of a thing that's different from computation.
link |
00:21:30.000
You hit a drum and it vibrates, and there are some stationary points on the drum surface,
link |
00:21:35.520
you know, because the waves are going up and down the stationary points.
link |
00:21:37.840
Now, you could compute them to arbitrary precision, but the drum just knows them.
link |
00:21:45.760
The drum doesn't have to compute.
link |
00:21:47.760
What was the very first computer program ever written by Ada Lovelace?
link |
00:21:51.920
To compute Bernoulli numbers, and the Bernoulli numbers are exactly what you need to find those
link |
00:21:56.240
stable points in the drum surface.
link |
00:21:58.320
Wow.
link |
00:21:59.520
And there was a bug in the program.
link |
00:22:03.280
The arguments to divide were, I don't know,
link |
00:22:06.400
the arguments to divide were reversed in one place.
link |
00:22:10.000
And it still worked?
link |
00:22:11.040
Well, no, she never got to run it.
link |
00:22:12.560
They never built the analytical engine.
link |
00:22:14.000
She wrote the program without it, you know.
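The numbers Note G targeted can be generated from the standard recurrence for Bernoulli numbers. This is a modern sketch; Lovelace's actual program organized the calculation differently for the Analytical Engine.

```python
from fractions import Fraction
from math import comb

def bernoulli(n):
    """Bernoulli numbers B_0..B_n from the recurrence
    sum_{k=0}^{m-1} C(m+1, k) * B_k = -(m+1) * B_m, with B_0 = 1."""
    B = [Fraction(1)]
    for m in range(1, n + 1):
        s = sum(comb(m + 1, k) * B[k] for k in range(m))
        B.append(Fraction(-s, m + 1))
    return B

print(bernoulli(6))  # B_1 = -1/2, B_2 = 1/6, odd ones vanish, B_4 = -1/30, B_6 = 1/42
```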
link |
00:22:19.040
So the computation?
link |
00:22:21.040
Computation is sort of, you know, a thing that's become dominant as a metaphor, but
link |
00:22:27.200
is it the right metaphor?
link |
00:22:29.680
Three of these four fields adopted computation.
link |
00:22:33.360
And, you know, a lot of it swirls around Warren McCulloch and all his students, and he funded
link |
00:22:40.480
a lot of people.
link |
00:22:45.600
And our human metaphors, our limitations to human thinking, all play into this.
link |
00:22:50.000
Those are the three themes of the book.
link |
00:22:52.720
So I have a little to say about computation.
link |
00:22:54.880
So you're saying that there is a gap between the computer or the machine that performs
link |
00:23:05.040
computation and this machine that appears to have consciousness and intelligence.
link |
00:23:13.360
Yeah, that piece of meat in your head.
link |
00:23:16.080
Piece of meat.
link |
00:23:16.800
And maybe it's not just the meat in your head, it's the rest of you too.
link |
00:23:20.720
I mean, you actually have a neural system in your gut.
link |
00:23:24.960
I tend to also believe, not believe, but we're now dancing around things we don't know, but
link |
00:23:31.680
I tend to believe other humans are important.
link |
00:23:36.560
Like, so we're almost like, I just don't think we would ever have achieved the level
link |
00:23:42.080
of intelligence we have with other humans.
link |
00:23:44.880
I'm not saying so confidently, but I have an intuition that some of the intelligence
link |
00:23:49.680
is in the interaction.
link |
00:23:51.200
Yeah, and I think it seems to be very likely, again, this is speculation, but we, our species,
link |
00:24:00.240
and probably Neanderthals to some extent, because you can find old bones where they
link |
00:24:06.800
seem to be counting on them by putting notches, bones that were Neanderthals', we are able to put
link |
00:24:15.360
some of our stuff outside our body into the world.
link |
00:24:18.400
And then other people can share it.
link |
00:24:20.400
And then we get these tools that become shared tools.
link |
00:24:22.960
And so there's a whole coupling that would not occur in the single deep learning network,
link |
00:24:30.240
which was fed all of literature or something.
link |
00:24:33.840
Yeah, the neural network can't step outside of itself.
link |
00:24:38.320
But is there some, can we explore this dark room a little bit and try to get at something?
link |
00:24:46.640
What is the magic?
link |
00:24:47.840
Where does the magic come from in the human brain that creates the mind?
link |
00:24:52.480
What's your sense as scientists that try to understand it and try to build it?
link |
00:24:58.880
What are the directions that, if followed, might be productive?
link |
00:25:04.240
Is it creative, interactive robots?
link |
00:25:07.040
Is it creating large deep neural networks that do like self supervised learning and
link |
00:25:13.440
just like we'll discover that when you make something large enough, some interesting things
link |
00:25:18.800
will emerge?
link |
00:25:19.840
Is it through physics and chemistry, biology, like artificial life angle?
link |
00:25:23.600
Like we'll sneak up on it in this four-quadrant matrix that you mentioned.
link |
00:25:28.240
Is there anything you're most confident in, if you had to bet all your money on it?
link |
00:25:33.440
I wouldn't.
link |
00:25:35.040
So every intelligence we know, animal intelligence, dog intelligence,
link |
00:25:40.960
octopus intelligence, which is a very different sort of architecture from us.
link |
00:25:49.920
All the intelligences we know perceive the world in some way and then have action in
link |
00:25:59.520
the world, but they're able to perceive objects in a way which is actually pretty damn phenomenal
link |
00:26:11.520
and surprising.
link |
00:26:13.200
We tend to think that the box over here between us, which is a sound box, I think is a blue
link |
00:26:22.000
box, but blueness is something that we construct with color constancy.
link |
00:26:32.560
The blueness is not a direct function of the photons we're receiving.
link |
00:26:37.120
It's actually context, which is why you can, maybe you've seen the examples where someone
link |
00:26:47.600
turns a stop sign into some other sort of sign by just putting a couple of marks on
link |
00:26:53.520
them and the deep learning system gets it wrong.
link |
00:26:55.280
And everyone says, but the stop sign's red.
link |
00:26:58.160
Why is it thinking it's the other sort of sign?
link |
00:26:59.920
Because redness is not intrinsic in just the photons.
link |
00:27:02.800
It's actually a construction of an understanding of the whole world and the relationship between
link |
00:27:07.120
objects to get color constancy.
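One crude classical algorithm for color constancy, gray-world correction, shows the flavor of what "context, not photons" means. This is an illustrative example I'm adding, not something Brooks describes: estimate the illuminant from the per-channel means and divide it out.

```python
def gray_world_correct(pixels):
    """Gray-world color constancy: assume the scene averages to gray,
    attribute any cast in the per-channel means to the illuminant,
    and divide it out. `pixels` is a list of (r, g, b) floats."""
    n = len(pixels)
    means = [sum(p[c] for p in pixels) / n for c in range(3)]
    gray = sum(means) / 3
    return [tuple(p[c] * gray / means[c] for c in range(3)) for p in pixels]

# The same surfaces seen under a bluish illuminant: after correction,
# a surface that reflects all channels equally comes out neutral again.
scene = [(0.2, 0.2, 0.4), (0.6, 0.6, 1.2), (0.1, 0.1, 0.2)]
print(gray_world_correct(scene)[0])
```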
link |
00:27:11.040
But our tendency, in order to get an arXiv paper out really quickly, is to just
link |
00:27:15.760
show a lot of data and give the labels and hope it figures it out.
link |
00:27:18.880
But it's not figuring it out in the same way we do.
link |
00:27:21.040
We have a very complex perceptual understanding of the world.
link |
00:27:24.720
Dogs have a very different perceptual understanding based on smell.
link |
00:27:28.000
They go smell a post, they can tell how many different dogs have visited it in the last
link |
00:27:34.880
10 hours and how long ago.
link |
00:27:36.320
There's all sorts of stuff that we just don't perceive about the world.
link |
00:27:39.440
And just taking a single snapshot is not perceiving the world.
link |
00:27:42.400
It's not seeing the registration between us and the object.
link |
00:27:48.400
And registration is a philosophical concept.
link |
00:27:52.160
Brian Cantwell Smith talks about it a lot.
link |
00:27:54.560
Very difficult, squirmy thing to understand.
link |
00:27:59.200
But I think none of our systems do that.
link |
00:28:02.080
We've always talked in AI about the symbol grounding problem, how our symbols that we
link |
00:28:06.000
talk about are grounded in the world.
link |
00:28:08.080
And when deep learning came along and started labeling images, people said, ah, the grounding
link |
00:28:12.320
problem has been solved.
link |
00:28:13.440
No, the labeling problem was solved with some percentage accuracy, which is different from
link |
00:28:18.800
the grounding problem.
link |
00:28:20.560
So you agree with Hans Moravec and what's called Moravec's paradox, which highlights
link |
00:28:28.880
this counterintuitive notion that reasoning is easy, but perception and mobility are hard.
link |
00:28:39.440
Yeah.
link |
00:28:39.840
We shared an office when I was working on computer vision and he was working on his
link |
00:28:45.360
first mobile robot.
link |
00:28:46.640
What were those conversations like?
link |
00:28:48.400
They were great.
link |
00:28:50.160
So do you still kind of, maybe you can elaborate, do you still believe this kind of notion that
link |
00:28:56.160
perception is really hard?
link |
00:28:59.600
Like, can you make sense of why we humans have this poor intuition about what's hard
link |
00:29:04.080
and not?
link |
00:29:04.480
Well, let me give you sort of another story.
link |
00:29:10.640
Sure.
link |
00:29:11.520
If you go back to the original teams working on AI from the late 50s into the 60s, and
link |
00:29:21.680
you go to the AI lab at MIT, who was it that was doing that?
link |
00:29:27.760
It was a bunch of really smart kids who got into MIT and they were intelligent.
link |
00:29:32.480
So what's intelligence about?
link |
00:29:34.160
Well, the stuff they were good at, playing chess, doing integrals, that was hard stuff.
link |
00:29:40.480
But, you know, a baby could see stuff, that wasn't intelligent, anyone could do that,
link |
00:29:45.680
that's not intelligence.
link |
00:29:47.280
And so, you know, there was this intuition that the hard stuff is the things they were
link |
00:29:52.480
good at and the easy stuff was the stuff that everyone could do.
link |
00:29:57.440
Yeah.
link |
00:29:57.760
And maybe I'm overplaying it a little bit, but I think there's an element of that.
link |
00:30:00.880
Yeah, I mean, I don't know how much truth there is to, like chess, for example, was
link |
00:30:08.480
for the longest time seen as the highest level of intellect, right?
link |
00:30:14.720
Until we got computers that were better at it than people.
link |
00:30:17.200
And then we realized, you know, if you go back to the 90s, you'll see, you know, the
link |
00:30:21.120
stories in the press around when Kasparov was beaten by Deep Blue.
link |
00:30:26.320
Oh, this is the end of all sorts of things.
link |
00:30:28.320
Computers are going to be able to do anything from now on.
link |
00:30:30.640
And we saw exactly the same stories with AlphaZero, the Go playing program.
link |
00:30:36.160
Yeah.
link |
00:30:37.280
But still, to me, reasoning is a special thing.
link |
00:30:41.200
And perhaps...
link |
00:30:41.920
No, actually, we're really bad at reasoning.
link |
00:30:44.640
We just use these analogies based on our hunter gatherer intuitions.
link |
00:30:48.400
But why is that not, don't you think the ability to construct metaphor is a really powerful
link |
00:30:53.520
thing?
link |
00:30:53.920
Oh, yeah, it is.
link |
00:30:54.400
Tell stories.
link |
00:30:55.200
It is.
link |
00:30:55.520
It's the constructing the metaphor and registering that something constant in our brains.
link |
00:31:00.960
Like, isn't that what we're doing with vision too?
link |
00:31:04.000
And we're telling our stories.
link |
00:31:06.080
We're constructing good models of the world.
link |
00:31:08.560
Yeah, yeah.
link |
00:31:09.760
But I think we jumped between what we're capable of and how we're doing it right there.
link |
00:31:16.400
It was a little confusion that went on as we were telling each other stories.
link |
00:31:21.680
Yes, exactly.
link |
00:31:23.440
Trying to delude each other.
link |
00:31:24.800
No, I just think I'm not exactly so.
link |
00:31:27.280
I'm trying to pull apart this Moravec's paradox.
link |
00:31:30.160
I don't view it as a paradox.
link |
00:31:33.280
What did evolution spend its time on?
link |
00:31:36.000
Yes.
link |
00:31:36.320
It spent its time on getting us to perceive and move in the world.
link |
00:31:39.360
That was 600 million years as multi cell creatures doing that.
link |
00:31:43.600
And then it was relatively recent that we were able to hunt or gather or even animals hunting.
link |
00:31:53.120
That's much more recent.
link |
00:31:54.960
And then anything that we, speech, language, those things are a couple of hundred thousand
link |
00:32:02.960
years probably, if that long.
link |
00:32:05.760
And then agriculture, 10,000 years.
link |
00:32:09.520
All that stuff was built on top of those earlier things, which took a long time to develop.
link |
00:32:14.320
So if you then look at the engineering of these things, so building it into robots,
link |
00:32:20.160
what's the hardest part of robotics?
link |
00:32:22.000
Do you think as the decades that you worked on robots in the context of what we're talking
link |
00:32:29.920
about, vision, perception, the actual sort of the biomechanics of movement, I'm kind
link |
00:32:37.520
of drawing parallels here between humans and machines always.
link |
00:32:40.800
Like what do you think is the hardest part of robotics?
link |
00:32:44.320
I just want to think all of them.
link |
00:32:49.360
There are no easy parts to do well.
link |
00:32:53.040
We sort of go reductionist and we reduce it.
link |
00:32:55.600
If only we had all the location of all the points in 3D, things would be great.
link |
00:33:02.400
If only we had labels on the images, things would be great.
link |
00:33:07.440
But as we see, that's not good enough.
link |
00:33:10.640
Some deeper understanding.
link |
00:33:13.040
But if I came to you and I could solve one category of problems in robotics instantly,
link |
00:33:21.680
what would give you the greatest pleasure?
link |
00:33:28.160
I mean, you look at robots that manipulate objects, what's hard about that?
link |
00:33:36.400
You know, is it the perception, is it the reasoning about the world, that common sense
link |
00:33:43.040
reasoning, is it the actual building a robot that's able to interact with the world?
link |
00:33:49.680
Is it like human aspects of a robot that's interacting with humans in that game theory
link |
00:33:54.960
of how they work well together?
link |
00:33:56.080
Well, let's talk about manipulation for a second because I had this really blinding
link |
00:34:00.000
moment, you know, I'm a grandfather, so grandfathers have blinding moments.
link |
00:34:05.360
Just three or four miles from here, last year, my 16 month old grandson was in his new house
link |
00:34:16.240
for the first time, right?
link |
00:34:18.240
First time in this house.
link |
00:34:19.760
And he'd never been able to get to a window before, but this had some low windows.
link |
00:34:25.040
And he goes up to this window with a handle on it that he's never seen before.
link |
00:34:29.360
And he's got one hand pushing the window and the other hand turning the handle to open
link |
00:34:34.800
the window.
link |
00:34:36.640
He knew two different hands, two different things he knew how to put together.
link |
00:34:44.080
And he's 16 months old.
link |
00:34:45.520
And there you are watching in awe.
link |
00:34:51.840
In an environment he'd never seen before, a mechanism he'd never seen.
link |
00:34:55.200
How did he do that?
link |
00:34:56.320
Yes, that's a good question.
link |
00:34:57.600
How did he do that?
link |
00:34:58.880
That's why.
link |
00:34:59.380
It's like, okay, like you could see the leap of genius from using one hand to perform a
link |
00:35:05.700
task to combining, doing, I mean, first of all, in manipulation, that's really difficult.
link |
00:35:11.460
It's like two hands, both necessary to complete the action.
link |
00:35:15.940
And completely different.
link |
00:35:16.820
And he'd never seen a window open before, but he inferred somehow handle open something.
link |
00:35:25.140
Yeah, there may have been a lot of slightly different failure cases that you didn't see.
link |
00:35:32.180
Not with a window, but with other objects of turning and twisting and handles.
link |
00:35:37.540
There's a great counter to reinforcement learning.
link |
00:35:42.900
We'll just give the robot plenty of time to try everything.
link |
00:35:50.260
Can I tell a little side story here?
link |
00:35:52.260
Yeah, so I'm in DeepMind in London, this is three, four years ago, where there's a big
link |
00:36:01.940
Google building, and then you go inside and you go through this more security, and then
link |
00:36:06.020
you get to DeepMind where the other Google employees can't go.
link |
00:36:09.060
And I'm in a conference room, a conference room with some of the people, and they tell
link |
00:36:15.540
me about their reinforcement learning experiment with robots, which are just trying stuff out.
link |
00:36:23.940
And they're my robots.
link |
00:36:25.380
They're Sawyers.
link |
00:36:26.900
We sold them.
link |
00:36:29.060
And they really like them because Sawyers are compliant and can sense forces, so they
link |
00:36:33.300
don't break when they're bashing into walls.
link |
00:36:36.180
They stop and they do all this stuff.
link |
00:36:38.980
So you just let the robot do stuff, and eventually it figures stuff out.
link |
00:36:42.580
By the way, Sawyer, we're talking about robot manipulation, so robot arms and so on.
link |
00:36:47.380
Yeah, Sawyer's a robot.
link |
00:36:50.180
What's Sawyer?
link |
00:36:51.220
Sawyer's a robot arm that my company Rethink Robotics built.
link |
00:36:55.140
Thank you for the context.
link |
00:36:56.580
Sorry.
link |
00:36:57.060
Okay, cool.
link |
00:36:57.540
So we're in DeepMind.
link |
00:36:59.380
And it's in the next room, these robots are just bashing around to try and use reinforcement
link |
00:37:04.100
learning to learn how to act.
link |
00:37:05.940
Can I go see them?
link |
00:37:06.740
Oh no, they're secret.
link |
00:37:08.340
They were my robots.
link |
00:37:09.300
They were secret.
link |
00:37:10.820
That's hilarious.
link |
00:37:11.700
Okay.
link |
00:37:12.100
Anyway, the point is, you know, this idea that you just let reinforcement learning figure
link |
00:37:17.860
everything out is so counter to how a kid does stuff.
link |
00:37:21.780
So again, story about my grandson.
link |
00:37:24.740
I gave him this box that had lots of different lock mechanisms.
link |
00:37:29.780
He didn't randomly, you know, and he was 18 months old, he didn't randomly try to touch
link |
00:37:34.260
every surface or push everything.
link |
00:37:35.940
He found he could see where the mechanism was, and he started exploring the mechanism
link |
00:37:42.020
for each of these different lock mechanisms.
link |
00:37:44.580
And there was reinforcement, no doubt, of some sort going on there.
link |
00:37:48.660
But he applied a pre-filter, which cut down the search space dramatically.
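One minimal way to picture that pre-filter, purely as an illustration (the function and the lock-box surfaces below are hypothetical, not anything from the conversation or DeepMind's actual setup), is an action mask applied before an epsilon-greedy choice: exploration still happens, but only inside the drastically pruned candidate set.

```python
import random

def choose_action(q, actions, is_plausible, epsilon=0.1, rng=random):
    """Epsilon-greedy choice, restricted to actions that pass a
    'plausibility' pre-filter (e.g. 'near the visible mechanism')."""
    candidates = [a for a in actions if is_plausible(a)]
    if rng.random() < epsilon:
        return rng.choice(candidates)                # explore, pruned set only
    return max(candidates, key=lambda a: q.get(a, 0.0))  # exploit best known

# Nine touchable surfaces on a lock box; only two are near the mechanism.
surfaces = list(range(9))
near_mechanism = {3, 4}
pick = choose_action({}, surfaces, lambda a: a in near_mechanism)
print(pick)  # always 3 or 4: the filter pruned 7 of the 9 surfaces
```

The filter here is hand-written; the open question in the conversation is where such a filter comes from in a child.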
link |
00:37:55.540
I wonder to what level we're able to introspect what's going on.
link |
00:37:59.700
Because what's also possible is you have something like reinforcement learning going
link |
00:38:03.780
on in the mind in the space of imagination.
link |
00:38:05.860
So like you have a good model of the world you're predicting and you may be running those
link |
00:38:10.900
tens of thousands of like loops, but you're like, as a human, you're just looking at yourself
link |
00:38:16.820
trying to tell a story of what happened.
link |
00:38:18.740
And it might seem simple, but maybe there's a lot of computation going on.
link |
00:38:24.500
Whatever it is, but there's also a mechanism that's being built up.
link |
00:38:28.020
It's not just random search.
link |
00:38:30.420
Yeah, that mechanism prunes it dramatically.
link |
00:38:33.780
Yeah, that pruning, that pruning stuff, but it doesn't, it's possible that that's, so
link |
00:38:40.980
you don't think that's akin to a neural network inside a reinforcement learning algorithm.
link |
00:38:46.740
Is it possible?
link |
00:38:49.140
It's, yeah, until it's possible.
link |
00:38:52.340
It's possible, but I'll be incredibly surprised if that happens.
link |
00:39:01.380
I'll also be incredibly surprised that after all the decades that I've been doing this,
link |
00:39:06.020
where every few years someone thinks, now we've got it.
link |
00:39:10.100
Now we've got it.
link |
00:39:12.580
Four or five years ago, I was saying, I don't think we've got it yet.
link |
00:39:15.620
And everyone was saying, you don't understand how powerful AI is.
link |
00:39:18.820
I had people tell me, you don't understand how powerful it is.
link |
00:39:22.420
I sort of had a track record of what the world had done to think, well, this is no different
link |
00:39:30.420
from before.
link |
00:39:31.460
Or we have bigger computers.
link |
00:39:33.060
We had bigger computers in the 90s and we could do more stuff.
link |
00:39:37.940
But okay, so let me push back because I'm generally sort of optimistic and try to find
link |
00:39:43.380
the beauty in things.
link |
00:39:44.260
I think there's a lot of surprising and beautiful things that neural networks, this new generation
link |
00:39:51.860
of deep learning revolution has revealed to me, has continually been very surprising
link |
00:39:57.460
the kind of things it's able to do.
link |
00:39:59.300
Now, generalizing that over saying like this, we've solved intelligence.
link |
00:40:03.140
That's another big leap.
link |
00:40:05.220
But is there something surprising and beautiful to you about neural networks that were actually
link |
00:40:10.500
you said back and said, I did not expect this?
link |
00:40:16.100
Oh, I think their performance on ImageNet was shocking.
link |
00:40:22.260
The computer vision in those early days was just very like, wow, okay.
link |
00:40:26.340
That doesn't mean that they're solving everything in computer vision we need to solve or in
link |
00:40:32.500
vision for robots.
link |
00:40:33.700
What about AlphaZero and self play mechanisms and reinforcement learning?
link |
00:40:37.220
Yeah, that was all in the 90s.
link |
00:40:39.300
Yeah, that was all in Donald Michie's 1961 paper.
link |
00:40:44.020
Everything that was there, which introduced reinforcement learning.
link |
00:40:48.340
No, but come on.
link |
00:40:49.300
So no, you're talking about the actual techniques.
link |
00:40:52.020
But isn't it surprising to you the level it's able to achieve with no human supervision
link |
00:40:58.740
of chess play?
link |
00:40:59.700
Like, to me, there's a big, big difference between Deep Blue and...
link |
00:41:05.860
Maybe what that's saying is how overblown our view of ourselves is.
link |
00:41:13.140
You know, the chess is easy.
link |
00:41:16.740
Yeah, I mean, I came across this 1946 report that, and I'd seen this as a kid in one of
link |
00:41:28.340
those books that my mother had given me actually.
link |
00:41:30.340
The 1946 report, which pitted someone with an abacus against an electronic calculator,
link |
00:41:39.620
and he beat the electronic calculator.
link |
00:41:42.500
You know, so there at that point was, well, humans are still better than machines at calculating.
link |
00:41:48.980
Are you surprised today that a machine can, you know, do a billion floating point operations
link |
00:41:54.420
a second and, you know, you're puzzling for minutes through one?
link |
00:41:58.500
I mean, I don't know, but I am certainly surprised there's something, to me, different about
link |
00:42:07.460
learning, so a system that's able to learn.
link |
00:42:10.420
Learning.
link |
00:42:10.980
See, now you're getting into one of the deadly sins.
link |
00:42:15.300
Because of using terms overly broadly.
link |
00:42:19.220
Yeah, I mean, there's so many different forms of learning.
link |
00:42:21.700
Yeah.
link |
00:42:22.260
So many different forms.
link |
00:42:23.300
You know, I learned my way around the city.
link |
00:42:24.980
I learned to play chess.
link |
00:42:26.500
I learned Latin.
link |
00:42:28.580
I learned to ride a bicycle.
link |
00:42:30.100
All of those are, you know, very different capabilities.
link |
00:42:33.700
Yeah.
link |
00:42:34.180
And if someone, you know, has a, you know, in the old days, people would write a paper
link |
00:42:41.860
about learning something.
link |
00:42:43.220
Now the corporate press office puts out a press release about how Company X is leading
link |
00:42:52.580
the world because they have a system that can...
link |
00:42:56.820
Yeah, but here's the thing.
link |
00:42:58.180
Okay.
link |
00:42:58.500
So what is learning?
link |
00:43:00.100
When I refer to...
link |
00:43:00.820
Learning is many things.
link |
00:43:02.420
But...
link |
00:43:02.580
It's a suitcase word.
link |
00:43:04.660
It's a suitcase word, but loosely, there's a dumb system, and over time, it becomes smart.
link |
00:43:13.700
Well, it becomes less dumb at the thing that it's doing.
link |
00:43:16.340
Smart is a loaded word.
link |
00:43:19.140
Yes, less dumb at the thing it's doing.
link |
00:43:21.220
It gets better performance under some measure, under some set of conditions at that thing.
link |
00:43:27.060
And most of these learning algorithms, learning systems, fail when you change the conditions
link |
00:43:35.780
just a little bit in a way that humans don't.
link |
00:43:37.940
So I was at DeepMind, the AlphaGo had just come out, and I said, what would have happened
link |
00:43:45.940
if you'd given it a 21 by 21 board instead of a 19 by 19 board?
link |
00:43:49.940
They said, fail totally.
link |
00:43:51.620
But a human player would actually be able to play.
link |
00:43:55.540
And actually, funny enough, if you look at DeepMind's work since then, they're presenting
link |
00:44:02.980
a lot of algorithms that would do well at the bigger board.
link |
00:44:07.620
So they're slowly expanding this generalization.
link |
00:44:10.340
I mean, to me, there's a core element there.
link |
00:44:12.580
I think it is very surprising to me that even in a constrained game of chess or Go, that
link |
00:44:20.100
through self play, by a system playing itself, that it can achieve superhuman level performance
link |
00:44:28.580
through learning alone.
link |
00:44:29.940
Okay, so you didn't like it when I referred to Donald Michie's 1961 paper.
link |
00:44:38.980
There, in the second part of it, which came a year later, they had self play on an electronic
link |
00:44:46.020
computer at tic tac toe, okay, but it learned to play tic tac toe through self play.
link |
00:44:52.180
And it learned to play optimally.
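Michie's system learned through trial-and-error self play, but the optimum it converges toward can be checked exhaustively. The sketch below is a plain minimax solver, not Michie's matchbox learning procedure, confirming the target of "playing optimally": perfect play by both sides in tic tac toe is a draw.

```python
from functools import lru_cache

LINES = [(0,1,2), (3,4,5), (6,7,8), (0,3,6),
         (1,4,7), (2,5,8), (0,4,8), (2,4,6)]

def winner(board):
    """Return 'X' or 'O' if someone has three in a row, else None."""
    for a, b, c in LINES:
        if board[a] != ' ' and board[a] == board[b] == board[c]:
            return board[a]
    return None

@lru_cache(maxsize=None)
def minimax(board, player):
    """Value of the position for X: +1 X wins, -1 O wins, 0 draw."""
    w = winner(board)
    if w:
        return 1 if w == 'X' else -1
    if ' ' not in board:
        return 0
    nxt = 'O' if player == 'X' else 'X'
    values = [minimax(board[:i] + player + board[i+1:], nxt)
              for i, sq in enumerate(board) if sq == ' ']
    return max(values) if player == 'X' else min(values)

print(minimax(' ' * 9, 'X'))  # 0: perfect play from both sides is a draw
```

A self-play learner like MENACE reaches the same value without ever enumerating the tree, which is what made it striking in 1961.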
link |
00:44:54.580
What I'm saying is, okay, I have a little bit of a bias, but I find ideas beautiful,
link |
00:45:02.740
but only when they actually realize the promise.
link |
00:45:06.660
That's another level of beauty.
link |
00:45:08.420
For example, what Bezos and Elon Musk are doing with rockets.
link |
00:45:13.540
We had rockets for a long time, but doing reusable cheap rockets, it's very impressive.
link |
00:45:18.900
In the same way, I would have not predicted.
link |
00:45:22.980
First of all, when I started and fell in love with AI, the game of Go was seen to be impossible
link |
00:45:30.820
to solve.
link |
00:45:31.300
Okay, so I thought maybe, you know, maybe it'd be possible to maybe have big leaps in
link |
00:45:38.500
a Moore's law style of way, in computation, I'll be able to solve it.
link |
00:45:42.020
But I would never have guessed that you can learn your way, however, I mean, in the narrow
link |
00:45:50.500
sense of learning, learn your way to beat the best people in the world at the game of
link |
00:45:55.620
Go without human supervision, not studying the game of experts.
link |
00:45:59.300
Okay, so using a different learning technique, Arthur Samuel in the early 60s, and he was
link |
00:46:08.900
the first person to use machine learning, had a program that could beat the world champion
link |
00:46:14.900
at checkers.
link |
00:46:16.100
And that at the time was considered amazing.
link |
00:46:19.860
By the way, Arthur Samuel had some fantastic advantages.
link |
00:46:23.460
Do you want to hear Arthur Samuel's advantages?
link |
00:46:25.700
Two things.
link |
00:46:26.660
One, he was at the 1956 AI conference.
link |
00:46:30.500
I knew Arthur later in life.
link |
00:46:32.420
He was at Stanford when I was a graduate student there.
link |
00:46:34.500
He wore a tie and a jacket every day, the rest of us didn't.
link |
00:46:38.900
Delightful man, delightful man.
link |
00:46:42.980
It turns out Claude Shannon, in a 1950 Scientific American article, on chess playing, outlined
link |
00:46:51.620
the learning mechanism that Arthur Samuel used, and they had met in 1956.
link |
00:46:57.140
I assume there was some communication, but I don't know that for sure.
link |
00:47:00.580
But Arthur Samuel had been a vacuum tube engineer, getting reliability of vacuum tubes, and then
link |
00:47:07.060
had overseen the first transistorized computers at IBM.
link |
00:47:11.860
And in those days, before you shipped a computer, you ran it for a week to get early failures.
link |
00:47:18.180
So he had this whole farm of computers running random code for hours and hours for each computer.
link |
00:47:28.580
He had a whole bunch of them.
link |
00:47:29.940
So he ran his chess learning program with self play on IBM's production line.
link |
00:47:38.820
He had more computation available to him than anyone else in the world, and then he was
link |
00:47:43.700
able to produce a chess playing program, I mean a checkers playing program, that could
link |
00:47:48.260
beat the world champion.
link |
00:47:49.940
So that's amazing.
link |
00:47:51.540
The question is, what I mean surprised, I don't just mean it's nice to have that accomplishment,
link |
00:47:58.020
is there is a stepping towards something that feels more intelligent than before.
link |
00:48:06.180
Yeah, but that's in your view of the world.
link |
00:48:08.740
Okay, well let me then, it doesn't mean I'm wrong.
link |
00:48:11.380
No, no it doesn't.
link |
00:48:13.540
So the question is, if we keep taking steps like that, how far that takes us?
link |
00:48:18.740
Are we going to build better recommender systems?
link |
00:48:21.780
Are we going to build a better robot?
link |
00:48:23.860
Or will we solve intelligence?
link |
00:48:25.940
So, you know, I'm putting my bet on, but still missing a whole lot.
link |
00:48:33.300
A lot.
link |
00:48:34.500
And why would I say that?
link |
00:48:36.020
Well, in these games, they're all, you know, 100% information games, but again, but each
link |
00:48:43.060
of these systems is a very short description of the current state, which is different from
link |
00:48:50.420
registering and perception in the world, which gets back to Moravec's paradox.
link |
00:48:55.620
I'm definitely not saying that chess is somehow harder than perception or any kind of, even
link |
00:49:05.780
any kind of robotics in the physical world, I definitely think is way harder than the
link |
00:49:10.180
game of chess.
link |
00:49:10.820
So I was always much more impressed by the workings of the human mind.
link |
00:49:15.300
It's incredible.
link |
00:49:15.940
The human mind is incredible.
link |
00:49:17.700
I believe that from the very beginning, I wanted to be a psychiatrist for the longest
link |
00:49:20.340
time.
link |
00:49:20.740
I always thought that's way more incredible in the game of chess.
link |
00:49:23.140
I think the game of chess is, I love the Olympics.
link |
00:49:26.740
It's just another example of us humans picking a task and then agreeing that a million humans
link |
00:49:31.860
will dedicate their whole life to that task.
link |
00:49:33.860
And that's the cool thing that the human mind is able to focus on one task and then compete
link |
00:49:39.860
against each other and achieve like weirdly incredible levels of performance.
link |
00:49:44.500
That's the aspect of chess that's super cool.
link |
00:49:46.740
Not that chess in itself is really difficult.
link |
00:49:49.700
It's like Fermat's Last Theorem is not in itself to me that interesting.
link |
00:49:53.460
The fact that thousands of people have been struggling to solve that particular problem
link |
00:49:57.780
is fascinating.
link |
00:49:58.500
So can I tell you my disease in this way?
link |
00:50:00.500
Sure.
link |
00:50:01.460
Which actually is closer to what you're saying.
link |
00:50:03.380
So as a child, I was building various, I called them computers.
link |
00:50:07.620
They weren't general purpose computers.
link |
00:50:09.380
Ice cube tray.
link |
00:50:10.180
The ice cube tray was one.
link |
00:50:11.380
But I built other machines.
link |
00:50:12.660
And what I liked to build was machines that could beat adults at a game and the adults
link |
00:50:18.100
couldn't beat my machine.
link |
00:50:19.700
Yeah.
link |
00:50:19.940
So you were like, that's powerful.
link |
00:50:22.660
That's a way to rebel.
link |
00:50:24.820
Oh, by the way, when was the first time you built something that outperformed you?
link |
00:50:33.220
Do you remember?
link |
00:50:34.660
Well, I knew how it worked.
link |
00:50:36.340
I was probably nine years old and I built a thing that was a game where you take turns
link |
00:50:42.020
in taking matches from a pile and either the one who takes the last one or the one who
link |
00:50:47.460
doesn't take the last one wins.
link |
00:50:48.660
I forget.
link |
00:50:49.460
And so it was pretty easy to build that out of wires and nails and little coils that were
link |
00:50:54.500
like plugging in the number and a few light bulbs.
link |
00:50:59.060
The one I was proud of, I was 12 when I built a thing out of old telephone switchboard switches
link |
00:51:07.220
that could always win at tic tac toe.
link |
00:51:11.380
And that was a much harder circuit to design.
link |
00:51:14.500
But again, it was no active components.
link |
00:51:17.620
It was just three-position switches: empty, X, O.
link |
00:51:23.300
And nine of them and a light bulb on which move it wanted next.
link |
00:51:29.460
And then the human would go and move that.
link |
00:51:31.540
See, there's magic in that creation.
link |
00:51:33.060
There was.
link |
00:51:33.860
Yeah, yeah.
link |
00:51:34.580
I tend to see magic in robots that like I also think that intelligence is a little bit
link |
00:51:43.700
overrated.
link |
00:51:44.740
I think we can have deep connections with robots very soon.
link |
00:51:49.140
And well, we'll come back to connections for sure.
link |
00:51:52.500
But I do want to say, I think too many people make the mistake of seeing that magic and
link |
00:52:00.100
thinking, well, we'll just continue.
link |
00:52:02.820
But each one of those is a hard fought battle for the next step, the next step.
link |
00:52:07.300
Yes.
link |
00:52:08.180
The open question here is, and this is why I'm playing devil's advocate, but I often
link |
00:52:11.940
do when I read your blog post in my mind because I have like this eternal optimism, is it's
link |
00:52:18.420
not clear to me.
link |
00:52:19.380
So I don't do what obviously the journalists do or they give into the hype, but it's not
link |
00:52:23.940
obvious to me how many steps away we are from a truly transformational understanding of
link |
00:52:34.740
what it means to build intelligent systems or how to build intelligent systems.
link |
00:52:40.580
I'm also aware of the whole history of artificial intelligence, which is where your deep grounding
link |
00:52:45.140
of this is, is there has been an optimism for decades and that optimism, just like reading
link |
00:52:51.860
old optimism is absurd because people were like, this is, they were saying things are
link |
00:52:57.300
trivial for decades since the sixties, they were saying everything is trivial.
link |
00:53:00.740
Computer vision is trivial, but I think my mind is working crisply enough to where, I
link |
00:53:07.700
mean, we can dig into if you want.
link |
00:53:09.700
I'm really surprised by the things DeepMind has done.
link |
00:53:12.900
I don't think they're so, they're yet close to solving intelligence, but I'm not sure
link |
00:53:19.300
it's not 10 years away.
link |
00:53:22.500
What I'm referring to is interesting to see when the engineering, it takes that idea to
link |
00:53:30.100
scale and the idea works.
link |
00:53:32.660
And no, it fools people.
link |
00:53:34.900
Okay.
link |
00:53:35.300
Honestly, Rodney, if it was you, me and Demis inside a room, forget the press, forget all
link |
00:53:40.420
those things, just as a scientist, as a roboticist, that wasn't surprising to you that at scale.
link |
00:53:47.060
So we're talking about very large now, okay, let's pick one.
link |
00:53:50.180
That's the most surprising to you.
link |
00:53:52.340
Okay.
link |
00:53:52.820
Please don't yell at me.
link |
00:53:53.940
GPT 3, okay.
link |
00:53:56.180
Hold on, hold on, I was going to say, okay, AlphaZero, AlphaGo, AlphaGo Zero, Alpha
link |
00:54:03.300
Zero, and then AlphaFold 1 and 2.
link |
00:54:06.340
So do any of these kind of have this core of, forget usefulness or application and so
link |
00:54:13.460
on, which you could argue for AlphaFold, like, as a scientist, were those surprising
link |
00:54:19.220
to you that it worked as well as it did?
link |
00:54:23.140
Okay, so if we're going to make the distinction between surprise and usefulness, and I have
link |
00:54:30.820
to explain this, I would say AlphaFold, and one of the problems at the moment with Alpha
link |
00:54:40.580
Fold is, you know, it gets a lot of them right, which is a surprise to me, because they're
link |
00:54:44.820
a really complex thing, but you don't know which ones it gets right, which then is a
link |
00:54:51.940
bit of a problem.
link |
00:54:52.500
Now they've come out with a recent...
link |
00:54:53.620
You mean the structure of the proteins, it gets a lot of those right.
link |
00:54:56.180
Yeah, it's a surprising number of them right, it's been a really hard problem.
link |
00:55:00.180
So that was a surprise how many it gets right.
link |
00:55:03.220
So far, the usefulness is limited, because you don't know which ones are right or not,
link |
00:55:07.460
and now they've come out with a thing in the last few weeks, which is trying to get a useful
link |
00:55:12.900
tool out of it, and they may well do it.
link |
00:55:15.620
In that sense, at least AlphaFold is different, because the AlphaFold tool is different,
link |
00:55:21.940
because now it's producing data sets that are actually, you know, potentially revolutionizing
link |
00:55:27.460
computational biology, like they will actually help a lot of people, but...
link |
00:55:31.620
You would say potentially revolutionizing, we don't know yet, but yeah.
link |
00:55:36.020
That's true, yeah.
link |
00:55:36.820
But they're, you know, but I got you.
link |
00:55:39.220
I mean, this is...
link |
00:55:40.020
Okay, so you know what, this is gonna be so fun, so let's go right into it.
link |
00:55:45.860
Speaking of robots that operate in the real world, let's talk about self driving cars.
link |
00:55:52.740
Oh, okay.
link |
00:55:54.740
Okay, because you have built robotics companies, you're one of the greatest roboticists in
link |
00:56:00.500
history, and that's not just in the space of ideas, we'll also probably talk about that,
link |
00:56:06.500
but in the actual building and execution of businesses that make robots that are useful
link |
00:56:13.220
for people and that actually work in the real world and make money.
link |
00:56:18.660
You also sometimes are critical of Mr. Elon Musk, or let's more specifically focus on
link |
00:56:24.420
this particular technology, which is autopilot inside Teslas.
link |
00:56:29.380
What are your thoughts about Tesla autopilot, or more generally vision based machine learning
link |
00:56:33.780
approach to semi autonomous driving?
link |
00:56:38.580
These are robots, they're being used in the real world by hundreds of thousands of people,
link |
00:56:43.700
and if you want to go there, I can go there, but that's not too much, which they're...
link |
00:56:49.940
Let's say they're on par safety-wise with humans currently, meaning human alone versus human
link |
00:56:57.220
plus robot.
link |
00:56:58.500
Okay, so first let me say I really like the car I came in here today.
link |
00:57:03.860
Which is?
link |
00:57:06.260
2021 model, Mercedes E450.
link |
00:57:12.740
I am impressed by the machine vision, sonar, other things.
link |
00:57:19.620
I'm impressed by what it can do.
link |
00:57:21.700
I'm really impressed with many aspects of it.
link |
00:57:29.540
It's able to stay in lane, is it?
link |
00:57:31.380
Oh yeah, it does the lane stuff.
link |
00:57:35.940
It's looking on either side of me, it's telling me about nearby cars.
link |
00:57:40.260
For blind spots and so on.
link |
00:57:41.540
Yeah, when I'm going in close to something in the park, I get this beautiful, gorgeous,
link |
00:57:48.100
top down view of the world.
link |
00:57:49.780
I am impressed up the wazoo of how registered and metrical that is.
link |
00:57:56.900
So it's like multiple cameras and it's all ready to go to produce the 360 view kind of
link |
00:58:00.900
thing?
link |
00:58:00.900
360 view, it's synthesized so it's above the car, and it is unbelievable.
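The synthesized overhead view being described can be sketched in miniature. This is a toy illustration, not the car's actual pipeline: each camera's view of the ground plane is remapped through a homography (a 3x3 projective transform) into one shared top-down frame, and the patches are then stitched. The matrix and pixel values below are made up for the example.

```python
# A made-up example homography; a real surround-view system calibrates one
# per camera against the ground plane.
H = [[1.0, 0.4, -20.0],
     [0.0, 2.0, -40.0],
     [0.0, 0.01, 1.0]]

def warp_point(H, x, y):
    """Map one pixel (x, y) through homography H using homogeneous coordinates."""
    xh = H[0][0] * x + H[0][1] * y + H[0][2]
    yh = H[1][0] * x + H[1][1] * y + H[1][2]
    w = H[2][0] * x + H[2][1] * y + H[2][2]
    return (xh / w, yh / w)  # divide out the projective scale

# Two pixels on the road surface as seen by one hypothetical camera:
for px in [(100.0, 200.0), (300.0, 200.0)]:
    print(warp_point(H, *px))
```

Doing this for every ground pixel of every camera, then blending the overlaps, is roughly what produces the registered, metrical top-down picture.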
link |
00:58:06.580
I got this car in January, it's the longest I've ever owned a car without dinging it.
link |
00:58:11.460
So it's better than me.
link |
00:58:13.540
Me and it together are better.
link |
00:58:15.940
So I'm not saying technology's bad or not useful, but here's my point.
link |
00:58:24.980
Yes, it's a replay of the same movie.
link |
00:58:31.380
Okay, so maybe you've seen me ask this question before.
link |
00:58:34.900
But when did the first car go over 55 miles an hour for over 10 miles on a public freeway
link |
00:58:54.100
with other traffic around driving completely autonomously?
link |
00:58:56.660
When did that happen?
link |
00:58:59.060
Was it CMU in the 80s or something?
link |
00:59:01.380
It was a long time ago.
link |
00:59:02.340
It was actually in 1987, in Munich, at the Bundeswehr University.
link |
00:59:09.380
So they had it running in 1987.
link |
00:59:12.500
When do you think, and Elon has said he's going to do this, when do you think we'll
link |
00:59:16.660
have the first car drive coast to coast in the US, hands off the wheel, feet off the
link |
00:59:23.780
pedals, coast to coast?
link |
00:59:25.940
As far as I know, a few people have claimed to do it.
link |
00:59:28.340
1995, that was Carnegie Mellon.
link |
00:59:30.660
I didn't know, but oh, that was the, they didn't claim, did they claim 100%?
link |
00:59:35.700
Not 100%, not 100%.
link |
00:59:37.540
And then there's a few marketing people who have claimed 100% since then.
link |
00:59:41.940
My point is that, you know, what I see happening again is someone sees a demo and they overgeneralize
link |
00:59:50.740
and say, we must be almost there.
link |
00:59:52.340
But we've been working on it for 35 years.
link |
00:59:54.900
So that's demos.
link |
00:59:56.180
But this is going to take us back to the same conversation with AlphaZero.
link |
00:59:59.540
Are you not, okay, I'll just say what I am because I thought, okay, when I first started
link |
01:00:06.100
interacting with the Mobileye implementation of Tesla Autopilot. I've driven a lot of cars,
link |
01:00:18.020
you know, I've been in Google self-driving cars since the beginning.
link |
01:00:18.020
Before I sat and used Mobileye, I thought, just knowing
link |
01:00:23.300
computer vision,
link |
01:00:24.100
I thought there's no way it could work as well as it was working.
link |
01:00:26.980
So my model of the limits of computer vision was way more limited than the actual implementation
link |
01:00:35.300
of Mobileye.
link |
01:00:35.940
So that's one example.
link |
01:00:37.860
I was really surprised.
link |
01:00:39.380
It's like, wow, that was incredible.
link |
01:00:41.700
The second surprise came when Tesla threw away Mobileye and started from scratch.
link |
01:00:50.580
I thought there's no way they can catch up to Mobileye.
link |
01:00:52.740
I thought what Mobileye was doing was kind of incredible, like the amount of work and
link |
01:00:56.260
the annotation.
link |
01:00:56.980
Yeah, well, Mobileye was started by Amnon Shashua and used a lot of traditional, you
link |
01:01:01.620
know, hard fought computer vision techniques.
link |
01:01:04.420
But they also did a lot of good, sort of, non-research stuff, like actual, just
link |
01:01:11.620
good, like what you do to make a successful product, right?
link |
01:01:14.420
Scale, all that kind of stuff.
link |
01:01:16.020
And so I was very surprised when they from scratch were able to catch up to that.
link |
01:01:20.660
That's very impressive.
link |
01:01:21.620
And I've talked to a lot of engineers that were involved.
link |
01:01:23.780
That was impressive.
link |
01:01:25.620
That was impressive.
link |
01:01:27.300
And the recent progress, especially under the involvement of Andrej Karpathy, what
link |
01:01:34.900
they're doing with the data engine, which is converting the driving task
link |
01:01:40.340
into these multiple tasks and then doing this edge-case discovery, where they're pulling back,
link |
01:01:45.140
like, the level of engineering made me rethink what's possible.
link |
01:01:49.940
I still, you know, I don't know to that intensity, but I always thought it was
link |
01:01:55.380
very difficult to solve autonomous driving with all the sensors, with all the computation.
link |
01:02:00.260
I just thought it's a very difficult problem.
link |
01:02:02.420
But I've been continuously surprised how much you can engineer.
link |
01:02:07.860
First of all, the data acquisition problem, because I thought, you know, just because
link |
01:02:12.100
I worked with a lot of car companies, and they're a little bit old school,
link |
01:02:20.180
to where I didn't think they could do this at scale, like AWS-style data collection.
link |
01:02:25.940
So when Tesla was able to do that, I started to think, OK, so what are the limits of this?
link |
01:02:33.140
I still believe that driver sensing and the interaction with the driver, and, like, studying
link |
01:02:40.980
the human-factors psychology problem, is essential.
link |
01:02:43.700
It's always going to be there.
link |
01:02:45.460
It's always going to be there, even with fully autonomous driving.
link |
01:02:48.740
But I've been surprised at what the limit is, especially vision-based alone, how far that
link |
01:02:55.220
can take us.
link |
01:02:57.060
So that's my levels of surprise now.
link |
01:03:00.900
OK, can you explain, in the same way you said, like, AlphaZero, that's a homework problem
link |
01:03:07.380
that's scaled large, it's chess, like, who cares,
link |
01:03:10.260
or Go, whereas here's actual people using an actual car and driving.
link |
01:03:15.380
Many of them drive more than half their miles using the system.
link |
01:03:19.380
Right.
link |
01:03:20.420
So, yeah, they're doing well with pure vision. Pure vision.
link |
01:03:24.980
Yeah.
link |
01:03:25.480
And, you know, now no radar, which, I suspect, can't go all the way.
link |
01:03:30.820
And one reason is, without new cameras that have a dynamic range closer to the human
link |
01:03:36.340
eye, because the human eye has incredible dynamic range.
link |
01:03:39.300
And we make use of that dynamic range. It's 11 orders of magnitude or some crazy number
link |
01:03:46.500
like that.
link |
01:03:47.700
The cameras don't have that, which is why you see the bad cases where the sun
link |
01:03:53.140
shines on a white thing and it blinds it in a way it wouldn't blind a person.
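The dynamic-range gap can be put in rough numbers. The 11 orders of magnitude is from the conversation; the 12-stop figure for a single-exposure sensor is an assumed ballpark for illustration only:

```python
import math

def orders_to_stops(orders_of_magnitude):
    """Convert a luminance range given in powers of ten to photographic stops
    (one stop is a factor of two in light)."""
    return orders_of_magnitude * math.log2(10)

# ~11 orders of magnitude for the adapted human visual system (from the
# conversation); ~12 stops for a typical 12-bit single exposure (assumed).
eye_stops = orders_to_stops(11)
sensor_stops = 12

print(f"human eye, with adaptation: ~{eye_stops:.1f} stops")
print(f"camera, single exposure:   ~{sensor_stops} stops")
```

On those assumed figures the eye spans roughly three times as many stops as one camera exposure, which is the kind of gap that makes low sun on a white surface saturate a sensor.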
link |
01:03:59.860
I think there's a bunch of things to think about before you say this is so good, it's
link |
01:04:06.020
just going to work.
link |
01:04:06.660
OK, and I'll come at it from multiple angles.
link |
01:04:12.180
And I know you've got a lot of time.
link |
01:04:13.700
Yeah.
link |
01:04:14.420
OK, let's see. I have thought about these things.
link |
01:04:17.220
Yeah, I know.
link |
01:04:18.740
You've been writing a lot of great blog posts about it for a while before Tesla had autopilot.
link |
01:04:24.980
Right.
link |
01:04:25.480
So you've been thinking about autonomous driving for a while from every angle.
link |
01:04:29.220
So, a few things. You know, in the US, I think that the
link |
01:04:36.020
death rate from motor vehicle accidents is about thirty-five thousand a year,
link |
01:04:44.900
which is an outrageous number, though not outrageous compared to COVID deaths.
link |
01:04:49.140
But, you know, there is no rationality.
link |
01:04:52.100
And that's part of the thing people have said.
link |
01:04:54.340
Engineers say to me, well, if we cut down the number of deaths by 10 percent by having
link |
01:04:58.900
autonomous driving, that's going to be great.
link |
01:05:01.300
Everyone will love it.
link |
01:05:02.100
And my prediction is that if autonomous vehicles kill more than 10 people a year, there'll be
link |
01:05:09.620
screaming and hollering, even though thirty-five thousand people a year have been killed
link |
01:05:14.260
by human drivers.
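The asymmetry can be made concrete with a back-of-envelope calculation. Only the 35,000 deaths per year and the 10-deaths tolerance come from the conversation; the total miles driven and the autonomous share are assumed ballpark figures:

```python
# Assumed ballpark figures, not from the conversation:
US_MILES_PER_YEAR = 3.2e12   # annual US vehicle-miles traveled (assumed)
AUTONOMOUS_SHARE = 0.01      # fraction of miles driven autonomously (assumed)

HUMAN_DEATHS_PER_YEAR = 35_000   # from the conversation
TOLERATED_AV_DEATHS = 10         # hypothetical public-tolerance threshold

def deaths_per_100m_miles(deaths, miles):
    """Normalize a death count to the standard per-100-million-miles rate."""
    return deaths / (miles / 1e8)

human_rate = deaths_per_100m_miles(HUMAN_DEATHS_PER_YEAR, US_MILES_PER_YEAR)
tolerated_rate = deaths_per_100m_miles(
    TOLERATED_AV_DEATHS, US_MILES_PER_YEAR * AUTONOMOUS_SHARE
)

print(f"human drivers:        ~{human_rate:.2f} deaths per 100M miles")
print(f"tolerated autonomous: ~{tolerated_rate:.3f} deaths per 100M miles")
```

Under these assumptions, the tolerated autonomous rate would be dozens of times stricter than the rate we already accept from human drivers, which is the irrationality being described.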
link |
01:05:16.260
It's not rational.
link |
01:05:17.860
It's a different set of expectations.
link |
01:05:20.100
And that will probably continue.
link |
01:05:23.860
So there's that aspect of it.
link |
01:05:25.300
The other aspect of it is that when we introduce new technology, we often change the rules
link |
01:05:34.420
of the game.
link |
01:05:36.020
So when we introduced cars first into our daily lives, we completely rebuilt our cities
link |
01:05:45.060
and we changed all the laws.
link |
01:05:46.900
Yeah, jaywalking was not an offense before; that was pushed by the car companies so that people
link |
01:05:52.820
would stay off the road, so there wouldn't be deaths from pedestrians getting hit.
link |
01:05:57.460
We completely changed the structure of our cities and had these foul smelling things
link |
01:06:02.580
everywhere around us.
link |
01:06:04.580
And now you see pushback in cities like Barcelona is really trying to exclude cars, et cetera.
link |
01:06:11.060
So I think that to get to self-driving, to large adoption, it's not going to be
link |
01:06:21.460
just take the current situation, take out the driver, and have the same car doing the
link |
01:06:27.300
same stuff, because the edge cases are too many.
link |
01:06:31.860
Here's an interesting question.
link |
01:06:33.300
How many fully autonomous train systems do we have in the U.S.?
link |
01:06:41.860
I mean, do you count them as fully autonomous?
link |
01:06:43.860
I don't know, because there's usually a driver, but they're kind of autonomous, right?
link |
01:06:47.860
No, let's get rid of the driver.
link |
01:06:51.380
Okay.
link |
01:06:51.860
I don't know.
link |
01:06:52.820
It's either 15 or 16.
link |
01:06:54.660
Most of them are in airports.
link |
01:06:56.900
There's a few that are fully autonomous.
link |
01:06:59.860
Seven are in airports; there's two that go about five kilometers
link |
01:07:06.260
out of airports.
link |
01:07:11.460
When is the first fully autonomous train system for mass transit expected to operate fully
link |
01:07:17.460
autonomously, with no driver, in a U.S.
link |
01:07:22.420
city?
link |
01:07:23.540
It's expected to operate in 2017 in Honolulu.
link |
01:07:27.940
Oh, wow.
link |
01:07:29.300
It's delayed, but they will get there.
link |
01:07:32.020
BART, by the way, was originally going to be autonomous here in the Bay Area.
link |
01:07:35.780
I mean, they're all very close to fully autonomous, right?
link |
01:07:38.820
Yeah, but getting that close is the thing.
link |
01:07:41.540
And I've often gone on a fully autonomous train in Japan, one that goes out to that
link |
01:07:48.660
fake island in the middle of Tokyo Bay.
link |
01:07:50.660
I forget the name of that.
link |
01:07:53.460
And what do you see when you look at that?
link |
01:07:55.540
What do you see when you go to a fully autonomous train in an airport?
link |
01:08:03.380
It's not like regular trains.
link |
01:08:07.060
At every station, there's a double set of doors, so that there's a door on the train
link |
01:08:12.100
and there's a door on the platform.
link |
01:08:18.020
And this is really visible in this Japanese one because it goes out in amongst buildings.
link |
01:08:23.540
The whole track is built so that people can't climb onto it.
link |
01:08:27.060
Yeah.
link |
01:08:27.860
So there's an engineering that then makes the system safe and makes them acceptable.
link |
01:08:32.260
I think we'll see similar sorts of things happen in the U.S.
link |
01:08:37.620
What surprised me, I thought, wrongly, that we would have special purpose lanes on 101
link |
01:08:46.180
in the Bay Area, the leftmost lane, so that it would be normal for Teslas or other cars
link |
01:08:55.140
to move into that lane and then say, okay, now it's autonomous and have that dedicated lane.
link |
01:09:00.900
I was expecting movement to that.
link |
01:09:03.380
Five years ago, I was expecting we'd have a lot more movement towards that.
link |
01:09:06.500
We haven't.
link |
01:09:07.460
And it may be because Tesla's been overpromising by saying this, calling their system fully
link |
01:09:12.820
self driving. I think they might have gotten there quicker by collaborating to change the
link |
01:09:21.780
infrastructure.
link |
01:09:23.460
This is one of the problems with long haul trucking being autonomous.
link |
01:09:30.180
I think it makes sense on freeways at night for the trucks to go autonomously, but then
link |
01:09:38.020
how do you get onto and off of the freeway?
link |
01:09:40.260
What sort of infrastructure do you need for that?
link |
01:09:43.780
Do you need to have the human in there to do that or can you get rid of the human?
link |
01:09:48.500
So I think there's ways to get there, but it's an infrastructure argument because the
link |
01:09:55.060
long tail of cases is very long and the acceptance of it will not be at the same level as human
link |
01:10:02.020
drivers.
link |
01:10:02.580
So I'm with you still, and I was with you for a long time, but I am surprised by
link |
01:10:09.780
how many edge cases machine learning and vision-based methods can cover.
link |
01:10:15.540
This is what I'm trying to get at is I think there's something fundamentally different
link |
01:10:22.260
with vision based methods and Tesla Autopilot and any company that's trying to do the same.
link |
01:10:27.460
Okay, well, I'm not going to argue with you because, you know, we're speculating.
link |
01:10:34.260
Yes, but, you know, my gut feeling tells me that things will speed up when
link |
01:10:43.620
there is engineering of the environment because that's what happened with every other technology.
link |
01:10:48.260
I don't know about you, but I'm a bit cynical that infrastructure is going
link |
01:10:53.940
to rely on government to help out in these cases.
link |
01:11:00.340
If you just look at infrastructure in all domains, it's just that government always drags
link |
01:11:05.540
behind on infrastructure.
link |
01:11:07.540
There's, like, there's so many... just, well, in this country, in the future.
link |
01:11:11.780
Sorry.
link |
01:11:12.260
Yes, in this country.
link |
01:11:13.700
And of course, there's many, many countries that are actually much worse on infrastructure.
link |
01:11:17.780
Oh, yes, many are much worse, and there's some that are much better.
link |
01:11:21.220
You know, like high speed rail, the other countries are much better.
link |
01:11:25.940
I guess my question, which is at the core of what I was trying to think through
link |
01:11:31.220
here and ask, is, like: how hard is the driving problem as it currently stands?
link |
01:11:37.540
So you mentioned, like, we don't want to just take the human out and duplicate whatever
link |
01:11:41.220
the human was doing.
link |
01:11:42.260
But if we were to try to do that, how hard is that problem?
link |
01:11:48.340
Because I used to think it's way harder.
link |
01:11:52.420
Like, I used to think, with vision alone, it would be three decades, four decades.
link |
01:11:59.220
Okay, so I don't know the answer to this thing I'm about to pose, but I do notice that on
link |
01:12:06.740
Highway 280 here in the Bay Area, which largely has concrete surface rather than blacktop
link |
01:12:13.380
surface, the white lines that are painted there now have black boundaries around them.
link |
01:12:20.900
And my lane drift system in my car would not work without those black boundaries.
link |
01:12:27.460
Interesting.
link |
01:12:28.260
So I don't know whether they started doing it to help the lane drift, whether it is an
link |
01:12:32.420
instance of infrastructure following the technology, but my car's lane keeping
link |
01:12:41.220
would not perform as well without that change in the way they paint the lines.
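Why the painted black borders help a gradient-based lane tracker can be shown with a toy scanline. The intensity values are assumed for illustration, not measurements from 280:

```python
def max_edge_step(scanline):
    """Largest absolute intensity jump along a 1-D road-surface scanline,
    which is roughly what a gradient-based lane detector keys on."""
    return max(abs(b - a) for a, b in zip(scanline, scanline[1:]))

# Toy 8-bit intensities (assumed): light concrete, white paint, black border.
CONCRETE, WHITE, BLACK = 190, 250, 30

plain = [CONCRETE] * 5 + [WHITE] * 3 + [CONCRETE] * 5
bordered = [CONCRETE] * 4 + [BLACK] + [WHITE] * 3 + [BLACK] + [CONCRETE] * 4

print(max_edge_step(plain))     # weak edge: white paint on light concrete
print(max_edge_step(bordered))  # strong edge: the black border restores contrast
```

On these assumed values the black border more than triples the strongest edge, which is consistent with the observation that the lane-drift system depends on it.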
link |
01:12:45.460
Unfortunately, really good lane keeping is not as valuable.
link |
01:12:50.340
Like, it's orders of magnitude more valuable to have a fully autonomous system.
link |
01:12:54.900
Like, yeah, but for me, lane keeping is really helpful because I'm more healthy at it.
link |
01:13:00.900
But you wouldn't pay 10 times.
link |
01:13:03.700
Like, the problem is, there's no financial... like, it doesn't make sense to revamp the
link |
01:13:11.540
infrastructure to make lane keeping easier.
link |
01:13:14.820
It does make sense to revamp the infrastructure.
link |
01:13:17.300
If you have a large fleet of autonomous vehicles, now you change what it means to own cars,
link |
01:13:22.260
you change the nature of transportation.
link |
01:13:24.980
But for that, you need autonomous vehicles.
link |
01:13:29.620
Let me ask you about Waymo then.
link |
01:13:31.540
I've gotten a bunch of chances to ride in a Waymo self driving car.
link |
01:13:37.380
And they're, I don't know if you'd call them self driving, but.
link |
01:13:40.980
Well, I mean, I rode in one before they were called Waymo, when it was still at X.
link |
01:13:45.780
So currently, there's a big leap, another surprising leap I didn't think would
link |
01:13:50.740
happen, which is they have no driver currently.
link |
01:13:53.780
Yeah, in Chandler.
link |
01:13:55.060
In Chandler, Arizona.
link |
01:13:56.100
And I think they're thinking of doing that in Austin as well.
link |
01:13:58.980
But they're expanding.
link |
01:14:01.540
Although, you know, and I do an annual checkup on this.
link |
01:14:06.100
So as of late last year, they were aiming for hundreds of rides a week, not thousands.
link |
01:14:14.020
And there is no one in the car, but there's certainly safety people in the loop.
link |
01:14:22.660
And it's not clear how many, you know, what the ratio of cars to safety people is.
link |
01:14:26.820
Obviously, they're not 100% transparent about this.
link |
01:14:31.620
None of them are 100% transparent.
link |
01:14:33.220
They're very untransparent.
link |
01:14:34.420
But at least, the way they're... I don't want to say definitively, but they're saying
link |
01:14:39.540
there's no teleoperation.
link |
01:14:42.580
So like, they're, I mean, okay.
link |
01:14:45.620
And that sort of fits with YouTube videos I've seen of people being trapped in the car
link |
01:14:52.820
by a red cone on the street.
link |
01:14:55.460
And they do have rescue vehicles that come, and then a person gets in and drives it.
link |
01:15:01.620
Yeah.
link |
01:15:02.580
But isn't it incredible to you, it was to me, to get in a car with no driver and watch
link |
01:15:09.700
the steering wheel turn, like for somebody who has been studying, at least certainly
link |
01:15:15.060
the human side of autonomous vehicles for many years, and you've been doing it for way
link |
01:15:18.980
longer, like, it was incredible to me that this actually could happen.
link |
01:15:22.420
I don't care if that scale is 100 cars.
link |
01:15:24.100
This is not a demo.
link |
01:15:25.860
This is not, this is me as a regular human.
link |
01:15:28.820
The argument I have is that people make interpolations from that.
link |
01:15:33.060
Interpolations.
link |
01:15:33.940
That, you know, it's here, it's done.
link |
01:15:37.060
You know, it's just, you know, we've solved it.
link |
01:15:39.380
No, we haven't yet.
link |
01:15:40.980
And that's my argument.
link |
01:15:42.500
Okay.
link |
01:15:42.900
So I'd like to go to, you keep a list of predictions on your amazing blog post.
link |
01:15:48.420
It'd be fun to go through them.
link |
01:15:49.700
But before then, let me ask you about this.
link |
01:15:51.620
You have a harshness to you sometimes in your criticisms of what is perceived as hype.
link |
01:16:05.940
And so like, because people extrapolate, like you said, and they kind of buy into the hype
link |
01:16:10.980
and then they kind of start to think that the technology is way better than it is.
link |
01:16:18.900
But let me ask you maybe a difficult question.
link |
01:16:22.260
Sure.
link |
01:16:23.780
Do you think if you look at history of progress, don't you think to achieve the quote impossible,
link |
01:16:30.740
you have to believe that it's possible?
link |
01:16:32.740
Oh, absolutely.
link |
01:16:34.260
Yeah.
link |
01:16:34.820
Look, here's two great ones. Great, unbelievable: 1903, the first, you know,
link |
01:16:46.980
human heavier-than-air powered flight.
link |
01:16:49.300
Yeah.
link |
01:16:50.580
1969, we land on the moon.
link |
01:16:52.740
That's 66 years.
link |
01:16:53.940
I'm 66 years old. In my lifetime, that span of my lifetime, barely, you know, flying went from,
link |
01:17:00.260
I don't know what it was, 50 feet, the length of the first flight or something, to landing
link |
01:17:05.380
on the moon.
link |
01:17:06.340
Unbelievable.
link |
01:17:08.100
Fantastic.
link |
01:17:08.980
But that requires belief. By the way, one of the Wright brothers, well, both of them, but one of
link |
01:17:13.060
them didn't believe it was even possible, like, a year before.
link |
01:17:16.180
Right.
link |
01:17:16.680
So, like, not just possible soon, but like ever.
link |
01:17:20.420
So, you know.
link |
01:17:21.940
How important is it to believe and be optimistic, is what I guess I'm asking.
link |
01:17:24.820
Oh, yeah, it is important.
link |
01:17:26.100
It's when it goes crazy. When, you know, you said that... what was the word you used
link |
01:17:32.100
for my...?
link |
01:17:33.060
Harshness.
link |
01:17:33.780
Harshness.
link |
01:17:34.580
Yes.
link |
01:17:40.180
I just get so frustrated.
link |
01:17:41.940
Yes.
link |
01:17:42.440
When people make these leaps and tell me that I don't understand, you know, yeah.
link |
01:17:53.020
Just from iRobot, which I was a cofounder of.
link |
01:17:57.420
Yeah.
link |
01:17:57.740
I don't know the exact numbers now because I haven't, it's 10 years since I stepped
link |
01:18:00.860
off the board, but I believe it's well over 30 million robots cleaning houses from that
link |
01:18:06.220
one company.
link |
01:18:06.780
And now there's lots of other companies.
link |
01:18:08.140
Yes.
link |
01:18:08.140
Was that a crazy idea that we had to believe in 2002 when we released it?
link |
01:18:14.940
Yeah, that was... we had to, you know, believe that it could be done.
link |
01:18:20.540
Let me ask you about this.
link |
01:18:21.740
So iRobot, one of the greatest robotics companies ever in terms of creating a robot that actually
link |
01:18:28.380
works in the real world, probably the greatest robotics company ever.
link |
01:18:31.900
You were the co founder of it.
link |
01:18:33.660
If the Rodney Brooks of today talked to the Rodney of back then, what would you tell
link |
01:18:40.860
him?
link |
01:18:41.340
Because I have a sense... would you pat him on the back and say, well, what you're doing is
link |
01:18:47.100
going to fail, but go at it anyway?
link |
01:18:50.780
That's what I'm referring to with the harshness.
link |
01:18:54.060
You've accomplished an incredible thing there.
link |
01:18:56.700
One of the several things we'll talk about. You know, you've
link |
01:19:01.500
done several things we'll talk about.
link |
01:19:03.820
Well, like that's what I'm trying to get at that line.
link |
01:19:06.940
No, it's... my harshness is reserved for people who are not doing it, who claim
link |
01:19:14.140
it's just, well, this shows that it's just going to happen.
link |
01:19:16.860
But here, here's the thing.
link |
01:19:18.300
This shows.
link |
01:19:19.020
But you have that harshness for Elon too.
link |
01:19:24.060
And no, no, it's a different harshness.
link |
01:19:26.380
No, it's a different argument with Elon.
link |
01:19:30.540
I think SpaceX is an amazing company.
link |
01:19:34.780
On the other hand, you know, in one of my blog posts, I said, what's easy and what's
link |
01:19:40.060
hard.
link |
01:19:40.460
I said, yeah, SpaceX, vertical-landing rockets.
link |
01:19:44.300
It had been done before.
link |
01:19:46.380
Grid fins had been done since the sixties.
link |
01:19:48.700
Every Soyuz has them.
link |
01:19:52.780
Reusable spacecraft? The DC-X reused those rockets that landed vertically.
link |
01:19:58.220
There's a whole insurance industry in place for rocket launches.
link |
01:20:02.780
There was all sorts of infrastructure in place. It was doable.
link |
01:20:07.980
It took a great entrepreneur, at great personal expense.
link |
01:20:11.980
He almost drove himself, you know, bankrupt doing it, and it took a great belief to do it.
link |
01:20:18.860
Whereas Hyperloop, there's a whole bunch more stuff that's never been thought about and
link |
01:20:25.740
never been demonstrated.
link |
01:20:28.380
So my estimation is Hyperloop is a long, long way off, a lot further off.
link |
01:20:33.660
But if I've got a criticism of Elon, it's that he doesn't make distinctions
link |
01:20:39.740
between when the technology's coming along and ready.
link |
01:20:44.780
And then he'll go off and mouth off about other things, which then people go and compete
link |
01:20:50.140
about and try and do.
link |
01:20:51.100
And so this is where... I understand what you're saying.
link |
01:20:57.580
I tend to draw a different distinction.
link |
01:21:00.060
I have a similar kind of harshness towards people who are not telling the truth, who
link |
01:21:06.220
are basically fabricating stuff to make money or to... Well, he believes what he says.
link |
01:21:11.420
I just think that's a very important difference because I think in order to fly, in order
link |
01:21:18.300
to get to the moon, you have to believe even when most people tell you you're wrong and
link |
01:21:24.060
most likely you're wrong, but sometimes you're right.
link |
01:21:26.940
I mean, that's the same thing I have with Tesla autopilot.
link |
01:21:29.900
I think that's an interesting one.
link |
01:21:31.900
Especially when I was at MIT, just the entire human-factors and robotics community
link |
01:21:38.780
was very negative towards Elon.
link |
01:21:40.300
It was very interesting for me to observe colleagues at MIT.
link |
01:21:45.020
I wasn't sure what to make of that.
link |
01:21:46.620
That was very upsetting to me because I understood where that's coming from.
link |
01:21:51.900
And I agreed with them and I kind of almost felt the same thing in the beginning until
link |
01:21:56.300
I kind of opened my eyes and realized there's a lot of interesting ideas here that might
link |
01:22:01.660
be overhyped.
link |
01:22:02.540
You know, if you focus yourself on the idea that you shouldn't call a system full self
link |
01:22:09.740
driving when it's obviously not autonomous, fully autonomous, you're going to miss the
link |
01:22:16.220
magic.
link |
01:22:16.860
Oh, yeah, you are going to miss the magic.
link |
01:22:18.940
But at the same time, there are people who buy it, literally pay money for it and take
link |
01:22:25.340
those words as given.
link |
01:22:27.180
So... but I haven't.
link |
01:22:30.300
So, that people take the words as given is one thing.
link |
01:22:33.420
I haven't actually seen people that use autopilot that believe that. The behavior is really important,
link |
01:22:39.500
like the actual action.
link |
01:22:40.700
So like, this is to push back on the very thing that you're frustrated about, which
link |
01:22:45.740
is like journalists and general people buying all the hype and going out in the same way.
link |
01:22:52.460
I think there's a lot of hype about the negatives of this, too, that people are buying without
link |
01:22:57.980
using it. The way people use it...
link |
01:23:01.020
this is what opened my eyes.
link |
01:23:02.060
Actually, the way people use a product is very different than the way they talk about
link |
01:23:07.580
it.
link |
01:23:07.820
This is true with robotics, with everything.
link |
01:23:09.500
Everybody has dreams of how a particular product might be used or so on.
link |
01:23:13.660
And then when it meets reality, there's a lot of fear of robotics, for example, that
link |
01:23:17.980
robots are somehow dangerous and all those kinds of things.
link |
01:23:20.380
But when you actually have robots in your life, whether it's in the factory or in the
link |
01:23:23.980
home, making your life better, that's going to be way different.
link |
01:23:28.300
Your perceptions of it are going to be way different.
link |
01:23:30.460
And so my tension was just like, here's an innovator.
link |
01:23:34.780
Super Cruise from Cadillac was super interesting, too.
link |
01:23:41.500
That's a really interesting system.
link |
01:23:43.020
We should be excited by those innovations.
link |
01:23:45.580
OK, so can I tell you something that's really annoyed me recently?
link |
01:23:49.020
It's really annoyed me that the press and friends of mine on Facebook are going, these
link |
01:23:56.380
billionaires and their space games, why are they doing that?
link |
01:23:59.740
And that really, really pisses me off.
link |
01:24:02.300
I must say, I applaud that.
link |
01:24:05.100
I applaud it.
link |
01:24:06.780
It's the take, and not necessarily the people who are doing the things. But, you know,
link |
01:24:13.180
I keep having to push back against unrealistic expectations of when these things can become
link |
01:24:19.740
real.
link |
01:24:20.300
Yeah, this was interesting, because a particular focus for me has been autonomous
link |
01:24:26.220
driving: Elon's predictions of when certain milestones will be hit.
link |
01:24:30.140
There's several things to be said there that I always thought about, because whenever
link |
01:24:37.660
he said them, to me, as a person kind of not inside
link |
01:24:44.860
the system, it was obvious
link |
01:24:46.940
it's unlikely to hit those.
link |
01:24:48.700
There's two comments I want to make.
link |
01:24:50.700
One, he legitimately believes it.
link |
01:24:54.220
And two, much more importantly, I think that having ambitious deadlines drives people to
link |
01:25:04.140
do the best work of their life, even when the odds of those deadlines are very low.
link |
01:25:09.420
To a point, and I'm not talking about anyone here, I'm just saying.
link |
01:25:12.780
So there's a line there, right?
link |
01:25:14.220
You have to have a line, because if you overextend, it's demoralizing.
link |
01:25:20.140
It's demoralizing, but I will say that there's an additional thing here that those words
link |
01:25:28.860
also drive the stock market.
link |
01:25:34.140
And because of the way that rich people in the past have manipulated the rubes through
link |
01:25:42.060
investment, we have developed laws about what you're allowed to say.
link |
01:25:49.260
And you know, there's an area here, which is, maybe I'm naive, but I tend to
link |
01:25:58.380
believe that engineers, innovators, people like that, they don't
link |
01:26:06.620
think like that, like manipulating the stock price.
link |
01:26:09.500
But it's certainly possible that I'm wrong.
link |
01:26:13.980
It's a very cynical view of the world, because I think most people that run companies, especially
link |
01:26:21.820
original founders, they... Yeah, I'm not saying that's the intent.
link |
01:26:27.260
I'm saying eventually you kind of fall into that kind of behavior
link |
01:26:33.340
pattern.
link |
01:26:33.340
I don't know.
link |
01:26:33.900
I wasn't saying it's falling into that intent.
link |
01:26:37.980
It's just you also have to protect investors in this environment.
link |
01:26:43.740
In this market.
link |
01:26:44.620
Yeah.
link |
01:26:45.580
OK, so first of all, you have an amazing blog that people should check out.
link |
01:26:50.060
But you also have, in that blog, a set of predictions.
link |
01:26:54.780
Such a cool idea.
link |
01:26:55.740
I don't know how long ago you started, like three, four years ago.
link |
01:26:58.220
It was January 1st, 2018.
link |
01:27:01.820
18.
link |
01:27:02.940
And I made these predictions and I said that every January 1st, I was going to check back
link |
01:27:07.740
on how my predictions were doing.
link |
01:27:09.020
That's such a great thought experiment.
link |
01:27:10.220
For 32 years.
link |
01:27:11.900
Oh, you said 32 years.
link |
01:27:13.340
I said 32 years because that'll be January 1st, 2050.
link |
01:27:16.940
I'll be, I will have just turned ninety-
link |
01:27:21.660
five. You know, and so people know that your predictions, at least for now, are in the
link |
01:27:31.180
space of artificial intelligence.
link |
01:27:33.180
Yeah, I didn't say I was going to make new predictions.
link |
01:27:34.860
I was just going to measure this set of predictions that I made, because I was sort
link |
01:27:38.380
of annoyed that everyone could make predictions
link |
01:27:40.620
that didn't come true and everyone forgot.
link |
01:27:42.460
So I should hold myself to a high standard.
link |
01:27:44.860
Yeah, but also just putting years and like date ranges on things.
link |
01:27:48.700
It's a good thought exercise.
link |
01:27:50.140
Yeah, and reasoning your thoughts out.
link |
01:27:52.940
And so the topics are artificial intelligence, autonomous vehicles and space.
link |
01:27:58.300
Yeah.
link |
01:28:00.940
I was wondering if we could just go through some that stand out maybe from memory.
link |
01:28:04.700
I can just mention to you some.
link |
01:28:06.140
Let's talk about self-driving cars, some predictions that you're particularly proud
link |
01:28:10.780
of or that are particularly interesting, from flying cars to the other element here, which is how
link |
01:28:20.220
widespread the deployment of autonomous vehicles is, and where.
link |
01:28:25.900
And there's also just a few fun ones.
link |
01:28:27.580
Is there something that jumps to mind that you remember from the predictions?
link |
01:28:31.980
Well, I think I did put in there that there would be a dedicated self driving lane on
link |
01:28:37.500
101 by some year, and I think I was over optimistic on that one.
link |
01:28:42.380
Yeah, actually.
link |
01:28:42.860
Yeah, I actually do remember that.
link |
01:28:44.140
But I think you were mentioning difficulties in different cities.
link |
01:28:48.620
Yeah.
link |
01:28:50.460
Cambridge, Massachusetts, I think was an example.
link |
01:28:52.460
Yeah, like in Cambridgeport. You know, I lived in Cambridgeport for a number of years,
link |
01:28:56.860
and you know, the roads are narrow and getting anywhere as a human driver is incredibly
link |
01:29:02.780
frustrating, and people drive the wrong way on one-way streets there.
link |
01:29:07.660
Your prediction was driverless taxi services operating on all streets in
link |
01:29:14.860
Cambridgeport, Massachusetts, in 2035.
link |
01:29:21.100
Yeah.
link |
01:29:21.740
And that may have been too optimistic.
link |
01:29:25.020
You think so?
link |
01:29:26.060
You know, I've gotten a little more pessimistic since I made these internally on some of these
link |
01:29:31.020
things.
link |
01:29:31.500
So can you put a year to a major milestone of deployment of a taxi service in a few
link |
01:29:42.780
major cities, something where you feel like autonomous vehicles are here?
link |
01:29:47.500
So let's take the grid streets of San Francisco north of Market.
link |
01:29:55.900
Okay.
link |
01:29:56.540
Okay.
link |
01:29:57.040
Relatively benign environment, the streets are wide, the major problem is delivery trucks
link |
01:30:07.040
stopping everywhere, which makes things more complicated.
link |
01:30:12.880
A taxi system there, with somewhat designated pickups and drop-offs, unlike with Uber and
link |
01:30:21.280
Lyft, where you can sort of get to any place and the drivers will figure out how to get
link |
01:30:28.160
in there.
link |
01:30:30.720
We're still a few years away.
link |
01:30:32.080
I, you know, I live in that area.
link |
01:30:35.200
So I see, you know, the self-driving car companies' cars, multiple ones, every day
link |
01:30:42.240
now: Cruise, Zoox less often, Waymo all the time, and different ones
link |
01:30:52.480
come and go.
link |
01:30:53.440
And there's always a driver.
link |
01:30:55.520
There's always a driver at the moment, although I have noticed that sometimes the driver does
link |
01:31:02.240
not have the authority to take over without talking to the home office, because they will
link |
01:31:08.000
sit there waiting for a long time, and clearly something's going on where the home office
link |
01:31:14.640
is making a decision.
link |
01:31:16.960
So, you know, you can see whether they've got their hands on the
link |
01:31:21.600
wheel or not.
link |
01:31:22.400
And it's the incident resolution time that gives you some clues.
link |
01:31:28.240
So what year do you think, what's your intuition?
link |
01:31:30.720
What date range are you currently thinking San Francisco would be?
link |
01:31:34.880
Autonomous taxi service from any point
link |
01:31:42.960
A to any point B without a driver?
link |
01:31:47.760
Are you still, are you thinking 10 years from now, 20 years from now, 30 years from now?
link |
01:31:53.040
Certainly not 10 years from now.
link |
01:31:55.440
It's going to be longer.
link |
01:31:56.880
If you're allowed to go south of Market, way longer.
link |
01:31:59.520
Unless there's reengineering of roads.
link |
01:32:03.440
By the way, what's the biggest challenge?
link |
01:32:05.120
You mentioned a few.
link |
01:32:06.080
Is it, is it the delivery trucks?
link |
01:32:09.360
Is it the edge cases, the computer perception? Well, here's a case that I saw outside my
link |
01:32:15.040
house a few weeks ago, about 8pm on a Friday night, it was getting dark, it was before
link |
01:32:20.560
the solstice.
link |
01:32:23.520
A Cruise vehicle came down the hill, turned right, and stopped dead, covering the
link |
01:32:32.080
crosswalk.
link |
01:32:33.600
Why did it stop dead?
link |
01:32:35.120
Because there was a human just two feet from it.
link |
01:32:38.480
Now, I just glanced, I knew what was happening.
link |
01:32:41.680
The human was a woman at the door of her car, trying to unlock it with one of those
link |
01:32:47.840
things you use, you know, when you don't have a key.
link |
01:32:50.480
That car thought, oh, she could jump out in front of me any second.
link |
01:32:55.520
As a human, I could tell, no, she's not going to jump out.
link |
01:32:57.760
She's busy trying to unlock her car.
link |
01:32:59.360
She's lost her keys.
link |
01:33:00.240
She's trying to get in the car.
link |
01:33:01.200
And it stayed there until I got bored.
link |
01:33:05.440
And so the human driver in there did not take over.
link |
01:33:11.600
But here's the kicker to me.
link |
01:33:14.080
A guy comes down the hill with a stroller, I assume there's a baby in there, and now
link |
01:33:22.720
the crosswalk's blocked by this cruise vehicle.
link |
01:33:25.760
What's he going to do?
link |
01:33:27.440
Cleverly, I think, he decided not to go in front of the car.
link |
01:33:30.800
But he had to go behind it.
link |
01:33:34.960
He had to get off the crosswalk, out into the intersection, to push his baby around
link |
01:33:39.360
this car, which was stopped there.
link |
01:33:41.200
And no human driver would have stopped there for that length of time.
link |
01:33:44.880
They would have got out of the way.
link |
01:33:46.880
And that's another one of my pet peeves, that safety is being compromised for individuals
link |
01:33:56.000
who didn't sign up for having this happen in their neighborhood.
link |
01:33:59.760
Now you can say that's an edge case, but...
link |
01:34:03.200
Yeah, well, in general I'm not a fan of anecdotal evidence; this is one of my
link |
01:34:13.040
biggest problems with the discussion of autonomous vehicles in general. People that criticize
link |
01:34:17.920
them or support them are using edge cases, using anecdotal evidence. But I got you.
link |
01:34:24.640
Your question is, when is it going to happen in San Francisco?
link |
01:34:26.800
I say not soon, but it's going to be one of them.
link |
01:34:29.040
But where it is going to happen is in limited domains, campuses of various sorts, gated
link |
01:34:38.640
communities where the other drivers are not arbitrary people.
link |
01:34:46.000
They're people who know about these things, they've been warned about them, and at velocities
link |
01:34:52.800
where it's always safe to stop dead.
link |
01:34:57.120
You can't do that on the freeway.
link |
01:34:58.720
That I think we're going to start to see, and they may not be shaped like current cars,
link |
01:35:06.160
they may be things like May Mobility has, and various companies have these.
link |
01:35:12.560
Yeah, I wonder if that's a compelling experience.
link |
01:35:14.400
To me, it's not just about automation, it's about creating a product that makes your...
link |
01:35:20.320
It's not just cheaper, but it's fun to ride.
link |
01:35:23.680
One of the least fun things is a car that stops and waits.
link |
01:35:29.600
There's something deeply frustrating for us humans about the rest of the world taking advantage
link |
01:35:34.400
of us as we wait.
link |
01:35:35.520
But think about not you as the customer, but someone who's in their 80s in a retirement
link |
01:35:47.520
village whose kids have said, you're not driving anymore, and this gives you the freedom to
link |
01:35:53.200
go to the market.
link |
01:35:54.240
That's a hugely beneficial thing, but it's a few orders of magnitude less impact
link |
01:35:59.840
on the world.
link |
01:36:00.800
It's just a few people in a small community using cars as opposed to the entirety of the
link |
01:36:05.760
world.
link |
01:36:07.920
I like that the first time that a car equipped with some version of a solution to the trolley
link |
01:36:13.600
problem is...
link |
01:36:14.800
What does NIML stand for?
link |
01:36:16.400
Not in my lifetime.
link |
01:36:17.040
Not in my lifetime.
link |
01:36:17.680
I define my lifetime as up to 2050.
link |
01:36:20.080
You know, I ask you, when have you had to decide which person shall I kill?
link |
01:36:29.360
No, you put the brakes on and you brake as hard as you can.
link |
01:36:31.760
You're not making that decision.
link |
01:36:35.360
I do think autonomous vehicles or semi autonomous vehicles do need to solve the whole pedestrian
link |
01:36:41.280
problem that has elements of the trolley problem within it, but it's not...
link |
01:36:45.520
Yeah, well, and I talk about it in one of the articles or blog posts that I wrote, and
link |
01:36:51.760
people have told me, one of my coworkers has told me he does this.
link |
01:36:56.480
He tortures autonomously driven vehicles and pedestrians will torture them.
link |
01:37:01.600
Now, once they realize that putting one foot off the curb makes the car think that they
link |
01:37:07.360
might walk into the road, teenagers will be doing that all the time.
link |
01:37:10.800
By the way, and this is a whole nother discussion, because my main interest
link |
01:37:15.440
with robotics is HRI, human-robot interaction.
link |
01:37:19.200
I believe that robots that interact with humans will have to push back.
link |
01:37:25.520
Like they can't just be bullied because that creates a very uncompelling experience for
link |
01:37:30.480
the humans.
link |
01:37:31.280
Yeah, well, you know, Waymo, before it was called Waymo, discovered that, you know, they
link |
01:37:35.600
had to do that at four way intersections.
link |
01:37:38.080
They had to nudge forward to give the cue that they were going to go, because otherwise
link |
01:37:42.800
the other drivers would just beat them all the time.
link |
01:37:46.400
So you cofounded iRobot, as we mentioned, one of the most successful robotics companies
link |
01:37:52.320
ever.
link |
01:37:53.520
What are you most proud of with that company and the approach you took to robotics?
link |
01:38:00.480
Well, there's something I'm quite proud of there, which may be a surprise, but, you know,
link |
01:38:07.840
I was still on the board when this happened, it was March 2011, and we sent robots to Japan
link |
01:38:17.280
and they were used to help shut down the Fukushima Daiichi nuclear power plant.
link |
01:38:27.520
I've been there since, I was there in 2014, and some of the robots were
link |
01:38:32.240
still there.
link |
01:38:33.120
I was proud that we were able to do that.
link |
01:38:35.600
Why were we able to do that?
link |
01:38:38.000
And, you know, people have said, well, you know, Japan is so good at robotics.
link |
01:38:42.960
It was because we had had about 6,500 robots deployed in Iraq and Afghanistan, teleoperated,
link |
01:38:51.600
but with intelligence, dealing with roadside bombs.
link |
01:38:56.480
So we had, it was at that time, nine years of in field experience with the robots in
link |
01:39:03.360
harsh conditions, whereas the Japanese robots, which were, you know, getting, this goes back
link |
01:39:09.200
to what annoys me so much, getting all the hype, look at that, look at that Honda robot,
link |
01:39:14.560
it can walk, wow, the future's here, couldn't do a thing because they weren't deployed,
link |
01:39:20.800
but we had deployed in really harsh conditions for a long time, and so we're able to do
link |
01:39:26.960
something very positive in a very bad situation.
link |
01:39:30.400
What about just the simple, and for people who don't know, one of the things that iRobot
link |
01:39:36.640
has created is the Roomba vacuum cleaner.
link |
01:39:42.320
What about the simple robot that is the Roomba, quote unquote simple, that's
link |
01:39:47.760
deployed in tens of millions of homes?
link |
01:39:53.200
What do you think about that?
link |
01:39:54.240
Well, I make the joke that I started out life as a pure mathematician and turned into a
link |
01:39:59.440
vacuum cleaner salesman, so if you're going to be an entrepreneur, be ready for, be ready
link |
01:40:05.440
to do anything. But, you know, there was a wacky lawsuit that I got
link |
01:40:15.040
deposed for not too many years ago, and I was the only one who had email from the
link |
01:40:20.800
1990s, no one in the company had it, so I went through my email, and it
link |
01:40:27.520
reminded me of, you know, the joy of what we were doing, and what was I doing?
link |
01:40:34.880
What was I doing at the time we were building the Roomba?
link |
01:40:41.920
One of the things was we had this, you know, incredibly tight budget because we wanted
link |
01:40:46.160
to put it on the shelves at $200.
link |
01:40:50.960
There was another home cleaning robot at the time, it was the Electrolux Trilobite, which
link |
01:40:59.120
sold for 2,000 euros, and to us that was not going to be a consumer product, so we had
link |
01:41:05.360
reason to believe that $200 was a price that people would buy at.
link |
01:41:10.480
That was our aim, but that meant, you know, that's on the shelf making a profit.
link |
01:41:19.120
That means the cost of goods has to be minimal, so I find all these emails of me going, you
link |
01:41:26.560
know, I'd be in Taipei for an MIT meeting, and I'd stay a few extra days and go down
link |
01:41:32.000
to Hsinchu and talk to these little tiny companies, lots of little tiny companies outside of TSMC,
link |
01:41:38.800
Taiwan Semiconductor Manufacturing Company, which let all these little companies be fabless.
link |
01:41:45.440
They didn't have to have their own fab so they could innovate, and they were building,
link |
01:41:51.760
their innovations were to build stripped-down 6502s, the 6502 was what was in an Apple I, get
link |
01:41:57.840
rid of half the silicon and still have it be viable, and I'd previously got some of
link |
01:42:03.600
those for some earlier failed products of iRobot, and that was in Hong Kong going to
link |
01:42:11.520
all these companies that built, you know, they weren't gaming in the current sense,
link |
01:42:16.800
there were these handheld games that you would play, or birthday cards, because we had about
link |
01:42:23.360
a 50 cent budget for computation, so I'm trekking from place to place looking at their chips,
link |
01:42:30.640
looking at what they'd removed, ah, their interrupt handling is too weak for a general
link |
01:42:38.320
purpose use, so I was going into deep technical detail, and then I found this one from a company called
link |
01:42:43.440
Winbond, which had, and I'd forgotten it had this much RAM, it had 512 bytes of RAM,
link |
01:42:50.000
and it was in our budget, and it had all the capabilities we needed.
link |
01:42:54.640
Yeah, and you were excited.
link |
01:42:57.200
Yeah, and I was reading all these emails going, Colin, I found this, so.
link |
01:43:02.400
Did you think, did you ever think that you guys could be so successful?
link |
01:43:07.200
Like, eventually this company would be so successful, could you possibly have imagined?
link |
01:43:12.240
No, we never did think that.
link |
01:43:13.760
We'd had 14 failed business models up to 2002, and then we had two winners the same year.
link |
01:43:19.200
No, and then, you know, I remember the board, because by this time we had some venture
link |
01:43:27.600
capital in, the board went along with us building some robots for, you know, aiming at the Christmas
link |
01:43:36.240
2002 market, and we went three times over what they authorized and built 70,000 of them,
link |
01:43:44.640
and sold them all in that first season, because we released on September 18th, and they were
link |
01:43:51.200
all sold by Christmas.
link |
01:43:52.560
So it was, so we were gutsy, but.
link |
01:43:57.040
But yeah, you didn't think this will take over the world.
link |
01:44:00.640
Well, this is, so a lot of amazing robotics companies have gone under over the past few
link |
01:44:09.040
decades.
link |
01:44:10.560
Why do you think it's so damn hard to run a successful robotics company?
link |
01:44:17.680
There's a few things.
link |
01:44:20.960
One is expectations of capabilities by the founders that are off base.
link |
01:44:29.680
The founders, not the consumer, the founders.
link |
01:44:31.600
Yeah, expectations of what can be delivered.
link |
01:44:34.000
Sure.
link |
01:44:34.500
Mispricing, and what a customer thinks is a valid price, is not rational, necessarily.
link |
01:44:42.180
Yeah.
link |
01:44:43.620
And expectations of customers, and just the sheer hardness of getting people to adopt a
link |
01:44:56.100
new technology.
link |
01:44:57.060
And I've suffered from all three of these, you know.
link |
01:44:59.700
I've had more failures than successes, in terms of companies.
link |
01:45:04.820
I've suffered from all three.
link |
01:45:07.860
So, do you think one day there will be a robotics company, and by robotics company, I mean, where
link |
01:45:18.580
your primary source of income is from robots, that will be a trillion plus dollar company?
link |
01:45:24.740
And if so, what would that company do?
link |
01:45:31.460
I can't, you know, because I'm still starting robot companies.
link |
01:45:35.300
Yeah.
link |
01:45:38.180
I'm not making any such predictions in my own mind.
link |
01:45:41.380
I'm not thinking about a trillion dollar company.
link |
01:45:43.140
And by the way, I don't think, you know, in the 90s, anyone was thinking that Apple would
link |
01:45:47.220
ever be a trillion dollar company.
link |
01:45:48.580
So, you know, whether these
link |
01:45:52.580
would be a trillion dollar company, these are very hard to predict.
link |
01:45:57.220
But, sorry to interrupt, but don't you, because I kind of have a vision in a small way, and
link |
01:46:03.460
it's a big vision in a small way, that I see that there would be robots in the home,
link |
01:46:10.180
at scale, like Roomba, but more.
link |
01:46:13.540
And that's trillion dollar.
link |
01:46:15.620
Right.
link |
01:46:16.120
And I think there's a real market pull for them because of the demographic inversion,
link |
01:46:22.100
you know, who's going to do all the stuff for the older people?
link |
01:46:26.180
There's too many, you know, I'm leading here.
link |
01:46:31.700
There's going to be too many of us.
link |
01:46:36.420
But we don't have capable enough robots to make that economic argument at this point.
link |
01:46:42.340
Do I expect that that will happen?
link |
01:46:44.180
Yes, I expect it will happen.
link |
01:46:45.380
But I got to tell you, we introduced the Roomba in 2002, and I stayed another
link |
01:46:50.580
nine years.
link |
01:46:51.780
We were always trying to find what the next home robot would be, and still today, the
link |
01:46:57.700
primary product, almost 20 years later, 19 years later, the primary product
link |
01:47:02.660
is still the Roomba.
link |
01:47:03.620
So iRobot hasn't found the next one.
link |
01:47:07.060
Do you think it's possible for one person in the garage to build it versus, like, Google
link |
01:47:12.580
launching Google self driving car that turns into Waymo?
link |
01:47:16.340
Do you think this is almost like what it takes to build a successful robotics company?
link |
01:47:20.980
Do you think it's possible to go from the ground up, or is it just too much capital
link |
01:47:24.420
investment?
link |
01:47:25.540
Yeah, so it's very hard to get there without a lot of capital.
link |
01:47:31.700
And we're starting to see, you know, fair chunks of capital for some robotics companies.
link |
01:47:38.100
You know, Series B's, I saw one yesterday for $80 million, I think it was, for Covariant.
link |
01:47:45.540
But it can take real money to get into these things, and you may fail along the way.
link |
01:47:54.740
I've certainly failed at Rethink Robotics, and we lost $150 million in capital there.
link |
01:48:00.900
So, okay, so Rethink Robotics is another amazing robotics company you cofounded.
link |
01:48:06.580
So what was the vision there?
link |
01:48:09.060
What was the dream?
link |
01:48:11.140
And what are you most proud of with Rethink Robotics?
link |
01:48:15.620
I'm most proud of the fact that we got robots out of the cage in factories that were safe,
link |
01:48:23.140
absolutely safe, for people and robots to be next to each other.
link |
01:48:26.180
So these are robotic arms.
link |
01:48:27.700
Robotic arms.
link |
01:48:28.500
Able to pick up stuff and interact with humans.
link |
01:48:31.140
Yeah, and that humans could retask them without writing code.
link |
01:48:35.140
And now that's sort of become an expectation for a lot of other little companies and big
link |
01:48:40.020
companies, in the advertising they're doing.
link |
01:48:42.260
That's both an interface problem and also a safety problem.
link |
01:48:45.540
Yeah, yeah.
link |
01:48:47.620
So I'm most proud of that.
link |
01:48:51.300
I completely, I let myself be talked out of what I wanted to do.
link |
01:48:59.380
And, you know, you always got, you know, I can't replay the tape.
link |
01:49:02.260
I can't replay it.
link |
01:49:05.460
Maybe, you know, if I'd been stronger on, and I remember the day, I remember the exact
link |
01:49:12.180
meeting.
link |
01:49:13.860
Can you take me through that meeting?
link |
01:49:16.260
Yeah.
link |
01:49:18.340
So I'd said that I'd set as a target for the company that we were going to build $3,000
link |
01:49:23.940
robots with force feedback that was safe for people to be around.
link |
01:49:29.700
Wow.
link |
01:49:30.420
That was my goal.
link |
01:49:31.380
And we built, so we started in 2008, and we had prototypes built of plastic, plastic
link |
01:49:38.980
gearboxes, and at $3,000, I was saying, we're going to go
link |
01:49:48.180
after not the people who already have robot arms in factories, the people who would never
link |
01:49:52.500
have a robot arm.
link |
01:49:53.940
We're going to go after a different market.
link |
01:49:55.940
So we don't have to meet their expectations.
link |
01:49:57.940
And so we're going to build it out of plastic.
link |
01:49:59.860
It doesn't have to have a 35,000-hour lifetime.
link |
01:50:02.740
It's going to be so cheap that it's OpEx, not CapEx.
link |
01:50:09.140
And so we had a prototype that worked reasonably well, but the control engineers were complaining
link |
01:50:16.980
about these plastic gearboxes, a beautiful little planetary gearbox in which we could use
link |
01:50:24.820
something called series elastic actuators.
link |
01:50:29.780
We embedded them in there.
link |
01:50:30.980
We could measure forces.
link |
01:50:32.180
We knew when we hit something, et cetera.
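What Brooks is describing, series elastic actuation, infers joint torque from the deflection of a spring placed between the gearbox and the load. A minimal sketch of the idea in Python; the stiffness and contact-threshold values are illustrative assumptions, not Rethink's actual numbers or code:

```python
# Sketch of force sensing via a series elastic actuator (SEA).
# All constants and function names here are illustrative assumptions.

SPRING_STIFFNESS = 300.0  # N*m/rad, assumed torsional spring constant
CONTACT_TORQUE = 2.0      # N*m, assumed threshold for "we hit something"

def joint_torque(motor_angle_rad, output_angle_rad):
    """Torque inferred from spring deflection (Hooke's law): the SEA
    puts a spring between the gearbox and the load, so the difference
    between motor-side and load-side angles measures the force."""
    deflection = motor_angle_rad - output_angle_rad
    return SPRING_STIFFNESS * deflection

def hit_something(motor_angle_rad, output_angle_rad):
    """Contact detection: the spring has wound up past the threshold."""
    return abs(joint_torque(motor_angle_rad, output_angle_rad)) > CONTACT_TORQUE
```

The design point is that the spring turns force measurement into a position measurement, which cheap encoders can do.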
link |
01:50:35.060
The control engineers were saying, yeah, but there's this torque ripple because these plastic
link |
01:50:40.100
gears, they're not great gears, and there's this ripple, and trying to do force control
link |
01:50:44.900
around this ripple is so hard.
link |
01:50:47.220
And I'm not going to name names, but I remember one of the mechanical engineers saying, we'll
link |
01:50:55.140
just build a metal gearbox with spur gears, and it'll take six weeks.
link |
01:50:59.620
We'll be done.
link |
01:51:01.140
Problem solved.
link |
01:51:03.700
Two years later, we got the spur gearbox working.
link |
01:51:08.020
We cost reduced it every possible way we could, but now the price went up too.
link |
01:51:15.540
And then the CEO at the time said, well, we have to have two arms, not one arm.
link |
01:51:19.860
So our first robot product, Baxter, now cost $25,000, and the only people who were going
link |
01:51:27.460
to look at that were people who had arms in factories because that was somewhat cheaper
link |
01:51:31.460
for two arms than arms in factories.
link |
01:51:34.180
But they were used to 0.1 millimeter reproducibility of motion and certain velocities, and I kept
link |
01:51:43.700
thinking, but that's not what we're giving you.
link |
01:51:45.620
You don't need position repeatability.
link |
01:51:47.380
Use force control like a human does.
link |
01:51:49.700
No, no, but we want that repeatability.
link |
01:51:53.060
We want that repeatability.
link |
01:51:54.500
All the other robots have that repeatability.
link |
01:51:56.340
Why don't you have that repeatability?
link |
01:51:58.500
So can you clarify?
link |
01:51:59.780
Force control is you can grab the arm and you can move it.
link |
01:52:02.900
You can move it around, but suppose you...
link |
01:52:06.100
Can you see that?
link |
01:52:06.900
Yes.
link |
01:52:07.540
Suppose you want to...
link |
01:52:09.940
Yes.
link |
01:52:10.440
Suppose this thing is a precise thing that's got to fit here in this right angle.
link |
01:52:16.520
Under position control, you have fixtured where this is.
link |
01:52:20.520
You know where this is precisely, and you just move it, and it goes there.
link |
01:52:25.320
In force control, you would do something like slide over here till we feel that and slide
link |
01:52:30.120
it in there, and that's how a human gets precision.
link |
01:52:34.040
They use force feedback and get the things to mate rather than just go straight to it.
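The "slide over until we feel that" strategy Brooks describes is often called a guarded move in robotics. A minimal sketch in Python; the step and contact-sensing callbacks are hypothetical stand-ins for a real robot interface, not anything from Baxter's actual API:

```python
def guarded_move(step_fn, feel_contact_fn, max_steps=1000):
    """Advance in small increments until contact is felt, the way a
    human slides a part along a surface until it seats.

    step_fn          -- moves the arm one small increment (hypothetical)
    feel_contact_fn  -- True when force feedback detects contact (hypothetical)
    """
    for step in range(max_steps):
        if feel_contact_fn():
            return step   # contact: the part has found the mating feature
        step_fn()         # no contact yet, keep sliding
    raise RuntimeError("no contact felt within max_steps")

# Toy 1-D example: slide toward a wall five increments away.
pos = [0]
def step(): pos[0] += 1
def feel(): return pos[0] >= 5

print(guarded_move(step, feel))  # prints 5
```

The point is the termination condition: the motion ends on felt force, not on reaching a pre-taught coordinate, which is why position repeatability matters less under force control.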
link |
01:52:42.440
Couldn't convince our customers who were in factories and were used to thinking about
link |
01:52:48.120
things a certain way, and they wanted it, wanted it, wanted it.
link |
01:52:51.880
So then we said, okay, we're going to build an arm that gives you that.
link |
01:52:56.120
So now we ended up building a $35,000 robot with one arm with...
link |
01:52:59.880
Oh, what are they called?
link |
01:53:04.840
A certain sort of gearbox made by a company whose name I can't remember right now, but
link |
01:53:08.520
it's the name of the gearbox.
link |
01:53:11.880
But it's got torque ripple in it.
link |
01:53:15.560
So now there was an extra two years of solving the problem of doing the force with the torque
link |
01:53:19.720
ripple.
link |
01:53:20.200
So the thing we had avoided doing
link |
01:53:28.440
for the plastic gearboxes, we ended up having to do.
link |
01:53:31.240
The robot was now overpriced and they...
link |
01:53:35.240
And that was your intuition from the very beginning kind of that this is not...
link |
01:53:40.040
You're opening the door to a lot of problems, and you're eventually going to have to solve
link |
01:53:44.760
this problem anyway.
link |
01:53:45.800
Yeah.
link |
01:53:46.120
And also I was aiming at a low price to go into a different market.
link |
01:53:49.240
Low price.
link |
01:53:50.280
That didn't have robots.
link |
01:53:51.160
$3,000 would be amazing.
link |
01:53:52.600
Yeah.
link |
01:53:52.760
I think we could have done it for five.
link |
01:53:54.120
But, you know, you talked about setting the goal a little too far for the engineers.
link |
01:53:58.840
Yeah, exactly.
link |
01:54:02.280
So why would you say that company not failed, but went under?
link |
01:54:09.000
We had buyers and there's this thing called the Committee on Foreign Investment in the
link |
01:54:15.400
U.S., CFIUS.
link |
01:54:18.120
And that had previously been invoked twice.
link |
01:54:21.640
It's where the government could stop foreign money coming into a U.S. company based on
link |
01:54:29.640
defense requirements.
link |
01:54:32.680
We went through due diligence multiple times.
link |
01:54:34.600
We were going to get acquired, but every consortium had Chinese money in it, and all the bankers
link |
01:54:42.280
would say at the last minute, you know, this isn't going to get past CFIUS, and the investors
link |
01:54:47.080
would go away.
link |
01:54:47.880
And then, once we were about to run out of money, we had two buyers, and one used
link |
01:54:54.280
heavy handed legal stuff with the other one, said they were going to take it and pay more,
link |
01:55:02.760
dropped out when we were out of cash, and then bought the assets at one thirtieth of the price
link |
01:55:08.040
they had offered a week before.
link |
01:55:10.920
It was a tough week.
link |
01:55:12.280
Do you, does it hurt to think about like an amazing company that didn't, you know, like
link |
01:55:21.640
iRobot didn't find a way?
link |
01:55:24.440
Yeah, it was tough.
link |
01:55:25.400
I said I was never going to start another company.
link |
01:55:27.480
I was pleased that everyone liked what we did so much that the team was hired by three
link |
01:55:36.360
companies, and I was very happy that we were able to do that.
link |
01:55:40.040
Three companies within a week.
link |
01:55:42.920
Everyone had a job in one of these three companies.
link |
01:55:44.760
Some stayed at their same desks because another company came in and rented the space.
link |
01:55:50.680
So I felt good about people not being out on the street.
link |
01:55:55.720
So Baxter has a screen with a face.
link |
01:55:59.560
What, that's a revolutionary idea for robot manipulation, like for a robotic arm.
link |
01:56:07.320
How much opposition did you get?
link |
01:56:08.840
Well, first the screen was also used during codeless programming.
link |
01:56:12.920
We taught by demonstration.
link |
01:56:14.440
It showed you what its understanding of the task was.
link |
01:56:17.640
So it had two roles.
link |
01:56:21.240
Some customers hated it, and so we made it so that when the robot was running it could
link |
01:56:26.520
be showing graphs of what was happening and not show the eyes.
link |
01:56:30.200
Other people, and some of them surprised me who they were, said, well, this one doesn't
link |
01:56:36.600
look as human as the old one.
link |
01:56:37.960
We liked the human looking.
link |
01:56:39.640
Yeah.
link |
01:56:40.120
So there was a mixed bag.
link |
01:56:43.240
But do you think that's, I don't know, I'm kind of disappointed whenever I talk to
link |
01:56:50.360
roboticists, like the best robotics people in the world, they seem to not want to do
link |
01:56:55.160
the eyes type of thing.
link |
01:56:56.760
Like they seem to see it as a machine as opposed to a machine that can also have a human connection.
link |
01:57:02.760
I'm not sure what to do with that.
link |
01:57:03.960
It seems like a lost opportunity.
link |
01:57:05.480
I think the trillion dollar company will have to do the human connection very well no matter
link |
01:57:10.440
what it does.
link |
01:57:11.160
Yeah, I agree.
link |
01:57:13.800
Can I ask you a ridiculous question?
link |
01:57:15.560
Sure.
link |
01:57:17.000
I might give a ridiculous answer.
link |
01:57:19.880
Do you think, well maybe by way of asking the question, let me first mention that you're
link |
01:57:25.640
kind of critical of the idea of the Turing test as a test of intelligence.
link |
01:57:32.280
Let me first ask this question.
link |
01:57:33.640
Do you think we'll be able to build an AI system that humans fall in love with and it
link |
01:57:40.360
falls in love with the human, like romantic love?
link |
01:57:46.920
Well, we've had that with humans falling in love with cars even back in the 50s.
link |
01:57:51.560
It's a different love, right?
link |
01:57:52.680
Well, yeah.
link |
01:57:53.640
I think there's a lifelong partnership where you can communicate and grow like...
link |
01:57:59.640
I think we're a long way from that.
link |
01:58:01.160
I think we're a long, long way.
link |
01:58:03.000
I think Blade Runner had the time scale totally wrong.
link |
01:58:10.440
Yeah, but so to me, honestly, the most difficult part is the thing that you said with Moravec's
link |
01:56:16.840
paradox, to create a human form that interacts and perceives the world.
link |
01:58:21.400
But if we just look at a voice, like the movie Her or just like an Alexa type voice, I tend
link |
01:58:28.040
to think we're not that far away.
link |
01:58:29.560
Well, for some people, maybe not, but as humans, as we think about the future, we always try
link |
01:58:43.400
to...
link |
01:58:44.200
And this is the premise of most science fiction movies.
link |
01:58:46.920
You've got the world just as it is today and you change one thing.
link |
01:58:50.920
But that's not how...
link |
01:58:51.960
And it's the same with a self driving car.
link |
01:58:53.960
You change one thing.
link |
01:58:55.000
No, everything changes.
link |
01:58:56.840
Everything grows together.
link |
01:58:59.720
So surprisingly, it might be surprising to you or might not, I think the best movie about
link |
01:59:04.520
this stuff was Bicentennial Man.
link |
01:59:09.160
And what was happening there?
link |
01:59:11.080
It was schmaltzy and, you know, but what was happening there?
link |
01:59:15.720
As the robot was trying to become more human, the humans were adopting the technology of
link |
01:59:21.160
the robot and changing their bodies.
link |
01:59:23.080
So there was a convergence happening in a sense.
link |
01:59:27.160
So we will not be the same.
link |
01:59:28.760
You know, we're already talking about genetically modifying our babies.
link |
01:59:32.440
You know, there's more and more stuff happening around that.
link |
01:59:36.680
We will want to modify ourselves even more for all sorts of things.
link |
01:59:43.240
We put all sorts of technology in our bodies to improve it.
link |
01:59:48.440
You know, I've got things in my ears so that I can sort of hear you.
link |
01:59:53.560
Yeah.
link |
01:59:56.120
So we're always modifying our bodies.
link |
01:59:57.480
So, you know, I think it's hard to imagine exactly what it will be like in the future.
link |
02:00:03.640
But on the Turing test side, do you think, so forget about love for a second, let's talk
link |
02:00:09.720
about just like the Alexa Prize.
link |
02:00:12.280
Actually, I was invited to be a part of the Alexa Prize.
link |
02:00:16.200
Actually, I was invited to be, what is it, the interviewer for the Alexa Prize or whatever,
link |
02:00:23.080
that's in two days.
link |
02:00:25.320
Their idea is success looks like a person wanting to talk to an AI system for a prolonged
link |
02:00:32.440
period of time, like 20 minutes.
link |
02:00:35.080
How far away are we and why is it difficult to build an AI system with which you'd want
link |
02:00:41.400
to have a beer and talk for an hour or two hours?
link |
02:00:45.720
Like not for to check the weather or to check music, but just like to talk as friends.
link |
02:00:53.160
Yeah, well, you know, we saw Weizenbaum back in the 60s with his program, ELIZA, being
link |
02:01:00.840
shocked at how much people would talk to ELIZA.
link |
02:01:03.080
And I remember, you know, in the 70s typing, you know, stuff to ELIZA to see what it would
link |
02:01:08.360
come back with.
link |
02:01:09.000
You know, I think right now, and this is a thing that Amazon's been trying to improve
link |
02:01:17.960
with Alexa, there is no continuity of topic.
link |
02:01:22.760
There's not, you can't refer to what we talked about yesterday.
link |
02:01:27.880
It's not the same as talking to a person where there seems to be an ongoing existence, which
link |
02:01:32.360
changes.
link |
02:01:33.800
We share moments together and they last in our memory together.
link |
02:01:37.080
Yeah, there's none of that.
link |
02:01:39.000
And there's no sort of intention of these systems that they have any goal in life, even
link |
02:01:46.840
if it's to be happy, you know, they don't even have a semblance of that.
link |
02:01:51.880
Now, I'm not saying this can't be done.
link |
02:01:53.720
I'm just saying, I think this is why we don't feel that way about them.
link |
02:01:57.960
That's a sort of a minimal requirement.
link |
02:02:01.560
If you want the sort of interaction you're talking about, it's a minimal requirement.
link |
02:02:06.840
Whether it's going to be sufficient, I don't know.
link |
02:02:10.360
We haven't seen it yet.
link |
02:02:11.560
We don't know what it feels like.
link |
02:02:14.120
I tend to think it's not as difficult as solving intelligence, for example, and I think it's
link |
02:02:23.160
achievable in the near term.
link |
02:02:26.680
But on the Turing test, why don't you think the Turing test is a good test of intelligence?
link |
02:02:32.200
Oh, because, you know, again, the Turing, if you read the paper, Turing wasn't saying
link |
02:02:39.080
this is a good test.
link |
02:02:40.440
He was using it as a rhetorical device to argue that if you can't tell the difference
link |
02:02:46.520
between a computer and a person, you must say that the computer's thinking because you
link |
02:02:52.920
can't tell the difference, you know, when it's thinking.
link |
02:02:56.600
You can't say something different.
link |
02:02:58.280
What it has become is this sort of weird game of fooling people, so back at the AI Lab in
link |
02:03:08.920
the late 80s, we had this thing that still goes on called the AI Olympics, and one of
link |
02:03:14.280
the events we had one year was the original imitation game, as Turing talked about, because
link |
02:03:21.320
he starts by saying, can you tell whether it's a man or a woman?
link |
02:03:25.160
So we did that at the Lab.
link |
02:03:28.680
You'd go and type, and the thing would come back, and you had to tell whether it was a
link |
02:03:33.720
man or a woman, and one man came up with a question that he could ask, which was always
link |
02:03:50.920
a dead giveaway of whether the other person was really a man or a woman.
link |
02:03:56.520
He would ask them, did you have green plastic toy soldiers as a kid?
link |
02:04:01.400
Yeah.
link |
02:04:01.880
What did you do with them?
link |
02:04:03.240
And a woman trying to be a man would say, oh, I lined them up.
link |
02:04:07.160
We had wars.
link |
02:04:07.800
We had battles.
link |
02:04:08.760
And the man, just being a man, would say, I stomped on them.
link |
02:04:11.240
I burned them.
link |
02:04:11.960
So that's what the Turing test with computers has become.
link |
02:04:21.480
What's the trick question?
link |
02:04:23.560
That's why I say it's sort of devolved into this weirdness.
link |
02:04:29.480
Nevertheless, conversation not formulated as a test is a fascinatingly challenging dance.
link |
02:04:36.680
That's a really hard problem.
link |
02:04:38.200
To me, conversation, when not posed as a test, is a more intuitive illustration of how far away
link |
02:04:45.720
we are from solving intelligence than computer vision.
link |
02:04:48.760
It's hard.
link |
02:04:49.960
Computer vision is harder for me to pull apart.
link |
02:04:53.000
But with language, with conversation, you could see.
link |
02:04:55.400
Because language is so human.
link |
02:04:56.840
It's so human.
link |
02:04:58.680
We can so clearly see it.
link |
02:05:04.280
Shit, you mentioned something I was going to go off on.
link |
02:05:06.920
OK.
link |
02:05:08.920
I mean, I have to ask you, because you were the head of CSAIL, AI Lab, for a long time.
link |
02:05:17.560
I don't know.
link |
02:05:18.840
To me, when I came to MIT, you were one of the greats at MIT.
link |
02:05:22.840
So what was that time like?
link |
02:05:25.960
And plus, you're friends with, but you knew Minsky and all the folks there, all the legendary
link |
02:05:34.760
AI people of which you're one.
link |
02:05:37.960
So what was that time like?
link |
02:05:39.560
What are memories that stand out to you from that time, from your time at MIT, from the
link |
02:05:46.760
AI Lab, from the dreams that the AI Lab represented, to the actual revolutionary work?
link |
02:05:53.000
Well, let me tell you first the disappointment in myself.
link |
02:05:56.760
As I've been researching this book, and so many of the players were active in the 50s
link |
02:06:03.960
and 60s, I knew many of them when they were older, and I didn't ask them all the questions
link |
02:06:08.600
now I wish I had asked.
link |
02:06:11.320
I'd sit with them at our Thursday lunches, where we had a faculty lunch, and I didn't
link |
02:06:16.760
ask them so many questions that now I wish I had.
link |
02:06:19.720
Can I ask you that question?
link |
02:06:20.840
Because you wrote that.
link |
02:06:22.440
You wrote that you were fortunate to know and rub shoulders with many of the greats,
link |
02:06:26.600
those who founded AI, robotics, and computer science, and the World Wide Web.
link |
02:06:30.680
And you wrote that your big regret nowadays is that often I have questions for those who
link |
02:06:34.760
have passed on, and I didn't think to ask them any of these questions, even as I saw
link |
02:06:41.560
them and said hello to them on a daily basis.
link |
02:06:44.120
So maybe also another question I want to ask, if you could talk to them today, what question
link |
02:06:51.160
would you ask?
link |
02:06:51.960
What questions would you ask?
link |
02:06:53.240
Well, Licklider, I would ask him.
link |
02:06:56.440
You know, he had the vision for humans and computers working together, and he really
link |
02:07:02.600
founded that at DARPA, and he gave the money to MIT, which started Project MAC in 1963.
link |
02:07:12.360
And I would have talked to him about what the successes were, what the failures were,
link |
02:07:16.200
what he saw as progress, etc.
link |
02:07:18.680
I would have asked him more questions about that, because now I could use it in my book,
link |
02:07:24.680
you know, but I think it's lost.
link |
02:07:25.880
It's lost forever.
link |
02:07:26.920
A lot of the motivations are lost.
link |
02:07:33.240
I should have asked Marvin why he and Seymour Papert came down so hard on neural networks
link |
02:07:40.840
in 1969 in their book Perceptrons, because Marvin's PhD thesis was all about neural networks.
link |
02:07:48.440
And how do you make sense of that?
link |
02:07:50.280
That book destroyed the field.
link |
02:07:52.040
He probably, do you think he knew the effect that book would have?
link |
02:07:59.480
All the theorems are negative theorems.
link |
02:08:02.280
Yeah.
link |
02:08:03.880
Yeah.
link |
02:08:04.920
So, yeah.
link |
02:08:05.960
That's just the way of, that's the way of life.
link |
02:08:10.920
But still, it's kind of tragic that he was both the proponent and the destroyer of neural
link |
02:08:15.800
networks.
link |
02:08:16.360
Yeah.
link |
02:08:19.160
Are there other memories that stand out from the robotics and the AI work at MIT?
link |
02:08:28.120
Well, yeah, but you gotta be more specific.
link |
02:08:31.320
Well, I mean, like, it's such a magical place.
link |
02:08:33.160
I mean, to me, it's a little bit also heartbreaking that, you know, with Google and Facebook,
link |
02:08:40.520
like DeepMind and so on, so much of the talent, you know, it doesn't stay necessarily
link |
02:08:46.280
for prolonged periods of time in these universities.
link |
02:08:50.440
Oh, yeah.
link |
02:08:50.940
I mean, some of the companies are more guilty than others of paying fabulous salaries to
link |
02:08:57.800
some of the highest, you know, producers.
link |
02:09:00.120
And then just, you never hear from them again.
link |
02:09:02.840
They're not allowed to give public talks.
link |
02:09:04.600
They're sort of locked away.
link |
02:09:06.600
And it's sort of like collecting, you know, Hollywood stars or something.
link |
02:09:12.280
And they're not allowed to make movies anymore.
link |
02:09:13.960
I own them.
link |
02:09:14.460
Yeah.
link |
02:09:15.660
That's tragic because, I mean, there's an openness to the university setting where you
link |
02:09:20.700
do research to both in the space of ideas and like publication, all those kinds of things.
link |
02:09:25.580
Yeah, you know, and, you know, there's the publication and all that.
link |
02:09:28.940
And often, you know, although these places say they publish.
link |
02:09:32.940
There's pressure.
link |
02:09:33.660
But I think, for instance, you know, on net net, I think Google buying those eight or
link |
02:09:41.260
nine robotics companies was bad for the field because it locked those people away.
link |
02:09:46.620
They didn't have to make the company succeed anymore, locked them away for years, and then
link |
02:09:53.660
sort of frittered it all away.
link |
02:09:55.660
Yeah.
link |
02:09:56.160
So do you have hope for MIT?
link |
02:10:02.960
Yeah.
link |
02:10:03.460
Why shouldn't I?
link |
02:10:04.560
Well, I could be harsh and say that I'm not sure I would say MIT is leading the world
link |
02:10:11.200
in AI or even Stanford or Berkeley.
link |
02:10:15.440
I would say, I would say DeepMind, Google AI, Facebook AI, all of those things.
link |
02:10:23.680
I would take a slightly different approach, a different answer.
link |
02:10:30.560
I'll come back to Facebook in a minute.
link |
02:10:32.880
But I think those other places are following a dream of one of the founders.
link |
02:10:42.880
And I'm not sure that it's well founded, the dream.
link |
02:10:46.560
And I'm not sure that it's going to have the impact that he believes it is.
link |
02:10:54.720
You're talking about Facebook and Google and so on.
link |
02:10:56.560
I'm talking about Google.
link |
02:10:57.600
Google.
link |
02:10:58.320
But the thing is, those research labs aren't, there's the big dream.
link |
02:11:03.920
And I'm usually a fan of no matter what the dream is, a big dream is a unifier.
link |
02:11:08.480
Because what happens is you have a lot of bright minds working together on a dream.
link |
02:11:15.200
What results is a lot of adjacent ideas, and that's how so much progress is made.
link |
02:11:20.000
Yeah.
link |
02:11:21.040
So I'm not saying they're actually leading.
link |
02:11:22.560
I'm not saying that the universities are leading.
link |
02:11:25.280
Yeah.
link |
02:11:25.780
But I don't think those companies are leading in general because they're,
link |
02:11:28.960
we saw this incredible spike in attendees at NeurIPS.
link |
02:11:36.160
And as I said in my January 1st review this year for 2020, 2020 will not be
link |
02:11:44.800
remembered as a watershed year for machine learning or AI.
link |
02:11:48.560
Nothing surprising happened, anyway.
link |
02:11:52.720
Unlike when deep learning hit ImageNet.
link |
02:11:57.440
That was a shake.
link |
02:12:02.080
And there's a lot more people writing papers, but the papers are fundamentally
link |
02:12:06.640
boring and uninteresting.
link |
02:12:08.800
Incremental work.
link |
02:12:09.760
Yeah.
link |
02:12:10.260
Is there a particular memories you have with Minsky or somebody else at
link |
02:12:13.140
MIT that stand out, funny stories?
link |
02:12:16.340
I mean, unfortunately, he's another one that's passed away.
link |
02:12:21.940
You've known some of the biggest minds in AI.
link |
02:12:24.580
Yeah.
link |
02:12:25.080
And you know, they, they did amazing things and sometimes they were grumpy.
link |
02:12:31.460
Well, he was, uh, he was interesting cause he was very grumpy, but that,
link |
02:12:35.380
that was his, uh, I remember him saying in an interview that the key to success
link |
02:12:41.780
or to keep being productive, is to hate everything you've ever done in the past.
link |
02:12:45.940
Maybe that, maybe that explains the Perceptrons book.
link |
02:12:49.940
There it was.
link |
02:12:50.440
He told you exactly.
link |
02:12:53.540
But he, meaning like, just like, I mean, maybe that's the way to not
link |
02:12:58.100
treat yourself too seriously.
link |
02:12:59.380
Just, uh, you know, you're not
link |
02:13:03.940
treating yourself too seriously.
link |
02:13:05.620
Just, uh, always be moving forward.
link |
02:13:09.220
Uh, that was the idea.
link |
02:13:10.100
I mean, that, that crankiness, I mean, there's a, uh, that's the scary.
link |
02:13:14.980
So let me tell you, uh, you know, what really, um, you know,
link |
02:13:21.060
the joy memories are about having access to technology before anyone else has seen
link |
02:13:27.460
it.
link |
02:13:27.960
You know, I got to Stanford in 1977 and we had, um, you know, we had terminals
link |
02:13:34.860
that could show live video on them.
link |
02:13:37.260
Um, digital, digital sound system.
link |
02:13:40.620
We had a Xerox graphics printer.
link |
02:13:45.020
We could print, um, uh, it wasn't, you know, it wasn't like a typewriter
link |
02:13:50.140
ball hitting in characters.
link |
02:13:51.980
It could print arbitrary things.
link |
02:13:53.580
I mean, you know, one bit, you know, black or white, but you get arbitrary pictures.
link |
02:13:58.300
This was science fiction sort of stuff.
link |
02:14:00.380
Um, at MIT, the, uh, the Lisp machines, which, you know, they were the
link |
02:14:07.260
first personal computers and, you know, cost a hundred thousand dollars each.
link |
02:14:12.060
And I could, you know, I got there early enough in the day.
link |
02:14:14.620
I got one for the day.
link |
02:14:15.980
Couldn't, couldn't stand up.
link |
02:14:17.420
I had to keep working.
link |
02:14:18.380
Um, so, having that, like, direct glimpse into the future.
link |
02:14:25.340
Yeah.
link |
02:14:25.580
And, and, you know, I've had email every day since 1977.
link |
02:14:29.100
Um, and, uh, you know, the, the host field was only eight bits, you know, that many
link |
02:14:36.060
places, but I could send the email to other people at a few places.
link |
02:14:39.980
So that was, that was pretty exciting to be in that world so different from what
link |
02:14:45.340
the rest of the world knew.
link |
02:14:46.780
Um, let me ask you, I'll probably edit this out, but just in case you have a
link |
02:14:53.420
story, uh, I'm hanging out with Don Knuth, uh, for a while tomorrow.
link |
02:15:00.060
Did you ever get a chance to... it's such a different world than yours.
link |
02:15:03.340
He's a very kind of theoretical computer science, the puzzle of, uh, of, uh, computer
link |
02:15:08.300
science and mathematics.
link |
02:15:09.500
And you're so much about the magic of robotics, like the practice of it.
link |
02:15:13.740
You mentioned him earlier for like, not, you know, about computation.
link |
02:15:17.820
Did your worlds cross?
link |
02:15:19.580
They did enough.
link |
02:15:20.540
You know, I, I know him now we talk, you know, but let me tell you my, my Donald
link |
02:15:25.100
Knuth story.
link |
02:15:26.700
So, um, you know, besides, you know, analysis of algorithms, he's well known for
link |
02:15:32.140
writing TeX, which underlies LaTeX, the academic publishing system.
link |
02:15:37.580
So he did that at the AI lab and he would do it.
link |
02:15:41.740
He would work overnight at the AI lab.
link |
02:15:45.020
And one, one day, one night, the, uh, the mainframe computer went down and, um, uh,
link |
02:15:55.660
a guy named Robert Poor was there.
link |
02:15:57.340
He did his PhD at the Media Lab at MIT and he was, um, you know, an engineer.
link |
02:16:04.300
And so he and I, you know, tracked down what the problem was.
link |
02:16:08.780
It was one of these big refrigerator size or washing machine size disk drives that had
link |
02:16:13.100
failed.
link |
02:16:13.500
And that's what brought the whole system down.
link |
02:16:15.500
So we've got panels pulled off and we're pulling, you know, circuit cards out.
link |
02:16:20.300
And Donald Knuth, who's a really tall guy walks in and he's looking down and says,
link |
02:16:25.340
when will it be fixed?
link |
02:16:26.540
You know, cause he wanted to get back to writing his tech system.
link |
02:16:31.340
And so we figured out, you know, it was a particular chip, a 7400 series chip,
link |
02:16:37.420
which was socketed.
link |
02:16:38.700
We popped it out.
link |
02:16:40.780
We put a replacement in, put it back in.
link |
02:16:43.340
Smoke comes out cause we put it in backwards.
link |
02:16:45.740
Cause we were so nervous that Donald Knuth was standing over us.
link |
02:16:49.500
Anyway, we eventually got it fixed and got the mainframe running again.
link |
02:16:53.660
So that was your little, when was that again?
link |
02:16:56.220
Well, that must have been before October 79.
link |
02:16:58.860
Cause we moved out of that building then.
link |
02:17:00.300
So sometime probably 78 sometime early 79.
link |
02:17:03.740
Yeah, all those figures are just fascinating.
link |
02:17:06.140
All the people who passed through MIT, it's really fascinating.
link |
02:17:10.220
Is there, let me ask you to put on your big wise man hat.
link |
02:17:18.140
Is there advice that you can give to young people today,
link |
02:17:20.860
whether in high school or college who are thinking about their career
link |
02:17:24.700
or thinking about life, how to live a life they're proud of, a successful life?
link |
02:17:32.060
Yeah. So, so many people ask me for advice and have asked for,
link |
02:17:36.140
and I give, I talk to a lot of people all the time and there is no one way.
link |
02:17:44.060
You know, there's a lot of pressure to produce papers
link |
02:17:51.900
that will be acceptable and be published.
link |
02:17:56.460
Maybe I was, maybe I can't do it.
link |
02:17:58.620
Maybe I come from an age where
link |
02:18:03.340
I could be a rebel against that and still succeed.
link |
02:18:07.100
Maybe it's harder today, but I think it's important not to get too caught up
link |
02:18:14.860
with what everyone else is doing.
link |
02:18:18.380
And if you, if, well, it depends on what you want of life.
link |
02:18:22.940
If you want to have real impact, you have to be ready to fail a lot of times.
link |
02:18:31.100
So you have to make a lot of unsafe decisions.
link |
02:18:34.220
And the only way to make that work is to keep doing it for a long time.
link |
02:18:38.700
And then one of them will work out.
link |
02:18:40.220
And so that will make something successful.
link |
02:18:43.740
Or not.
link |
02:18:45.500
Or yeah, or you just may, you know, end up
link |
02:18:48.780
having a lousy career.
link |
02:18:50.780
I mean, it's certainly possible.
link |
02:18:52.140
Taking the risk is the thing.
link |
02:18:53.420
Yeah.
link |
02:18:56.220
But there's no way to, to make all safe decisions and actually really contribute.
link |
02:19:06.620
Do you think about your death, about your mortality?
link |
02:19:12.300
I got to say when COVID hit, I did.
link |
02:19:15.660
Because we did, you know, in the early days, we didn't know how bad it was going to be.
link |
02:19:18.860
And I, that, that made me work on my book harder for a while,
link |
02:19:22.780
but then I'd started this company and now I'm doing full time,
link |
02:19:25.900
more than full time of the company.
link |
02:19:27.100
So the book's on hold, but I do want to finish this book.
link |
02:19:30.300
When you think about it, are you afraid of it?
link |
02:19:35.820
I'm afraid of dribbling, you know, of losing it.
link |
02:19:42.220
The details of, okay.
link |
02:19:43.980
Yeah.
link |
02:19:45.180
Yeah.
link |
02:19:45.580
But the fact that the ride ends, I've known that for a long time.
link |
02:19:51.260
So it's, yeah, but there's knowing and knowing.
link |
02:19:55.420
It's such a, yeah.
link |
02:19:57.580
And it really sucks.
link |
02:19:58.780
It feels, it feels a lot closer.
link |
02:20:01.820
So in my blog with my predictions, my sort of pushback against that was that I said,
link |
02:20:08.940
I'm going to review these every year for 32 years and that puts me into my mid nineties.
link |
02:20:14.860
So, you know, every time you write the blog post,
link |
02:20:18.780
you're getting closer and closer to your own prediction of your death.
link |
02:20:23.660
Yeah.
link |
02:20:24.940
What do you hope your legacy is?
link |
02:20:28.140
You're one of the greatest roboticists and AI researchers of all time.
link |
02:20:34.700
What I hope is that I actually finished writing this book
link |
02:20:38.140
and that there's one person who reads it and sees something in it that changes the way they're thinking.
link |
02:20:48.940
And that leads to the next big.
link |
02:20:54.860
And then they'll be on a podcast a hundred years from now saying I once read that book
link |
02:21:01.580
and that changed everything.
link |
02:21:04.460
What do you think is the meaning of life?
link |
02:21:06.140
This whole thing, the existence, all the hurried things we do
link |
02:21:10.140
on this planet, what do you think is the meaning of it all?
link |
02:21:13.260
Yeah. Well, you know, I think we're all really bad at it.
link |
02:21:17.180
Life or finding meaning or both.
link |
02:21:19.020
Yeah. We get caught up in, it's easier to do the stuff that's immediate
link |
02:21:24.940
and not do the stuff that's not immediate.
link |
02:21:27.820
So the big picture we're bad at.
link |
02:21:29.980
Yeah. Yeah.
link |
02:21:31.020
Do you have a sense of what that big picture is?
link |
02:21:33.900
Like, do you ever look up to the stars and ask, why the hell are we here?
link |
02:21:41.580
You know, my atheism tells me it's just random, but, you know, I want to understand the
link |
02:21:50.380
way randomness works, and that's what I talk about in this book, how order comes from disorder.
link |
02:21:55.660
Yeah.
link |
02:21:58.220
But it kind of sprung up. Like, most of the whole thing is random, but this
link |
02:22:02.460
little pocket of complexity that we call Earth,
link |
02:22:07.660
like, why the hell does that happen?
link |
02:22:10.300
And what we don't know is how common those pockets of complexity are or how often,
link |
02:22:18.060
um, cause they may not last forever.
link |
02:22:22.780
Which is, uh, more exciting slash sad to you, if we're alone or if there's an infinite number of...
link |
02:22:30.460
Oh, I think, I think it's impossible for me to believe that we're alone.
link |
02:22:36.300
Um, that would just be too horrible, too cruel.
link |
02:22:41.500
It could be like the sad thing.
link |
02:22:43.180
It could be like a graveyard of intelligent civilizations.
link |
02:22:46.300
Oh, everywhere.
link |
02:22:46.940
Yeah.
link |
02:22:47.980
That might be the most likely outcome.
link |
02:22:50.620
And for us too.
link |
02:22:51.660
Yeah, exactly.
link |
02:22:52.540
Yeah.
link |
02:22:52.860
And all of this will be forgotten.
link |
02:22:54.700
Yeah.
link |
02:22:54.940
Yeah, including all the robots you build, everything forgotten.
link |
02:23:01.500
Well, on average, everyone has been forgotten in history.
link |
02:23:05.740
Yeah.
link |
02:23:06.220
Right.
link |
02:23:06.940
Yeah.
link |
02:23:07.500
Most people are not remembered beyond the generation or two.
link |
02:23:11.100
Um, I mean, yeah.
link |
02:23:12.780
Well, not just on average, basically very close to a hundred percent of people who've ever lived
link |
02:23:17.900
are forgotten.
link |
02:23:18.780
Yeah.
link |
02:23:19.020
I mean, you know, long arc of, I don't know anyone alive who remembers my great grandparents
link |
02:23:24.140
because we didn't meet them.
link |
02:23:26.300
So still, this, uh, life is pretty fun somehow.
link |
02:23:32.460
Yeah.
link |
02:23:33.660
Even with the immense absurdity and, uh, at times, meaninglessness of it all.
link |
02:23:39.180
It's pretty fun.
link |
02:23:40.220
And one of the, for me, one of the most fun things is robots.
link |
02:23:43.740
And I've looked up to your work.
link |
02:23:45.180
I've looked up to you for a long time.
link |
02:23:46.780
That's right.
link |
02:23:47.180
God.
link |
02:23:47.740
Rod, it's, it's an honor that, uh, you would spend your valuable time with me today talking.
link |
02:23:53.580
It was an amazing conversation.
link |
02:23:54.780
Thank you so much for being here.
link |
02:23:55.980
Well, thanks for, thanks for talking with me.
link |
02:23:57.820
I've enjoyed it.
link |
02:24:00.060
Thanks for listening to this conversation with Rodney Brooks.
link |
02:24:02.700
To support this podcast, please check out our sponsors in the description.
link |
02:24:06.860
And now let me leave you with the three laws of robotics from Isaac Asimov.
link |
02:24:12.620
One, a robot may not injure a human being or, through inaction, allow a human being to come to
link |
02:24:19.020
harm. Two, a robot must obey the orders given to it by human beings, except when such orders
link |
02:24:25.580
would conflict with the first law. And three, a robot must protect its own existence as long
link |
02:24:32.860
as such protection does not conflict with the first or the second laws.
link |
02:24:38.620
Thank you for listening.
link |
02:24:39.740
I hope to see you next time.