
Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49



link |
00:00:00.000
The following is a conversation with Elon Musk, Part 2, the second time we spoke on the podcast,
link |
00:00:07.280
with parallels, if not in quality, then in outfit, to the objectively speaking greatest
link |
00:00:13.120
sequel of all time, Godfather Part 2. As many people know, Elon Musk is a leader of Tesla,
link |
00:00:20.720
SpaceX, Neuralink, and the Boring Company. What may be less known is that he's a world
link |
00:00:26.560
class engineer and designer, constantly emphasizing first principles thinking and taking on big
link |
00:00:32.480
engineering problems that many before him considered impossible. As scientists and engineers,
link |
00:00:39.600
most of us don't question the way things are done, we simply follow the momentum of the crowd.
link |
00:00:44.880
But revolutionary ideas that change the world on the small and large scales happen when you
link |
00:00:51.520
return to the fundamentals and ask, is there a better way? This conversation focuses on the
link |
00:00:57.840
incredible engineering and innovation done in brain computer interfaces at Neuralink.
link |
00:01:04.160
This work promises to help treat neurobiological diseases to help us further understand the
link |
00:01:09.440
connection between the individual neuron and the high-level function of the human brain.
link |
00:01:14.400
And finally, to one day expand the capacity of the brain through two way communication
link |
00:01:20.240
with computational devices, the internet, and artificial intelligence systems.
link |
00:01:25.440
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
link |
00:01:31.040
Apple Podcasts, Spotify, support on Patreon, or simply connect with me on Twitter
link |
00:01:36.320
at Lex Fridman, spelled F R I D M A N. And now, as an anonymous YouTube commenter referred to
link |
00:01:43.520
our previous conversation as the quote, historical first video of two robots conversing without
link |
00:01:49.440
supervision, here's the second time, the second conversation with Elon Musk.
link |
00:01:57.840
Let's start with an easy question about consciousness. In your view, is consciousness
link |
00:02:03.120
something that's unique to humans or is it something that permeates all matter, almost like
link |
00:02:07.600
a fundamental force of physics? I don't think consciousness permeates all matter. Panpsychists
link |
00:02:13.680
believe that. Yeah, there's a philosophical question there. How would you tell? That's true. That's a good point.
link |
00:02:21.120
I believe in the scientific method. Not to blow your mind or anything, but the scientific
link |
00:02:24.240
method is like, if you cannot test the hypothesis, then you cannot reach a meaningful conclusion that
link |
00:02:28.800
it is true. Do you think understanding consciousness is within the
link |
00:02:34.000
reach of science of the scientific method? We can dramatically improve our understanding of
link |
00:02:40.160
consciousness. You know, we'd be hard-pressed to say that we understand anything with complete accuracy,
link |
00:02:47.120
but can we dramatically improve our understanding of consciousness? I believe the answer is yes.
link |
00:02:53.360
Does an AI system in your view have to have consciousness in order to achieve human level
link |
00:02:58.480
or superhuman level intelligence? Does it need to have some of these human qualities, like
link |
00:03:03.360
consciousness, maybe a body, maybe a fear of mortality, a capacity to love, those kinds of
link |
00:03:11.120
silly human things? There's a difference, you know. There's the scientific method,
link |
00:03:19.440
which I very much believe in, where something is true to the degree that it is testably so,
link |
00:03:25.200
and otherwise, you're really just talking about, you know, preferences or untestable beliefs or
link |
00:03:34.560
that, you know, that kind of thing. So it ends up being somewhat of a semantic question, where
link |
00:03:42.320
we're conflating a lot of things with the word intelligence. If we parse them out and say,
link |
00:03:46.880
you know, are we headed towards the future where an AI will be able to outthink us in every way?
link |
00:03:57.520
Then the answer is unequivocally yes.
link |
00:04:01.440
In order for an AI system to outthink us in every way, does it also need to have
link |
00:04:07.360
a capacity for consciousness, self-awareness, and understanding?
link |
00:04:12.320
It will be self-aware, yes, that's different from consciousness. I mean, to me, in terms of
link |
00:04:18.400
what consciousness feels like, it feels like consciousness is in a different dimension.
link |
00:04:22.640
But this could be just an illusion. You know, if you damage your brain in some way,
link |
00:04:30.480
physically, you damage your consciousness, which implies that consciousness
link |
00:04:35.920
is a physical phenomenon. And in my view, the thing that I think is really
link |
00:04:42.720
quite likely is that digital intelligence will be able to outthink us in every way, and it will
link |
00:04:48.880
simply be able to simulate what we consider consciousness. So to the degree that you would
link |
00:04:54.080
not be able to tell the difference. And from the aspect of the scientific method,
link |
00:04:58.160
it might as well be consciousness, if we can simulate it perfectly.
link |
00:05:01.440
If you can't tell the difference, then this is sort of the Turing test, but think of a more
link |
00:05:06.800
sort of advanced version of the Turing test. If you're talking to a digital super
link |
00:05:13.600
intelligence and can't tell if that is a computer or a human, like let's say you're just having a
link |
00:05:19.440
conversation over a phone or a video conference or something where you think you're talking
link |
00:05:26.480
to a person. It looks like a person, makes all of the right inflections and movements and all the small
link |
00:05:33.360
subtleties that constitute a human, and talks like a human, makes mistakes like a human,
link |
00:05:42.400
and you literally just can't tell: are you video conferencing with a person or an AI?
link |
00:05:49.120
Might as well be human. So on a darker topic, you've expressed serious concern
link |
00:05:54.960
about existential threats of AI. It's perhaps one of the greatest challenges our civilization faces,
link |
00:06:02.400
but since I would say we're kind of optimistic descendants of apes, perhaps we can find several
link |
00:06:08.480
paths of escaping the harm of AI.
link |
00:06:16.480
So if I can give you three options, maybe you can comment on which you
link |
00:06:21.920
think is the most promising. So one is scaling up efforts on AI safety and beneficial AI research
link |
00:06:29.040
in hope of finding an algorithmic or maybe a policy solution. Two is becoming a multi planetary
link |
00:06:35.920
species as quickly as possible. And three is merging with AI and riding the wave of that
link |
00:06:44.000
increasing intelligence as it continuously improves. What do you think is most promising,
link |
00:06:49.280
most interesting, as a civilization that we should invest in?
link |
00:06:54.640
I think there's a tremendous amount of investment going on in AI. Where there's a lack
link |
00:06:59.200
of investment is in AI safety. And there should be, in my view, a government agency that oversees
link |
00:07:07.760
anything related to AI to confirm that it does not represent a public safety risk,
link |
00:07:12.960
just as there is a regulatory authority like the Food and Drug Administration, one that's for
link |
00:07:20.320
automotive safety, there's the FAA for aircraft safety. I've really come to the conclusion that
link |
00:07:25.920
it is important to have a government referee that is serving the public interest
link |
00:07:31.120
in ensuring that things are safe when there's a potential danger to the public.
link |
00:07:37.120
I would argue that AI is unequivocally something that has potential to be dangerous to the public,
link |
00:07:43.920
and therefore should have a regulatory agency just as other things that are dangerous to the public
link |
00:07:48.480
have a regulatory agency. But let me tell you, the problem with this is that the government
link |
00:07:54.240
moves very slowly. And the way a regulatory agency usually comes into being
link |
00:08:01.920
is that something terrible happens. There's a huge public outcry. And years after that,
link |
00:08:09.680
there's a regulatory agency or a rule put in place. Take something like seatbelts:
link |
00:08:15.120
it was known for a decade or more that seatbelts would have a massive impact on safety and save so
link |
00:08:25.840
many lives and prevent serious injuries. And the car industry fought the requirement to put seatbelts in
link |
00:08:32.000
tooth and nail. That's crazy. Yeah. And hundreds of thousands of people probably died because of that.
link |
00:08:41.040
And they said people wouldn't buy cars if they had seatbelts, which is obviously absurd.
link |
00:08:45.680
Yeah, or look at the tobacco industry and how long they fought anything about smoking. That's part
link |
00:08:51.920
of why I helped make that movie, Thank You for Smoking. You can sort of see just how pernicious
link |
00:08:58.320
it can be when you have these companies effectively achieve regulatory capture of government, which is bad.
link |
00:09:11.280
People in the community refer to the advent of digital superintelligence as a singularity.
link |
00:09:17.040
That is not to say that it is good or bad, but that it is very difficult to predict what will
link |
00:09:23.680
happen after that point. And then there's some probability it will be bad, some probability it'll
link |
00:09:28.480
be good. We obviously want to affect that probability and have it be more good than bad.
link |
00:09:35.920
Well, let me ask about the merger with AI question and the incredible work that's being done at Neuralink.
link |
00:09:40.960
There's a lot of fascinating innovation here across different disciplines going on. So the flexible
link |
00:09:47.280
wires, the robotic sewing machine, that responsive brain movement, everything around ensuring safety
link |
00:09:52.960
and so on. So we currently understand very little about the human brain. Do you also hope that the
link |
00:10:02.560
work at Neuralink will help us understand more about the human brain?
link |
00:10:07.840
Yeah, I think the work in Neuralink will definitely shed a lot of insight into how the brain, the mind
link |
00:10:13.840
works. Right now, just the data we have regarding how the brain works is very limited. You know,
link |
00:10:20.640
we've got fMRI, which is kind of like putting a stethoscope on the outside
link |
00:10:28.480
of a factory wall and then putting it like all over the factory wall and you can sort of hear
link |
00:10:33.200
the sounds, but you don't know what the machines are doing, really. It's hard. You can infer a few
link |
00:10:38.720
things, but it's very broad brushstroke. In order to really know what's going on in the brain,
link |
00:10:43.200
you really have to have high-precision sensors. And then you want to have stimulus and
link |
00:10:47.600
response. Like if you trigger a neuron, how do you feel? What do you see? How does it change
link |
00:10:53.280
your perception of the world? You're speaking to physically just getting close to the brain,
link |
00:10:57.200
being able to measure signals, how do you know what's going on in the brain?
link |
00:11:00.400
Physically, just getting close to the brain, being able to measure signals from the brain
link |
00:11:04.160
will, sort of, open the door inside the factory.
link |
00:11:08.480
Yes, exactly. Being able to have high precision sensors that tell you what individual neurons
link |
00:11:15.280
are doing. And then being able to trigger a neuron and see what the response is in the brain.
link |
00:11:22.000
So you can see the consequences: if you fire this neuron, what happens? How do you feel? What
link |
00:11:28.720
does it change? It'll be really profound to have this in people because people can articulate
link |
00:11:35.520
their change. Like if there's a change in mood, or if they can tell you if they can see better,
link |
00:11:43.040
or hear better, or be able to form sentences better or worse, or their memories are jogged,
link |
00:11:51.040
or that kind of thing. So on the human side, there's this incredible general malleability,
link |
00:11:56.880
plasticity of the human brain, the human brain adapts, adjusts, and so on.
link |
00:12:01.040
It's not that plastic, to be totally frank.
link |
00:12:03.200
So there's a firm structure, but nevertheless, there's some plasticity. And the open question is,
link |
00:12:09.040
sort of, if I could ask a broad question, how much of that plasticity can be utilized. Sort of,
link |
00:12:15.120
on the human side, there's some plasticity in the human brain. And on the machine side,
link |
00:12:20.560
we have neural networks, machine learning, artificial intelligence, it's able to adjust
link |
00:12:26.640
and figure out signals. So there's a mysterious language that we don't perfectly understand
link |
00:12:31.760
that's within the human brain. And then we're trying to understand that language to communicate
link |
00:12:37.120
both directions. So the brain is adjusting a little bit, we don't know how much, and the
link |
00:12:42.160
machine is adjusting. Where do you see, as they try to sort of reach together, almost like with
link |
00:12:48.080
an alien species, try to find a protocol, communication protocol that works? Where do
link |
00:12:53.600
you see the biggest benefit arriving from, the machine side or the human side? Do you
link |
00:12:59.360
see both of them working together? I think the machine side is far more malleable than the
link |
00:13:03.680
biological side, by a huge amount. So it'll be the machine that adapts to the brain. That's the only
link |
00:13:12.480
thing that's possible. The brain can't adapt that well to the machine. You can't have neurons start
link |
00:13:19.120
to regard an electrode as another neuron, because neurons just, there's like the pulse. And so
link |
00:13:24.960
something else is pulsing. So there is that elasticity in the interface, which we believe is
link |
00:13:32.320
something that can happen. But the vast majority of the malleability will have to be on the machine
link |
00:13:37.520
side. But it's interesting, when you look at that synaptic plasticity at the interface side,
link |
00:13:43.680
there might be like an emergent plasticity. Because it's a whole nother, it's not like in the
link |
00:13:48.560
brain, it's a whole nother extension of the brain. You know, we might have to redefine what it means
link |
00:13:53.840
to be malleable for the brain. So maybe the brain is able to adjust to external interfaces. There
link |
00:13:59.440
will be some adjustments to the brain, because there's going to be something reading and stimulating
link |
00:14:03.680
the brain. And so it will adjust to that thing. But the vast majority of the adjustment
link |
00:14:12.400
will be on the machine side. It just has to be that way, otherwise it will not
link |
00:14:18.720
work. Ultimately, we currently operate on two layers: we have sort of a limbic, like
link |
00:14:23.440
primitive brain layer, which is where all of our kind of impulses are coming from. It's sort of
link |
00:14:29.680
like we've got a monkey brain with a computer stuck on it. That's the
link |
00:14:34.720
human brain. And a lot of our impulses and everything are driven by the monkey brain.
link |
00:14:39.360
And the computer, the cortex is constantly trying to make the monkey brain happy.
link |
00:14:44.720
It's not the cortex that's steering the monkey brain; the monkey brain is steering the cortex.
link |
00:14:51.040
You know, the cortex is the part that tells the story of the whole thing. So we convince ourselves
link |
00:14:56.000
it's more interesting than just the monkey brain. The cortex is what we call human
link |
00:15:01.360
intelligence. You know, it's just like, that's like the advanced computer relative to other
link |
00:15:05.280
creatures. The other creatures do not have it, really; they don't have the
link |
00:15:11.840
computer, or they have a very weak computer relative to humans. But it sort
link |
00:15:19.840
of seems like surely the really smart thing should control the dumb thing. But actually,
link |
00:15:24.880
the dumb thing controls the smart thing. So do you think some of the same kind of machine learning
link |
00:15:30.160
methods, whether that's natural language processing applications, are going to be applied for the
link |
00:15:35.920
communication between the machine and the brain to learn how to do certain things like movement
link |
00:15:43.040
of the body, how to process visual stimuli, and so on. Do you see the value of using machine
link |
00:15:50.320
learning to understand the language of the two way communication with the brain? Sure. Yeah,
link |
00:15:55.440
absolutely. I mean, we're a neural net. And, you know, AI is basically a neural net.
link |
00:16:02.800
So it's like a digital neural net will interface with a biological neural net.
link |
00:16:08.160
And hopefully bring us along for the ride. Yeah. But the vast majority of our intelligence will be
link |
00:16:14.320
digital. Think of the difference in intelligence between your cortex
link |
00:16:23.120
and your limbic system is gigantic, your limbic system really has no comprehension of what the
link |
00:16:29.840
hell the cortex is doing. It's just literally hungry, you know, or tired or angry or sexy or
link |
00:16:40.240
something, you know. And then that communicates that impulse to the cortex and
link |
00:16:47.600
tells the cortex to go satisfy that. Then a great deal, a massive amount of thinking,
link |
00:16:54.480
a truly stupendous amount of thinking, has gone into sex without purpose,
link |
00:17:00.960
without procreation. Which is actually quite a silly action in the absence of procreation. It's
link |
00:17:11.440
a bit silly. Why are you doing it? Because it makes the limbic system happy. That's why. That's why.
link |
00:17:17.840
But it's pretty absurd, really. Well, the whole of existence is pretty absurd in some kind of sense.
link |
00:17:24.880
Yeah. But I mean, a lot of computation has gone into, how can I do more of that with
link |
00:17:32.160
procreation not even being a factor? This is, I think, a very important area of research, by NSFW,
link |
00:17:40.160
an agency that should receive a lot of funding, especially after this conversation.
link |
00:17:44.160
I propose the formation of a new agency. Oh, boy.
link |
00:17:48.480
What is the most exciting or some of the most exciting things that you see in the future impact
link |
00:17:53.520
of Neuralink, both in the science, the engineering and societal broad impact?
link |
00:17:59.120
Neuralink, I think, at first will solve a lot of brain related diseases. So it could be anything
link |
00:18:05.600
from like autism, schizophrenia, memory loss, like everyone experiences memory loss at certain points
link |
00:18:11.600
in age. Parents can't remember their kids' names and that kind of thing. So there's a tremendous
link |
00:18:24.400
amount of good that Neuralink can do in solving critical damage to the brain or the spinal cord.
link |
00:18:34.480
There's a lot that can be done to improve quality of life of individuals. And those will be steps
link |
00:18:40.720
to address the existential risk associated with digital superintelligence. Like we will not be
link |
00:18:48.240
able to be smarter than a digital supercomputer. So therefore, if you cannot beat them, join them.
link |
00:18:58.240
And at least we'll have that option.
link |
00:19:01.520
So you have hope that Neuralink will be able to be a kind of connection to allow us to merge,
link |
00:19:09.200
to ride the wave of the improving AI systems. I think the chance is above zero percent.
link |
00:19:15.600
So it's non-zero. There's a chance. Have you seen Dumb and Dumber?
link |
00:19:21.920
Yes. So I'm saying there's a chance. He's saying one in a billion or one in a million,
link |
00:19:26.400
whatever it was, in Dumb and Dumber. You know, it went from maybe one in a million to improving.
link |
00:19:31.120
Maybe it'll be one in a thousand and then one in a hundred, then one in ten. Depends on the rate
link |
00:19:35.040
of improvement of Neuralink and how fast we're able to make progress.
link |
00:19:41.040
Well, I've talked to a few folks here that are quite brilliant engineers, so I'm excited.
link |
00:19:45.440
Yeah, I think it's like fundamentally good, you know,
link |
00:19:48.400
giving somebody back full motor control after they've had a spinal cord injury.
link |
00:19:53.840
You know, restoring brain functionality after a stroke,
link |
00:19:57.920
solving debilitating genetically oriented brain diseases. These are all incredibly
link |
00:20:02.160
great, I think. And in order to do these, you have to be able to interface with neurons at
link |
00:20:07.440
a detailed level and you need to be able to fire the right neurons, read the right neurons, and
link |
00:20:13.200
then effectively you can create a circuit, replace what's broken
link |
00:20:19.760
with silicon and essentially fill in the missing functionality. And then over time,
link |
00:20:26.000
we can develop a tertiary layer. So if like the limbic system is the primary layer, then the
link |
00:20:31.120
cortex is like the second layer. And as I said, obviously the cortex is vastly more intelligent
link |
00:20:36.320
than the limbic system, but people generally like the fact that they have a limbic system
link |
00:20:40.080
and a cortex. I haven't met anyone who wants to delete either one of them. They're like,
link |
00:20:44.480
okay, I'll keep them both. That's cool. The limbic system is kind of fun.
link |
00:20:47.440
That's where the fun is, absolutely. And then people generally don't want to lose their
link |
00:20:53.360
cortex either. They like having the cortex and the limbic system. And then there's a tertiary
link |
00:20:59.360
layer, which will be digital superintelligence. And I think there's room for optimism given that
link |
00:21:05.520
the cortex is very intelligent and the limbic system is not, and yet they work together
link |
00:21:11.760
well. Perhaps there can be a tertiary layer where digital superintelligence lies, and that will be
link |
00:21:18.560
vastly more intelligent than the cortex, but still coexist peacefully and in a benign manner with the
link |
00:21:24.880
cortex and limbic system. That's a super exciting future, both in low level engineering that I saw
link |
00:21:30.320
being done here and the actual possibility in the next few decades. It's important that
link |
00:21:36.080
Neuralink solve this problem sooner rather than later, because the point at which we have digital
link |
00:21:40.880
superintelligence, that's when we pass the singularity and things become just very uncertain.
link |
00:21:45.440
It doesn't mean that they're necessarily bad or good, but at the point at which we pass the singularity,
link |
00:21:48.640
things become extremely unstable. So we want to have a human brain interface before the singularity,
link |
00:21:55.440
or at least not long after it, to minimize existential risk for humanity and consciousness
link |
00:22:01.360
as we know it. So there's a lot of fascinating actual engineering, low level problems here at
link |
00:22:07.200
Neuralink that are quite exciting. The problems that we face in Neuralink are material science,
link |
00:22:15.600
electrical engineering, software, mechanical engineering, microfabrication. It's a bunch of
link |
00:22:22.560
engineering disciplines, essentially. That's what it comes down to, is you have to have a
link |
00:22:26.080
tiny electrode, so small it doesn't hurt neurons, but it's got to last for as long as a person. So
link |
00:22:35.520
it's going to last for decades. And then you've got to take that signal, you've got to process
link |
00:22:40.880
that signal locally at low power. So we need a lot of chip design engineers, because we're going to
link |
00:22:48.800
do signal processing, and do so in a very power efficient way, so that we don't heat your brain
link |
00:22:56.320
up, because the brain is very heat sensitive. And then we've got to take those signals and
link |
00:23:01.040
we're going to do something with them. And then we've got to stimulate back, so it's bidirectional
link |
00:23:10.080
communication. So if somebody's good at material science, software,
link |
00:23:15.360
mechanical engineering, electrical
link |
00:23:20.880
engineering, chip design, microfabrication, those are the things we need to work on.
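As a rough illustration of the kind of low-power, on-implant signal processing described above, detecting spikes from an electrode before anything leaves the chip, here is a minimal, hypothetical sketch. The function, threshold rule, and toy data are illustrative assumptions, not Neuralink's actual pipeline.

```python
# Hypothetical sketch: threshold-based spike detection on a raw electrode trace.
# Illustrative only; a real implant does far more sophisticated, power-optimized
# on-chip processing.
import numpy as np

def detect_spikes(signal, sample_rate_hz, threshold_sigma=4.5, refractory_ms=1.0):
    """Return sample indices where the trace crosses a noise-scaled threshold."""
    # Robust noise estimate via the median absolute deviation.
    noise = np.median(np.abs(signal)) / 0.6745
    threshold = threshold_sigma * noise
    refractory_samples = int(refractory_ms * 1e-3 * sample_rate_hz)

    spikes = []
    last_spike = -refractory_samples
    for i, value in enumerate(signal):
        # Extracellular spikes typically show up as sharp negative deflections.
        if value < -threshold and (i - last_spike) >= refractory_samples:
            spikes.append(i)
            last_spike = i
    return spikes

# Toy usage: white noise with three injected "spikes".
rng = np.random.default_rng(0)
trace = rng.normal(0.0, 1.0, 20_000)
for t in (5_000, 12_000, 17_500):
    trace[t] -= 12.0
print(detect_spikes(trace, sample_rate_hz=20_000))  # expected: the three injected indices
```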
link |
00:23:27.520
We need to be good at material science, so that we can have tiny electrodes that last a long time.
link |
00:23:32.080
And the material science problem is a tough one, because
link |
00:23:35.760
you're trying to read and stimulate electrically in an electrically active area. Your brain is
link |
00:23:43.680
very electrically active and electrochemically active. So how do you have a coating on the
link |
00:23:49.520
electrode that doesn't dissolve over time and is safe in the brain? This is a very hard problem.
link |
00:23:59.040
And then how do you collect those signals in a way that is most efficient? Because you really
link |
00:24:06.880
just have very tiny amounts of power to process those signals. And then we need to automate the
link |
00:24:12.720
whole thing so it's like LASIK. If this is done by neurosurgeons, there's no way it can scale to
link |
00:24:20.960
a large number of people. And it needs to scale to a large number of people, because I think
link |
00:24:24.800
ultimately we want the future to be determined by a large number of humans. Do you think that
link |
00:24:32.720
this has a chance to revolutionize surgery period? So neurosurgery and surgery all across?
link |
00:24:39.040
Yeah, for sure. It's got to be like LASIK. If LASIK had to be done by hand by a person,
link |
00:24:45.680
that wouldn't be great. It's done by a robot. And the ophthalmologist kind of just needs to make
link |
00:24:54.320
sure your head's in the right position, and then they just press a button and go.
link |
00:25:00.000
Smart Summon, and soon Autopark, take on the full beautiful mess of parking lots and their human
link |
00:25:05.920
to human nonverbal communication. I think it has actually the potential to have a profound impact
link |
00:25:13.680
in changing how our civilization looks at AI and robotics, because this is the first time human
link |
00:25:19.440
beings, people that don't own a Tesla may have never seen a Tesla or heard about a Tesla,
link |
00:25:24.080
get to watch hundreds of thousands of cars without a driver. Do you see it this way, almost like an
link |
00:25:30.880
education tool for the world about AI? Do you feel the burden of that, the excitement of that,
link |
00:25:36.080
or do you just think it's a smart parking feature? I do think you are getting at something
link |
00:25:42.160
important, which is most people have never really seen a robot. And what is the car that is
link |
00:25:47.680
autonomous? It's a four wheeled robot. Yeah, it communicates a certain sort of message with
link |
00:25:53.200
everything from safety to the possibility of what AI could bring to its current limitations,
link |
00:25:59.520
its current challenges, what's possible. Do you feel the burden of that, almost like a
link |
00:26:04.000
communicator and educator to the world about AI? We were just really trying to make people's
link |
00:26:09.600
lives easier with autonomy. But now that you mentioned it, I think it will be an eye opener
link |
00:26:15.040
to people about robotics, because most people have really never seen a robot. And
link |
00:26:20.960
there are hundreds of thousands of Teslas; it won't be long before there's a million of them that
link |
00:26:25.440
have autonomous capability and drive without a person in it. And you can see the kind of
link |
00:26:31.760
evolution of the car's personality and thinking with each iteration of Autopilot,
link |
00:26:40.080
you can see it's uncertain about this, or it gets it, but now it's more certain. Now it's
link |
00:26:47.600
moving in a slightly different way. Like, I can tell immediately if a car is on Tesla autopilot,
link |
00:26:53.200
because it's got just little nuances of movement, it just moves in a slightly different way.
link |
00:26:58.720
Cars on Tesla autopilot, for example, on the highway are far more precise about being in the
link |
00:27:02.960
center of the lane than a person. If you drive down the highway and look at where the
link |
00:27:08.960
human-driven cars are within their lane, they're like bumper cars. They're moving all
link |
00:27:13.840
over the place. The car in autopilot, dead center. Yeah, so the incredible work that's going into
link |
00:27:20.720
that neural network, it's learning fast. Autonomy is still very, very hard. We don't actually know
link |
00:27:27.040
how hard it is fully, of course. You look at most problems you tackle, this one included,
link |
00:27:34.880
with an exponential lens, but even with an exponential improvement, things can take longer
link |
00:27:39.520
than expected sometimes. So where does Tesla currently stand on its quest for full autonomy?
link |
00:27:47.840
What's your sense? When can we see successful deployment of full autonomy?
link |
00:27:55.840
Well, on the highway already, the probability of intervention is extremely low.
link |
00:28:00.160
Yes. So for highway autonomy, with the latest release, especially the probability of needing
link |
00:28:08.480
to intervene is really quite low. In fact, I'd say for stop and go traffic,
link |
00:28:13.200
it's far safer than a person right now. The probability of an injury or impact is much,
link |
00:28:18.880
much lower for Autopilot than a person. And then with Navigate on Autopilot, you can change lanes,
link |
00:28:25.360
take highway interchanges, and then we're coming at it from the other direction, which is low speed,
link |
00:28:30.320
full autonomy. And in a way, this is like, how does a person learn to drive? You learn to drive
link |
00:28:35.920
in the parking lot. You know, the first time you learn to drive probably wasn't jumping on
link |
00:28:40.720
a busy street in San Francisco. That'd be crazy. You learn to drive in the parking lot,
link |
00:28:45.200
get things right at low speed. And then the missing piece that we're working on is traffic
link |
00:28:52.400
lights and stop streets. Stop streets, I would say, are actually also relatively easy, because, you know,
link |
00:28:59.200
you kind of know where the stop street is, worst case it's geocoded, and then use visualization to
link |
00:29:04.320
see where the line is and stop at the line to eliminate the GPS error. So actually, I'd say it's
link |
00:29:10.720
probably complex traffic lights and very windy roads are the two things that need to get solved.
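As a rough illustration of the stop-street logic described above, here is a minimal, hypothetical sketch: treat the geocoded stop location as a coarse prior and override it with a vision-detected stop line so the GPS error does not decide where the car stops. The names and numbers are illustrative assumptions, not Tesla's implementation.

```python
# Hypothetical sketch: use a geocoded stop location as a coarse prior, then stop
# at the vision-detected painted line when one is found, so GPS/geocoding error
# does not determine where the car actually stops. Illustrative assumptions only.
from dataclasses import dataclass
from typing import Optional

@dataclass
class StopEstimate:
    distance_m: float  # distance ahead at which to stop
    source: str        # "map" or "vision"

def choose_stop_point(map_stop_distance_m: float,
                      vision_line_distance_m: Optional[float],
                      gps_error_m: float = 5.0) -> StopEstimate:
    """Pick where to stop for an upcoming stop street / stop sign."""
    if vision_line_distance_m is not None:
        # The painted line seen by the camera removes the geocoding error.
        return StopEstimate(vision_line_distance_m, "vision")
    # No line detected yet: fall back to the map prior, padded conservatively
    # by the assumed GPS error so the car never overshoots the intersection.
    return StopEstimate(max(map_stop_distance_m - gps_error_m, 0.0), "map")

# Toy usage: the map says the stop is 42 m ahead; the camera later sees the line at 38.5 m.
print(choose_stop_point(42.0, None))   # StopEstimate(distance_m=37.0, source='map')
print(choose_stop_point(42.0, 38.5))   # StopEstimate(distance_m=38.5, source='vision')
```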
link |
00:29:19.680
What's harder, perception or control for these problems? So being able to perfectly perceive
link |
00:29:24.000
everything, or figuring out a plan once you perceive everything, how to interact with all the
link |
00:29:29.600
agents in the environment in your sense, from a learning perspective, is perception or action
link |
00:29:35.440
harder? And that giant, beautiful multitask learning neural network? The hardest thing is
link |
00:29:42.240
having an accurate representation of the physical objects in vector space. So, taking the
link |
00:29:48.960
visual input, primarily visual input, some sonar and radar, and then creating an accurate
link |
00:29:56.880
vector space representation of the objects around you. Once you have an accurate vector space
link |
00:30:02.400
representation, the planning and control is relatively easier. That is relatively easy.
link |
00:30:08.160
Basically, once you have accurate vector space representation, then you're kind of like a video
link |
00:30:14.560
game. Like cars in Grand Theft Auto or something, they work pretty well. They drive
link |
00:30:19.600
down the road, they don't crash, you know, pretty much unless you crash into them. That's because
link |
00:30:24.160
they've got an accurate vector space representation of where the cars are, and they're
link |
00:30:27.360
just rendering that as the output. Do you have a sense, high level, that
link |
00:30:33.520
Tesla's on track to be able to achieve full autonomy? So on the highway? Yeah, absolutely.
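As a rough illustration of why planning gets easier once surrounding objects are represented in an explicit vector space, as described above, here is a minimal, hypothetical sketch: a toy list of tracked objects and a simple headway rule over them. The data structures and thresholds are illustrative assumptions, not Tesla's planner.

```python
# Hypothetical sketch: once objects live in an explicit vector space (position,
# lane offset, speed), a planner can reason about them with simple geometry,
# much like game AI does. Made-up data structures; not Tesla's planner.
from dataclasses import dataclass
from typing import List

@dataclass
class TrackedObject:
    x_m: float        # longitudinal distance ahead of us (meters)
    y_m: float        # lateral offset from our lane center (meters)
    speed_mps: float  # object speed along the road (meters/second)

def plan_speed(ego_speed_mps: float, objects: List[TrackedObject],
               lane_half_width_m: float = 1.8, min_gap_s: float = 2.0) -> float:
    """Pick a target speed keeping at least `min_gap_s` of headway to anything in our lane."""
    target = ego_speed_mps
    for obj in objects:
        in_our_lane = abs(obj.y_m) < lane_half_width_m and obj.x_m > 0.0
        if not in_our_lane:
            continue
        closing_speed = ego_speed_mps - obj.speed_mps
        if closing_speed <= 0.0:
            continue  # not closing in on this object
        if obj.x_m / closing_speed < min_gap_s:
            # Crude but safe rule: match the slower lead object's speed.
            target = min(target, obj.speed_mps)
    return target

# Toy usage: a slower car 15 m ahead in our lane, another car one lane over.
scene = [TrackedObject(15.0, 0.2, 20.0), TrackedObject(12.0, 3.5, 25.0)]
print(plan_speed(ego_speed_mps=30.0, objects=scene))  # -> 20.0
```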
link |
00:30:42.000
And still no driver state, driver sensing? We have driver sensing with torque on the wheel.
link |
00:30:48.320
That's right. Yeah. By the way, just a quick comment on karaoke. Most people think it's fun,
link |
00:30:55.120
but I also think it is a driving feature. I've been saying for a long time, singing in the car
link |
00:30:59.040
is really good for attention management and vigilance management. That's right.
link |
00:31:02.720
Tesla karaoke is great. It's one of the most fun features of the car. Do you think of a connection
link |
00:31:08.480
between fun and safety sometimes? Yeah, you can do both at the same time. That's great.
link |
00:31:12.640
I just met with Ann Druyan, the wife of Carl Sagan, who directed Cosmos. I'm generally a big fan of Carl
link |
00:31:19.760
Sagan. He's super cool. And had a great way of putting things. All of our consciousness,
link |
00:31:25.280
all civilization, everything we've ever known and done is on this tiny blue dot.
link |
00:31:29.920
People also get too trapped in these, like, squabbles amongst humans,
link |
00:31:34.720
and don't think of the big picture. They take civilization and our continued existence for
link |
00:31:39.680
granted. They shouldn't do that. Look at the history of civilizations: they rise and they fall. And now
link |
00:31:47.760
civilization is all globalized. And so civilization, I think, now rises and falls together.
link |
00:31:56.480
There's not geographic isolation. This is a big risk. Things don't always go up. That
link |
00:32:05.120
should be, that's an important lesson of history. In 1990, at the request of Carl Sagan, the Voyager
link |
00:32:12.720
One spacecraft, which is a spacecraft that's reaching out farther than anything human made
link |
00:32:18.560
into space, turned around to take a picture of Earth. And that's
link |
00:32:24.720
a picture of Earth from 3.7 billion miles away. And as you're talking about the pale blue dot,
link |
00:32:31.520
the Earth takes up less than a single pixel in that image. Yes. Appearing as a tiny
link |
00:32:37.600
blue dot, as a pale blue dot, as Carl Sagan called it. So he spoke about this dot of ours in 1994.
link |
00:32:46.640
And if you could humor me, I was wondering if in the last two minutes you could read the words
link |
00:32:54.160
that he wrote describing this pale blue dot. Sure. Yes, it's funny. The universe appears to be 13.8
link |
00:33:01.520
billion years old. Earth is like four and a half billion years old.
link |
00:33:07.920
In another half billion years or so, the sun will expand and probably evaporate the oceans and make
link |
00:33:14.320
life impossible on Earth, which means that if it had taken consciousness 10% longer to evolve,
link |
00:33:19.200
it would never have evolved at all. It's 10% longer. And I wonder how many dead, one-planet
link |
00:33:29.680
civilizations there are out there in the cosmos
link |
00:33:31.520
that never made it to another planet and ultimately extinguished themselves or were destroyed
link |
00:33:35.200
by external factors. Probably a few. It's only just possible to travel to Mars. Just barely.
link |
00:33:46.640
If G was 10% more, it wouldn't work really.
link |
00:33:50.080
If G was 10% lower, it would be easy. Like you can go single stage from the surface of Mars all the
link |
00:34:00.240
way to the surface of the Earth. Because Mars is 37% Earth's gravity. We need a giant booster
link |
00:34:08.240
to get off the Earth. Channeling Carl Sagan: Look again at that dot. That's here. That's home. That's us.
link |
00:34:25.360
On it, everyone you love, everyone you know, everyone you've ever heard of, every human being
link |
00:34:30.960
who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident
link |
00:34:37.600
religions, ideologies and economic doctrines, every hunter and forager, every hero and coward,
link |
00:34:42.960
every creator and destroyer of civilization, every king and peasant, every young couple in love,
link |
00:34:49.840
every mother and father, hopeful child, inventor and explorer, every teacher of morals, every
link |
00:34:57.760
corrupt politician, every superstar, every supreme leader, every saint and sinner in the history of
link |
00:35:06.400
our species lived there, on a mote of dust suspended in a sunbeam. Our planet is a lonely speck in the
link |
00:35:13.840
great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help
link |
00:35:20.640
will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbor
link |
00:35:25.680
life. There is nowhere else, at least in the near future, to which our species could migrate. This
link |
00:35:32.080
is not true. This is false. Mars. And I think Carl Sagan would agree with that. He couldn't even
link |
00:35:39.840
imagine it at that time. So thank you for making the world dream. And thank you for talking today.
link |
00:35:45.760
I really appreciate it. Thank you.