
Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49



link |
00:00:00.000
The following is a conversation with Elon Musk, part two. The second time we spoke on the podcast
link |
00:00:07.280
with parallels, if not in quality, then in outfit, to the objectively speaking greatest
link |
00:00:13.120
sequel of all time, Godfather Part II. As many people know, Elon Musk is the leader of Tesla,
link |
00:00:20.720
SpaceX, Neuralink, and the Boring Company. What may be less known is that he's a world class
link |
00:00:26.880
engineer and designer, constantly emphasizing first principles thinking and taking on big
link |
00:00:32.480
engineering problems that many before him considered impossible. As scientists and engineers,
link |
00:00:39.600
most of us don't question the way things are done, we simply follow the momentum of the crowd.
link |
00:00:44.880
But revolutionary ideas that change the world on the small and large scales happen
link |
00:00:50.480
when you return to the fundamentals and ask, is there a better way? This conversation focuses
link |
00:00:57.600
on the incredible engineering and innovation done in brain computer interfaces at Neuralink.
link |
00:01:04.160
This work promises to help treat neurobiological diseases, to help us further understand the
link |
00:01:09.520
connection between the individual neuron and the high level function of the human brain.
link |
00:01:14.480
And finally, to one day expand the capacity of the brain through two way communication
link |
00:01:20.320
with computational devices, the internet, and artificial intelligence systems.
link |
00:01:25.440
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
link |
00:01:31.040
Apple Podcasts, Spotify, support it on Patreon, or simply connect with me on Twitter,
link |
00:01:36.400
at Lex Fridman, spelled F R I D M A N. And now, as an anonymous YouTube commenter
link |
00:01:43.040
referred to our previous conversation as the, quote, historical first video of two robots
link |
00:01:48.480
conversing without supervision. Here's the second time, the second conversation with Elon Musk.
link |
00:01:57.840
Let's start with an easy question about consciousness. In your view, is consciousness
link |
00:02:03.120
something that's unique to humans, or is it something that permeates all matter,
link |
00:02:07.120
almost like a fundamental force of physics? I don't think consciousness permeates all matter.
link |
00:02:12.240
Panpsychists believe that. Yeah, that's a philosophical view. How would you tell?
link |
00:02:19.680
That's true. That's a good point. I believe in the scientific method. I don't
link |
00:02:22.880
want to blow your mind or anything, but the scientific method is, like, if you cannot test the hypothesis,
link |
00:02:26.480
then you cannot reach a meaningful conclusion that it is true. Do you think consciousness,
link |
00:02:31.760
understanding consciousness is within the reach of science of the scientific method?
link |
00:02:36.880
We can dramatically improve our understanding of consciousness. I would be hard pressed to
link |
00:02:43.760
say that we understand anything with complete accuracy, but can we dramatically improve our
link |
00:02:49.360
understanding of consciousness? I believe the answer is yes.
link |
00:02:53.360
Does an AI system, in your view, have to have consciousness in order to achieve human level
link |
00:02:58.480
or superhuman level intelligence? Does it need to have some of these human qualities,
link |
00:03:03.120
that is, consciousness, maybe a body, maybe a fear of mortality, a capacity to love,
link |
00:03:10.160
those kinds of silly human things?
link |
00:03:16.160
It's different. There's this scientific method, which I very much believe in,
link |
00:03:20.880
where something is true to the degree that it is testably so.
link |
00:03:25.200
Otherwise, you're really just talking about preferences or untestable beliefs or that kind
link |
00:03:37.360
of thing. It ends up being somewhat of a semantic question, where we are conflating
link |
00:03:45.120
a lot of things with the word intelligence. If we parse them out and say,
link |
00:03:48.800
are we headed towards a future where an AI will be able to outthink us in every way,
link |
00:04:01.520
then the answer is unequivocally yes.
link |
00:04:05.600
In order for an AI system to outthink us in every way, does it also need to have
link |
00:04:11.600
a capacity for consciousness, self awareness, and understanding?
link |
00:04:16.560
It will be self aware, yes. That's different from consciousness.
link |
00:04:21.120
I mean, to me, in terms of what consciousness feels like, it feels like consciousness is in
link |
00:04:25.040
a different dimension. But this could be just an illusion. If you damage your brain in some way,
link |
00:04:34.320
physically, you damage your consciousness, which implies that consciousness is a physical
link |
00:04:40.480
phenomenon, in my view. The thing that I think is really quite likely is that digital
link |
00:04:48.640
intelligence will outthink us in every way, and it will soon be able to simulate what we consider
link |
00:04:55.120
consciousness. So to a degree that you would not be able to tell the difference.
link |
00:04:59.520
And from the aspect of the scientific method, it might as well be consciousness if we can
link |
00:05:04.320
simulate it perfectly. If you can't tell the difference, and this is sort of the Turing test,
link |
00:05:10.640
but think of a more sort of advanced version of the Turing test. If you're talking to
link |
00:05:17.680
digital superintelligence and can't tell if that is a computer or a human, like let's say you're
link |
00:05:23.600
just having a conversation over a phone or a video conference or something where you think
link |
00:05:30.480
you're talking to a person, it looks like a person, makes all of the right inflections and movements and all the
link |
00:05:37.680
small subtleties that constitute a human, and talks like a human, makes mistakes like a human.
link |
00:05:47.200
And you literally just can't tell. Are you video conferencing with a person or an AI?
link |
00:05:53.440
Might as well. Might as well be human. So on a darker topic, you've expressed serious concern
link |
00:06:02.000
about existential threats of AI. It's perhaps one of the greatest challenges our civilization faces.
link |
00:06:09.440
But since I would say we're kind of optimistic descendants of apes, perhaps we can find several
link |
00:06:15.600
paths of escaping the harm of AI. So if I can give you three options, maybe you can comment,
link |
00:06:21.520
which do you think is the most promising? So one is scaling up efforts on AI safety and
link |
00:06:27.360
beneficial AI research in hope of finding an algorithmic or maybe a policy solution.
link |
00:06:34.160
Two is becoming a multi planetary species as quickly as possible. And three is merging with AI
link |
00:06:41.280
and riding the wave of that increasing intelligence as it continuously improves.
link |
00:06:47.200
What do you think is the most promising, most interesting as a civilization that we should
link |
00:06:51.440
invest in? I think there's a lot of investment going on in AI. Where there's a lack of investment
link |
00:06:59.280
is in AI safety. And there should be, in my view, a government agency that oversees
link |
00:07:07.040
anything related to AI to confirm that it does not represent a public safety risk,
link |
00:07:12.400
just as there is a regulatory authority like the Food and Drug Administration,
link |
00:07:17.600
there's NHTSA for automotive safety, there's the FAA for aircraft safety.
link |
00:07:24.160
We generally come to the conclusion that it is important to have a government referee, or a
link |
00:07:28.880
referee that is serving the public interest in ensuring that things are safe when there's a
link |
00:07:35.920
potential danger to the public. I would argue that AI is unequivocally something that has
link |
00:07:43.440
potential to be dangerous to the public and therefore should have a regulatory agency just as
link |
00:07:48.160
other things that are dangerous to the public have a regulatory agency. But let me tell you a
link |
00:07:52.960
problem with this is that the government moves very slowly, and the usual
link |
00:08:00.000
way a regulatory agency comes into being is that something terrible happens. There's a huge public
link |
00:08:07.920
outcry and years after that, there's a regulatory agency or a rule put in place. Take something
link |
00:08:14.960
like seat belts. It was known for a decade or more that seat belts would have a massive impact on
link |
00:08:24.480
safety and save so many lives and prevent so many serious injuries. The car industry fought the requirement to put
link |
00:08:31.840
seat belts in tooth and nail. That's crazy. Hundreds of thousands of people probably died
link |
00:08:40.000
because of that. They said people wouldn't buy cars if they had seat belts, which is obviously
link |
00:08:44.880
absurd. Or look at the tobacco industry and how long they fought anything about smoking.
link |
00:08:51.600
That's part of why I helped make that movie, Thank You for Smoking. You can sort of see just
link |
00:08:58.800
how pernicious it can be when you have these companies effectively
link |
00:09:06.320
achieve regulatory capture of government. That's bad. People in the AI community refer to the advent of
link |
00:09:14.960
digital superintelligence as a singularity. That is not to say that it is good or bad,
link |
00:09:22.480
but that it is very difficult to predict what will happen after that point. There's some
link |
00:09:28.880
probability it will be bad, some probability it will be good. We obviously want to affect that
link |
00:09:33.600
probability and have it be more good than bad. Well, let me ask about the merger with AI question and
link |
00:09:41.280
the incredible work that's being done at Neuralink. There's a lot of fascinating innovation here
link |
00:09:46.640
across different disciplines going on. The flexible wires, the robotic sewing machine,
link |
00:09:52.880
the responsive brain movement, everything around ensuring safety and so on. We currently
link |
00:10:00.560
understand very little about the human brain. Do you also hope that the work at Neuralink will
link |
00:10:07.360
help us understand more about the human mind, about the brain? Yeah, I think the work at Neuralink
link |
00:10:14.240
will definitely shed a lot of insight into how the brain and the mind work. Right now,
link |
00:10:20.640
just the data we have regarding how the brain works is very limited. We've got fMRI, which
link |
00:10:29.040
is kind of like putting a stethoscope on the outside of a factory wall and then putting
link |
00:10:35.600
it like all over the factory wall and you can sort of hear the sounds, but you don't know what
link |
00:10:39.360
the machines are doing really. It's hard. You can infer a few things, but it's a very broad
link |
00:10:44.640
brushstroke. In order to really know what's going on in the brain, you have to have
link |
00:10:49.040
high precision sensors, and then you want to have stimulus and response. Like if you trigger a neuron,
link |
00:10:54.880
how do you feel? What do you see? How does it change your perception of the world?
link |
00:10:59.440
You're speaking to physically just getting close to the brain; being able to measure signals from
link |
00:11:03.280
the brain will sort of open the door inside the factory. Yes, exactly. Being able to
link |
00:11:10.240
have high precision sensors that tell you what individual neurons are doing and then being
link |
00:11:17.440
able to trigger a neuron and see what the response is in the brain, so you can see the consequences:
link |
00:11:25.600
if you fire this neuron, what happens? How do you feel? What does it change? It'll be really
link |
00:11:31.120
profound to have this in people because people can articulate their change. Like if there's a
link |
00:11:38.160
change in mood or if they can tell you if they can see better or hear better or
link |
00:11:45.840
be able to form sentences better or worse or their memories are jogged or that kind of thing.
link |
00:11:52.720
So on the human side, there's this incredible general malleability,
link |
00:11:56.800
plasticity of the human brain. The human brain adapts, adjusts and so on. It's not that
link |
00:12:01.680
plastic, to be totally frank. So there's a firm structure, but there is some plasticity, and the
link |
00:12:07.840
open question is sort of if I could ask a broad question is how much that plasticity can be
link |
00:12:13.760
utilized? Sort of on the human side, there's some plasticity in the human brain and on the machine
link |
00:12:20.160
side, we have neural networks, machine learning, artificial intelligence that's able to adjust
link |
00:12:27.200
and figure out signals. So there's a mysterious language that we don't perfectly understand
link |
00:12:32.320
that's within the human brain and then we're trying to understand that language to communicate
link |
00:12:37.680
both directions. So the brain is adjusting a little bit, we don't know how much and the
link |
00:12:42.720
machine is adjusting. Where do you see as they try to sort of reach together almost like with an
link |
00:12:48.800
alien species, try to find a protocol, communication protocol that works? Where do you see the biggest
link |
00:12:56.320
benefit arriving from, the machine side or the human side? Do you see both of them
link |
00:13:00.720
working together? I should think the machine side is far more malleable than the biological side
link |
00:13:06.000
by a huge amount. So it will be the machine that adapts to the brain. That's the only thing
link |
00:13:13.280
that's possible. The brain can't adapt that well to the machine. You can't have neurons start to
link |
00:13:19.280
regard an electrode as another neuron, because to a neuron it's just, there's a pulse, and so
link |
00:13:24.960
something else is pulsing. So there is that plasticity in the interface, which we believe
link |
00:13:31.440
is something that can happen but the vast majority of malleability will have to be on the machine
link |
00:13:37.600
side. But it's interesting when you look at that synaptic plasticity at the interface side, there
link |
00:13:43.840
might be like an emergent plasticity, because it's a whole nother thing. It's not like in the brain. It's a
link |
00:13:49.040
whole nother extension of the brain. You know, we might have to redefine what it means to be
link |
00:13:54.800
malleable for the brain. So maybe the brain is able to adjust to external interfaces.
link |
00:13:59.280
There will be some adjustments to the brain because there's going to be something
link |
00:14:02.320
reading and stimulating the brain, and so it will adjust to that thing. But
link |
00:14:10.880
most, the vast majority, of the adjustment will be on the machine side.
link |
00:14:15.520
It just has to be that, otherwise it will not work. Ultimately, we currently
link |
00:14:20.720
operate on two layers. We have sort of a limbic, like primitive, brain layer,
link |
00:14:25.920
which is where all of our kind of impulses are coming from. It's sort of like we've got
link |
00:14:30.640
like a monkey brain with a computer stuck on it. That's the human brain. And a lot
link |
00:14:36.240
of our impulses and everything are driven by the monkey brain. And the computer, the cortex,
link |
00:14:42.160
is constantly trying to make the monkey brain happy. It's not the cortex that's steering the
link |
00:14:47.200
monkey brain. It's the monkey brain steering the cortex. But the cortex is the part that tells
link |
00:14:53.360
the story of the whole thing. So we convince ourselves it's more interesting than just the
link |
00:14:58.160
monkey brain. The cortex is like what we call like human intelligence, you know, so it's like
link |
00:15:02.800
that's like the advanced computer relative to other creatures. Other creatures do not have
link |
00:15:08.160
it, really; they don't have the computer, or they have a very weak computer relative to humans.
link |
00:15:17.200
But it's like, it sort of seems like surely the really smart thing should control
link |
00:15:23.200
the dumb thing. But actually, the dumb thing controls the smart thing. So do you think some
link |
00:15:28.800
of the same kind of machine learning methods, or whether that's natural language processing
link |
00:15:33.520
applications are going to be applied for the communication between the machine and the brain
link |
00:15:40.640
to learn how to do certain things like movement of the body, how to process visual stimuli and so
link |
00:15:46.560
on. Do you see the value of using machine learning to understand the language of the two way
link |
00:15:53.440
communication with the brain? Yeah, absolutely. I mean, we're a neural net. And, you know,
link |
00:16:00.880
AI is basically a neural net. So it's like a digital neural net will interface with a biological neural
link |
00:16:05.760
net. And hopefully bring us along for the ride. Yeah. But the vast majority of our
link |
00:16:12.160
intelligence will be digital. Like, the difference in
link |
00:16:20.880
intelligence between your cortex and your limbic system is gigantic. Your limbic system really
link |
00:16:26.800
has no comprehension of what the hell the cortex is doing. It's just literally hungry, you know,
link |
00:16:35.680
or tired or angry or sexy or something, you know. And then it communicates
link |
00:16:44.960
that impulse to the cortex and tells the cortex to go satisfy that. Then a great deal,
link |
00:16:51.840
like a massive amount of thinking, like a truly stupendous amount of thinking, has gone into sex
link |
00:16:57.600
without purpose, without procreation, which is actually quite
link |
00:17:06.800
a silly action in the absence of procreation. It's a bit silly. Well, why are you doing it?
link |
00:17:14.480
Because it makes the limbic system happy. That's why that's why. But it's pretty absurd, really.
link |
00:17:21.280
Well, the whole of existence is pretty absurd in some kind of sense.
link |
00:17:24.880
Yeah. But I mean, this is a lot of computation has gone into how can I do more of that
link |
00:17:31.920
with procreation not even being a factor. This is, I think, a very important area of research, by NSFW,
link |
00:17:40.240
an agency that should receive a lot of funding, especially after this conversation.
link |
00:17:44.080
If I propose the formation of a new agency... Oh, boy.
link |
00:17:48.480
What is the most exciting or some of the most exciting things that you see in the future impact
link |
00:17:57.040
of Neuralink, both in the science, the engineering and societal broad impact?
link |
00:18:02.720
So Neuralink, I think, at first will solve a lot of brain related diseases. So
link |
00:18:08.560
anything from, like, autism, schizophrenia, memory loss. Everyone experiences memory loss at
link |
00:18:14.560
certain points in age. Parents can't remember their kids' names and that kind of thing.
link |
00:18:19.280
So there's a tremendous amount of good that Neuralink can do in solving critical
link |
00:18:27.840
damage to the brain or the spinal cord. There's a lot that can be done to improve quality of life
link |
00:18:33.760
of individuals. And those will be steps along the way. And then ultimately,
link |
00:18:38.240
it's intended to address the risk, the existential risk, associated with
link |
00:18:45.120
digital superintelligence. Like we will not be able to be smarter than a digital supercomputer.
link |
00:18:54.400
So therefore, if you cannot beat them, join them. And at least we'll have that option.
link |
00:18:59.280
So you have hope that Neuralink will be able to be a kind of connection to allow us to
link |
00:19:08.480
merge to ride the wave of the improving AI systems.
link |
00:19:12.640
I think the chance is above zero percent.
link |
00:19:15.600
So it's non zero. There's a chance. And that's
link |
00:19:18.960
So, have you seen Dumb and Dumber?
link |
00:19:21.920
Yes.
link |
00:19:22.560
So I'm saying there's a chance.
link |
00:19:24.160
He's saying one in a billion or one in a million, whatever it was in Dumb and Dumber.
link |
00:19:28.240
You know, it went from maybe one in a million, and it's improving. Maybe it'll be one in a thousand
link |
00:19:32.320
and then one in a hundred, then one in ten. Depends on the rate of improvement of Neuralink
link |
00:19:37.200
and how fast we're able to make progress, you know.
link |
00:19:41.040
Well, I've talked to a few folks here that are quite brilliant engineers. So I'm excited.
link |
00:19:45.440
Yeah. I think it's like fundamentally good, you know,
link |
00:19:48.160
you're giving somebody back full motor control after they've had a spinal cord injury,
link |
00:19:52.400
you know, restoring brain functionality after a stroke, solving
link |
00:19:58.320
debilitating, genetically oriented brain diseases. These are all incredibly great, I think.
link |
00:20:03.760
And in order to do these, you have to be able to interface with neurons at a detailed level, and you
link |
00:20:08.720
need to be able to fire the right neurons, read the right neurons, and then effectively you can
link |
00:20:14.800
create a circuit, replace what's broken with silicon, and essentially fill in the missing
link |
00:20:23.200
functionality. And then over time, we can develop a tertiary layer. So if like the limbic system
link |
00:20:32.400
is the primary layer, then the cortex is like the second layer. And as I said, you know,
link |
00:20:37.280
obviously the cortex is vastly more intelligent than the limbic system. But people generally
link |
00:20:41.280
like the fact that they have a limbic system and a cortex. I haven't met anyone who wants to
link |
00:20:44.880
delete either one of them. They're like, okay, I'll keep them both. That's cool.
link |
00:20:48.640
The limbic system is kind of fun.
link |
00:20:50.240
Yeah, that's where the fun is. Absolutely. And then people generally don't want to lose the
link |
00:20:55.920
cortex either, right? So they like having the cortex and the limbic system.
link |
00:21:00.560
And then there's a tertiary layer, which will be digital superintelligence.
link |
00:21:04.400
And I think there's room for optimism given that the cortex is very intelligent and the limbic
link |
00:21:12.640
system is not. And yet they work together well. Perhaps there can be a tertiary layer
link |
00:21:18.320
where digital superintelligence lies. And that will be vastly more intelligent than the cortex,
link |
00:21:23.280
but still coexist peacefully and in a benign manner with the cortex and limbic system.
link |
00:21:29.280
That's a super exciting future, both in the low level engineering that I saw as being done here
link |
00:21:34.080
and the actual possibility in the next few decades.
link |
00:21:38.000
It's important that Neuralink solve this problem sooner rather than later, because
link |
00:21:42.240
the point at which we have digital superintelligence, that's when we pass singularity.
link |
00:21:46.480
And things become just very uncertain. It doesn't mean that they're necessarily bad or good.
link |
00:21:49.840
But the point at which we pass singularity, things become extremely unstable.
link |
00:21:53.440
So we want to have a human brain interface before the singularity, or at least not long after it,
link |
00:21:59.760
to minimize existential risk for humanity and consciousness as we know it.
link |
00:22:06.080
But there's a lot of fascinating, actual engineering, low level problems here at
link |
00:22:10.080
Neuralink that are quite exciting. The problems that we face at Neuralink are
link |
00:22:17.280
material science, electrical engineering, software, mechanical engineering, micro fabrication.
link |
00:22:23.680
It's a bunch of engineering disciplines, essentially. That's what it comes down to,
link |
00:22:28.080
is you have to have a tiny electrode. It's so small it doesn't hurt neurons,
link |
00:22:36.880
but it's got to last for as long as a person. So it's got to last for decades.
link |
00:22:41.760
And then you've got to take that signal, you've got to process that signal locally at low power.
link |
00:22:48.160
So we need a lot of chip design engineers, because we've got to do signal processing.
link |
00:22:55.040
And do so in a very power efficient way, so that we don't heat your brain up,
link |
00:23:01.840
because the brain's very heat sensitive. And then we've got to take those signals,
link |
00:23:05.520
we've got to do something with them, and then we've got to stimulate back,
link |
00:23:12.320
so you can have bidirectional communication. So if somebody's good at material science,
link |
00:23:18.000
software, mechanical engineering, electrical engineering, chip design, micro fabrication,
link |
00:23:23.680
those are the things we need to work on. We need to be good at material science,
link |
00:23:28.800
so that we can have tiny electrodes that last a long time. And the
link |
00:23:33.600
material science problem is a tough one, because you're trying to read and stimulate
link |
00:23:38.080
electrically in an electrically active area. Your brain is very electrically active and
link |
00:23:44.800
electrochemically active. So how do you have, say, a coating on the electrode that doesn't dissolve
link |
00:23:51.360
over time, and is safe in the brain? This is a very hard problem. And then how do you
link |
00:24:01.680
collect those signals in a way that is most efficient, because you really just have very
link |
00:24:07.440
tiny amounts of power to process those signals. And then we need to automate the whole thing,
link |
00:24:13.040
so it's like Lasik. If this is done by neurosurgeons, there's no way it can scale to
link |
00:24:21.040
a large number of people. And it needs to scale to a large number of people, because I think
link |
00:24:24.880
ultimately we want the future to be determined by a large number of humans.
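As a concrete illustration of the on-implant signal processing constraint described above (detect spikes locally, at very low power, before anything leaves the device), here is a minimal toy sketch in Python. It is not Neuralink's pipeline; the threshold rule, sampling rate, and function names are assumptions made purely for illustration.

import numpy as np

def detect_spikes(samples, sampling_rate_hz=20000):
    # Toy threshold-crossing spike detector for one electrode channel.
    # A real implant would use far more power-aware logic (fixed point math,
    # on-chip comparators), but the principle is the same: reduce a raw
    # voltage stream to a sparse list of spike times before transmitting.
    noise_sigma = np.median(np.abs(samples)) / 0.6745    # robust noise estimate
    threshold = -4.0 * noise_sigma                        # assumed: spikes appear as negative deflections
    below = samples < threshold
    onsets = np.flatnonzero(below & ~np.roll(below, 1))  # first sample of each crossing
    return onsets / sampling_rate_hz                      # spike times in seconds

# Example: one second of synthetic noise with three injected "spikes".
rng = np.random.default_rng(0)
signal = rng.normal(0.0, 1.0, 20000)
for idx in (5000, 9000, 15000):
    signal[idx:idx + 10] -= 8.0
print(detect_spikes(signal))   # roughly [0.25, 0.45, 0.75]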
link |
00:24:32.080
Do you think that this has a chance to revolutionize surgery period, so neurosurgery and surgery
link |
00:24:38.240
all across? Yeah, for sure. It's got to be like Lasik. If Lasik had to be
link |
00:24:43.680
done by hand, by a person, that wouldn't be great. It's done by a robot. And the ophthalmologist
link |
00:24:53.440
kind of just needs to make sure your head's in the right position, and then they just press the button and go.
link |
00:24:59.760
So Smart Summon, and soon Autopark, take on the full beautiful mess of parking lots and their
link |
00:25:05.680
human to human nonverbal communication. I think it has actually the potential to have a profound
link |
00:25:13.280
impact in changing how our civilization looks at AI and robotics, because this is the first time
link |
00:25:19.120
human beings, people that don't own a Tesla may have never seen a Tesla, heard about a Tesla,
link |
00:25:24.080
get to watch hundreds of thousands of cars without a driver. Do you see it this way almost like an
link |
00:25:30.880
education tool for the world about AI? Do you feel the burden of that, the excitement of that,
link |
00:25:36.080
or do you just think it's a smart parking feature? I do think you are getting at something
link |
00:25:42.160
important, which is most people have never really seen a robot. And what is the car that is autonomous?
link |
00:25:48.160
It's a four wheeled robot. Yeah, it communicates a certain sort of message with everything from
link |
00:25:54.240
safety to the possibility of what AI could bring to its current limitations, its current challenges,
link |
00:26:00.720
it's what's possible. Do you feel the burden of that almost like a communicator, educator to the
link |
00:26:05.440
world about AI? We're just really trying to make people's lives easier with autonomy.
link |
00:26:11.680
But now that you mention it, I think it will be an eye opener to people about robotics,
link |
00:26:16.320
because most people have never seen a robot, and there are hundreds of thousands of Teslas,
link |
00:26:23.520
won't be long before there's a million of them that have autonomous capability
link |
00:26:27.680
and can drive without a person in it. And you can see the kind of evolution of the car's
link |
00:26:32.960
personality and thinking with each iteration of autopilot. You can see it's uncertain about this,
link |
00:26:43.120
but now it's more certain. Now it's moving in a slightly different way.
link |
00:26:50.000
Like I can tell immediately if a car is on Tesla Autopilot, because it's got just little nuances
link |
00:26:54.960
of movement, it just moves in a slightly different way. Cars on Tesla Autopilot,
link |
00:26:59.920
for example, on the highway are far more precise about being in the center of the lane
link |
00:27:03.760
than a person. If you drive down the highway and look at where cars are, the human driven cars,
link |
00:27:10.640
are within their lane, they're like bumper cars. They're like moving all over the place.
link |
00:27:14.800
The car on autopilot, dead center. Yeah, so the incredible work that's going into that
link |
00:27:20.880
neural network is learning fast. Autonomy is still very, very hard. We don't actually know
link |
00:27:27.040
how hard it is fully, of course. You look at most problems you tackle, this one included
link |
00:27:34.880
with an exponential lens. But even with an exponential improvement, things can take longer
link |
00:27:39.440
than expected sometimes. So where does Tesla currently stand on its quest for full autonomy?
link |
00:27:48.320
What's your sense? When can we see successful deployment of full autonomy?
link |
00:27:54.720
Well, on the highway already, the probability of intervention is extremely low. So for highway
link |
00:28:02.720
autonomy, with the latest release, especially the probability of needing to intervene
link |
00:28:09.840
is really quite low. In fact, I'd say for stop and go traffic, it's far safer than a person
link |
00:28:17.440
right now. The probability of an injury or an impact is much, much lower for autopilot than a
link |
00:28:22.160
person. And then with Navigate on Autopilot, it can change lanes, take highway interchanges,
link |
00:28:28.960
and then we're coming at it from the other direction, which is low speed, full autonomy.
link |
00:28:33.760
And in a way, it's like, how does a person learn to drive? You learn to drive in
link |
00:28:37.760
the parking lot. You know, the first time you learn to drive probably wasn't jumping on
link |
00:28:42.640
Market Street in San Francisco. That'd be crazy. You learn to drive in the parking lot, get things
link |
00:28:47.040
right at low speed. And then the missing piece that we're working on is traffic lights and
link |
00:28:54.960
stop streets. Stop streets, I would say, are actually also relatively easy because you kind
link |
00:29:01.280
of know where the stop street is, worst case it's geocoded, and then use visualization to see
link |
00:29:06.240
where the line is and stop at the line to eliminate the GPS error. So actually, I'd say
link |
00:29:12.080
there's probably complex traffic lights and very windy roads are the two things that need to get
link |
00:29:21.120
solved. What's harder, perception or control for these problems? So being able to perfectly
link |
00:29:25.360
perceive everything or figuring out a plan once you perceive everything, how to interact with
link |
00:29:31.040
all the agents in the environment, in your sense, from a learning perspective, is perception or
link |
00:29:36.720
action harder, in that giant, beautiful multitask learning neural network? The hardest thing is
link |
00:29:44.000
having accurate representation of the physical objects in vector space. So taking the visual
link |
00:29:51.040
input, primarily visual input, some sonar and radar, and then creating an accurate vector
link |
00:29:59.120
space representation of the objects around you. Once you have an accurate vector space representation,
link |
00:30:05.440
the plan and control is relatively easier. Basically, once you have accurate vector space
link |
00:30:13.360
representation, then you're kind of like a video game. Cars in like Grand Theft Auto or something,
link |
00:30:19.520
they work pretty well. They drive down the road, they don't crash pretty much unless you crash
link |
00:30:24.400
into them. That's because they've got an accurate vector space representation of where the cars
link |
00:30:28.800
are, and then they're rendering that as the output. Do you have a sense high level that Tesla's on
link |
00:30:36.400
track on being able to achieve full autonomy? So on the highway? Yeah, absolutely. And still no
link |
00:30:44.720
driver state, driver sensing? We have driver sensing with the torque on the wheel. That's right.
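To make the earlier point about vector space representations concrete: once perception has reduced the scene to objects with positions and velocities, planning can operate directly on those vectors rather than on raw pixels. This is a deliberately simplified toy sketch, not Tesla's planner; the class names, lane width, and time gap below are assumptions for illustration.

from dataclasses import dataclass

@dataclass
class TrackedObject:
    x: float   # meters ahead of the ego car (longitudinal)
    y: float   # meters of lateral offset, positive to the left
    vx: float  # relative longitudinal speed in m/s

def plan_speed(objects, target_speed=25.0, min_gap_s=2.0, lane_half_width=1.5):
    # Toy longitudinal planner: keep a time gap to the nearest in-lane object,
    # otherwise track the target speed. It only ever sees the vector space
    # output of perception, never the camera images themselves.
    in_lane = [o for o in objects if abs(o.y) < lane_half_width and o.x > 0.0]
    if not in_lane:
        return target_speed
    lead = min(in_lane, key=lambda o: o.x)
    gap_limited = lead.x / min_gap_s   # speed at which the gap equals the desired time gap
    return min(target_speed, gap_limited)

# Example: a lead car 30 m ahead, roughly centered in our lane, caps us at 15 m/s.
print(plan_speed([TrackedObject(x=30.0, y=0.2, vx=-1.0)]))  # 15.0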
link |
00:30:52.240
By the way, just a quick comment on karaoke. Most people think it's fun, but I also think it is
link |
00:30:58.400
a driving feature. I've been saying for a long time, singing in the car is really good for
link |
00:31:01.920
attention management and vigilance management. That's great. Tesla karaoke is great. It's one
link |
00:31:07.760
of the most fun features of the car. Do you think of a connection between fun and safety sometimes?
link |
00:31:12.480
Yeah, you can do both at the same time. That's great. I just met with Ann Druyan, the wife of Carl
link |
00:31:18.800
Sagan, the director of Cosmos. I'm generally a big fan of Carl Sagan. He's super cool and had
link |
00:31:25.760
a great way of doing things. All of our consciousness, all civilization, everything we've ever known
link |
00:31:30.800
and done is on this tiny blue dot. People also get too trapped in these, like,
link |
00:31:36.560
squabbles amongst humans, and don't think of the big picture. They take civilization
link |
00:31:42.480
and our continued existence for granted. We shouldn't do that. Look at the history of civilizations.
link |
00:31:48.160
They rise and they fall. And now civilization is all, it's globalized. And so civilization,
link |
00:31:57.200
I think now rises and falls together. There's not geographic isolation. This is a big risk.
link |
00:32:06.640
Things don't always go up. That should be, that's an important lesson of history.
link |
00:32:10.800
In 1990, at the request of Carl Sagan, the Voyager 1 spacecraft, which is a spacecraft that's
link |
00:32:19.680
reaching out farther than anything human made into space, turned around to take a picture of Earth
link |
00:32:25.360
from 3.7 billion miles away. And as you're talking about the pale blue dot, that picture,
link |
00:32:32.160
the Earth takes up less than a single pixel in that image. Yes.
link |
00:32:35.520
Of course. Appearing as a tiny blue dot, a pale blue dot, as Carl Sagan called it.
link |
00:32:42.320
So he spoke about this dot of ours in 1994. And if you could humor me, I was wondering if
link |
00:32:51.920
in the last two minutes you could read the words that he wrote describing this pale blue dot.
link |
00:32:57.760
Sure. Yes, it's funny, the universe appears to be 13.8 billion years old. Earth is like 4.5
link |
00:33:08.160
billion years old. In another half billion years or so, the sun will expand and probably evaporate
link |
00:33:16.800
the oceans and make life impossible on Earth. Which means that if it had taken consciousness
link |
00:33:22.560
10% longer to evolve, it would never have evolved at all. It's just 10% longer.
link |
00:33:30.640
And I wonder how many dead, one-planet civilizations there are out there in the cosmos
link |
00:33:37.760
that never made it to the other planet and ultimately extinguished themselves or were
link |
00:33:41.120
destroyed by external factors. Probably a few. It's only just possible to travel to Mars,
link |
00:33:51.600
just barely. If G was 10% more, it wouldn't work, really.
link |
00:33:59.360
If G was 10% lower, it would be easy.
link |
00:34:03.760
Like you can go single stage from surface of Mars all the way to surface of the Earth.
link |
00:34:07.520
Because Mars is 37% Earth's gravity.
link |
00:34:12.880
We need a giant booster to get off Earth.
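A rough back-of-envelope with the Tsiolkovsky rocket equation shows why the difference in surface gravity, and hence in the delta-v needed to reach orbit, matters so much. The delta-v budgets and the assumed engine Isp below are approximate illustrative numbers, not figures from the conversation.

import math

def mass_ratio(delta_v_m_s, isp_s):
    # Tsiolkovsky rocket equation, solved for the required mass ratio m0/mf:
    # delta_v = isp * g0 * ln(m0 / mf)
    g0 = 9.81  # m/s^2, standard gravity used in the definition of Isp
    return math.exp(delta_v_m_s / (isp_s * g0))

ISP = 380.0  # seconds, assumed vacuum Isp for a methane/oxygen engine

# Approximate delta-v to low orbit, including gravity and drag losses:
print(mass_ratio(9400, ISP))  # Earth surface to orbit, ~12.4: nearly all propellant, single stage is marginal
print(mass_ratio(4100, ISP))  # Mars surface to orbit, ~3.0: comfortably single stage, with margin to go further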
link |
00:34:14.560
Look again at that dot. That's here. That's home. That's us. On it, everyone you love,
link |
00:34:28.960
everyone you know, everyone you've ever heard of, every human being who ever was, lived out their
link |
00:34:34.960
lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies,
link |
00:34:40.720
and economic doctrines. Every hunter and forager, every hero and coward, every creator and destroyer
link |
00:34:46.240
of civilization. Every king and peasant, every young couple in love, every mother and father,
link |
00:34:53.600
hopeful child, inventor and explorer. Every teacher of morals, every corrupt politician,
link |
00:35:01.280
every superstar, every supreme leader, every saint and sinner in the history of our species
link |
00:35:08.800
lived there, on a mote of dust suspended in a sunbeam. Our planet is a lonely speck in the great
link |
00:35:16.000
enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will
link |
00:35:22.640
come from elsewhere to save us from ourselves. The Earth is the only world known so far to
link |
00:35:27.280
harbor life. There is nowhere else, at least in the near future, to which our species could migrate.
link |
00:35:33.760
This is not true. This is false. Mars. And I think Carl Sagan would agree with that. He
link |
00:35:41.120
couldn't even imagine it at that time. So thank you for making the world dream.
link |
00:35:46.400
And thank you for talking today. I really appreciate it. Thank you.