Elon Musk: Neuralink, AI, Autopilot, and the Pale Blue Dot | Lex Fridman Podcast #49

The following is a conversation with Elon Musk, part two. The second time we spoke on the podcast, with parallels, if not in quality, then in outfit, to the objectively speaking greatest sequel of all time, Godfather Part II. As many people know, Elon Musk is the leader of Tesla, SpaceX, Neuralink, and the Boring Company. What may be less known is that he's a world-class engineer and designer, constantly emphasizing first-principles thinking and taking on big engineering problems that many before him considered impossible. As scientists and engineers, most of us don't question the way things are done; we simply follow the momentum of the crowd. But revolutionary ideas that change the world on the small and large scales happen when you return to the fundamentals and ask, is there a better way?

This conversation focuses on the incredible engineering and innovation done in brain-computer interfaces at Neuralink. This work promises to help treat neurobiological diseases, to help us further understand the connection between the individual neuron and the high-level function of the human brain, and finally, to one day expand the capacity of the brain through two-way communication with computational devices, the internet, and artificial intelligence systems.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, Apple Podcasts, Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. And now, as an anonymous YouTube commenter referred to our previous conversation, quote, the "historic first video of two robots conversing without supervision," here's the second time, the second conversation with Elon Musk.
Let's start with an easy question about consciousness. In your view, is consciousness something that's unique to humans, or is it something that permeates all matter, almost like a fundamental force of physics?

I don't think consciousness permeates all matter.

Panpsychists believe that.

Yeah. There's a philosophical... How would you tell?

That's true. That's a good point. I believe in the scientific method. I don't want to blow your mind or anything, but the scientific method is: if you cannot test the hypothesis, then you cannot reach a meaningful conclusion that it is true.

Do you think understanding consciousness is within the reach of science, of the scientific method?

We can dramatically improve our understanding of consciousness. I would be hard-pressed to say that we understand anything with complete accuracy, but can we dramatically improve our understanding of consciousness? I believe the answer is yes.
Does an AI system, in your view, have to have consciousness in order to achieve human-level or superhuman-level intelligence? Does it need to have some of these human qualities: consciousness, maybe a body, maybe a fear of mortality, capacity to love, those kinds of silly human things?

It's different. There's this scientific method, which I very much believe in, where something is true to the degree that it is testably so. Otherwise, you're really just talking about preferences or untestable beliefs or that kind of thing. It ends up being somewhat of a semantic question, where we are conflating a lot of things with the word intelligence. If we parse them out and say, are we headed towards a future where an AI will be able to outthink us in every way, then the answer is unequivocally yes.
In order for an AI system to outthink us in every way, does it also need to have a capacity for consciousness, self-awareness, and understanding?

It will be self-aware, yes. That's different from consciousness. I mean, to me, in terms of what consciousness feels like, it feels like consciousness is in a different dimension. But this could be just an illusion. If you damage your brain in some way physically, you damage your consciousness, which implies that consciousness is a physical phenomenon, in my view. The thing that I think is really quite likely is that digital intelligence will outthink us in every way, and it will soon be able to simulate what we consider consciousness, to a degree that you would not be able to tell the difference. And from the aspect of the scientific method, it might as well be consciousness if we can simulate it perfectly, if you can't tell the difference. This is sort of the Turing test, but think of a more advanced version of the Turing test. If you're talking to a digital superintelligence and can't tell if that is a computer or a human, let's say you're just having a conversation over a phone or a video conference where you think you're talking to a person: it makes all of the right inflections and movements and all the small subtleties that constitute a human, talks like a human, makes mistakes like a human, and you literally just can't tell. Are you video conferencing with a person or an AI?
Might as well. Might as well be human.

So on a darker topic, you've expressed serious concern about existential threats of AI. It's perhaps one of the greatest challenges our civilization faces. But since, I would say, we're kind of optimistic descendants of apes, perhaps we can find several paths of escaping the harm of AI. So if I can give you three options, maybe you can comment on which you think is the most promising. One is scaling up efforts on AI safety and beneficial AI research in hope of finding an algorithmic or maybe a policy solution. Two is becoming a multi-planetary species as quickly as possible. And three is merging with AI and riding the wave of that increasing intelligence as it continuously improves. What do you think is the most promising, most interesting, as a civilization, that we should invest in?

I think there's a lot of investment going on in AI. Where there's a lack of investment is in AI safety. And there should be, in my view, a government agency that oversees anything related to AI to confirm that it does not represent a public safety risk, just as there is a regulatory authority like the Food and Drug Administration; there's NHTSA for automotive safety; there's the FAA for aircraft safety.
We generally come to the conclusion that it is important to have a government referee, a referee that is serving the public interest in ensuring that things are safe when there's a potential danger to the public. I would argue that AI is unequivocally something that has the potential to be dangerous to the public, and therefore should have a regulatory agency, just as other things that are dangerous to the public have a regulatory agency. But let me tell you, the problem with this is that the government moves very slowly, and the usual way a regulatory agency comes into being is that something terrible happens, there's a huge public outcry, and years after that, there's a regulatory agency or a rule put in place. Take something like seat belts. It was known for a decade or more that seat belts would have a massive impact on safety and save so many lives and prevent serious injuries. The car industry fought the requirement to put seat belts in tooth and nail. That's crazy. Hundreds of thousands of people probably died because of that. They said people wouldn't buy cars if they had seat belts, which is obviously absurd. Or look at the tobacco industry and how long they fought anything about smoking. That's part of why I helped make that movie, Thank You for Smoking. You can sort of see just how pernicious it can be when you have these companies effectively achieve regulatory capture of government. That's bad. People in the AI community refer to the advent of digital superintelligence as a singularity. That is not to say that it is good or bad, but that it is very difficult to predict what will happen after that point. There's some probability it will be bad, some probability it will be good. We obviously want to affect that
probability and have it be more good than bad.

Well, let me ask, on the merger with AI question and the incredible work that's being done at Neuralink: there's a lot of fascinating innovation here across different disciplines going on. The flexible wires, the robotic sewing machine, the responsive brain movement, everything around ensuring safety, and so on. We currently understand very little about the human brain. Do you also hope that the work at Neuralink will help us understand more about the human mind, about the brain?

Yeah, I think the work at Neuralink will definitely shed a lot of light on how the brain and the mind work. Right now, the data we have regarding how the brain works is very limited. We've got fMRI, which is kind of like putting a stethoscope on the outside of a factory wall, and then putting it all over the factory wall, and you can sort of hear the sounds, but you don't really know what the machines are doing. It's hard. You can infer a few things, but it's a very broad brushstroke. In order to really know what's going on in the brain, you have to have high-precision sensors, and then you want to have stimulus and response. Like, if you trigger a neuron, how do you feel? What do you see? How does it change your perception of the world?

You're saying physically just getting close to the brain, being able to measure signals from the brain, will sort of open the door inside the factory?

Yes, exactly. Being able to have high-precision sensors that tell you what individual neurons are doing, and then being able to trigger a neuron and see what the response is in the brain, so you can see the consequences: if you fire this neuron, what happens? How do you feel? What does it change? It'll be really profound to have this in people, because people can articulate a change. Like, if there's a change in mood, or they can tell you if they can see better or hear better, or are able to form sentences better or worse, or their memories are jogged, or that kind of thing.
So on the human side, there's this incredible general malleability, plasticity of the human brain. The human brain adapts, adjusts, and so on.

It's not that plastic, to be totally frank.

So there's a firm structure, but there is some plasticity, and the open question, if I could ask a broad question, is how much that plasticity can be utilized. On the human side, there's some plasticity in the human brain, and on the machine side, we have neural networks, machine learning, artificial intelligence that's able to adjust and figure out signals. So there's a mysterious language that we don't perfectly understand that's within the human brain, and we're trying to understand that language to communicate in both directions. The brain is adjusting a little bit, we don't know how much, and the machine is adjusting. As they try to sort of reach together, almost like with an alien species trying to find a communication protocol that works, where do you see the biggest benefit arriving from: the machine side or the human side? Do you see both of them working together?

I think the machine side is far more malleable than the biological side, by a huge amount. So it will be the machine that adapts to the brain. That's the only thing that's possible. The brain can't adapt that well to the machine. You can't have neurons start to regard an electrode as another neuron, because to a neuron, it's just, like, a pulse: something else is pulsing. So there is that elasticity in the interface, which we believe is something that can happen, but the vast majority of the malleability will have to be on the machine side.

But it's interesting: when you look at that synaptic plasticity at the interface side, there might be, like, an emergent plasticity, because it's a whole other... It's not like in the brain. It's a whole other extension of the brain. We might have to redefine what it means to be malleable for the brain. So maybe the brain is able to adjust to external interfaces.

There will be some adjustment of the brain, because there's going to be something reading and stimulating the brain, and so it will adjust to that thing. But the vast majority of the adjustment will be on the machine side.
It just has to be that way; otherwise, it will not work. Ultimately, we currently operate on two layers. We have sort of a limbic, primitive brain layer, which is where all of our impulses are coming from. It's like we've got a monkey brain with a computer stuck on it. That's the human brain. A lot of our impulses and everything are driven by the monkey brain, and the computer, the cortex, is constantly trying to make the monkey brain happy. It's not the cortex that's steering the monkey brain; it's the monkey brain steering the cortex. But the cortex is the part that tells the story of the whole thing, so we convince ourselves it's more interesting than just the monkey brain. The cortex is what we call human intelligence. It's the advanced computer relative to other creatures. Other creatures either don't really have the computer, or they have a very weak computer relative to humans. It sort of seems like surely the really smart thing should control the dumb thing, but actually the dumb thing controls the smart thing.

So do you think some
of the same kinds of machine learning methods, whether that's natural language processing or other applications, are going to be applied for the communication between the machine and the brain, to learn how to do certain things, like movement of the body, how to process visual stimuli, and so on? Do you see the value of using machine learning to understand the language of the two-way communication with the brain?

Yeah, absolutely. I mean, we're a neural net, and AI is basically a neural net. So it's like a digital neural net will interface with a biological neural net, and hopefully bring us along for the ride. But the vast majority of our intelligence will be digital. Think of the difference in intelligence between your cortex and your limbic system: it's gigantic. Your limbic system really has no comprehension of what the hell the cortex is doing. It's just literally hungry, you know, or tired or angry or sexy or something, and it communicates that impulse to the cortex and tells the cortex to go satisfy that. Then a great deal of thinking, like a truly stupendous amount of thinking, has gone into sex without purpose, without procreation, which is actually quite a silly action in the absence of procreation. It's a bit silly. Well, why are you doing it? Because it makes the limbic system happy. That's why. But it's pretty absurd, really.

Well, the whole of existence is pretty absurd in some kind of sense.

Yeah. But I mean, a lot of computation has gone into "how can I do more of that," with procreation not even being a factor. This is, I think, a very important area of research. An NSFW area of research. An agency that should receive a lot of funding, especially after this conversation. If I propose the formation of a new agency...

Oh, boy.
What are some of the most exciting things that you see as the future impact of Neuralink, both in the science, the engineering, and the broad societal impact?

Neuralink, I think, at first will solve a lot of brain-related diseases, ranging from things like autism and schizophrenia to memory loss. Like, everyone experiences memory loss at certain points in age; parents can't remember their kids' names, and that kind of thing. So there's a tremendous amount of good that Neuralink can do in solving critical damage to the brain or the spinal cord. There's a lot that can be done to improve quality of life of individuals, and those will be steps along the way. And then ultimately, it's intended to address the existential risk associated with digital superintelligence. We will not be able to be smarter than a digital supercomputer. So, therefore, if you cannot beat them, join them. And at least we'll have that option.

So you have hope that Neuralink will be able to be a kind of connection to allow us to merge, to ride the wave of the improving AI systems?

I think the chance is above zero percent.

So it's non-zero. There's a chance. And that's... So, have you seen Dumb and Dumber?

So I'm saying there's a chance.

He's saying "one in a billion" or "one in a million," whatever it was in Dumb and Dumber. You know, it went from maybe one in a million to, it's improving. Maybe it'll be one in a thousand, then one in a hundred, then one in ten. It depends on the rate of improvement of Neuralink and how fast we're able to make progress, you know.
Well, I've talked to a few folks here who are quite brilliant engineers, so I'm excited.

Yeah. I think it's fundamentally good, you know. Giving somebody back full motor control after they've had a spinal cord injury, restoring brain functionality after a stroke, solving debilitating, genetically oriented brain diseases: these are all incredibly great, I think. And in order to do these, you have to be able to interface with neurons at a detailed level. You need to be able to fire the right neurons, read the right neurons, and then effectively you can create a circuit, replace what's broken with silicon, and essentially fill in the missing functionality. And then over time, we can develop a tertiary layer. So if the limbic system is the primary layer, then the cortex is the second layer. As I said, obviously the cortex is vastly more intelligent than the limbic system, but people generally like the fact that they have a limbic system and a cortex. I haven't met anyone who wants to delete either one of them. They're like, okay, I'll keep them both. That's cool.

The limbic system is kind of fun.

Yeah, that's where the fun is, absolutely. And then people generally don't want to lose the cortex either, right? So they like having the cortex and the limbic system. And then there's the tertiary layer, which will be digital superintelligence. And I think there's room for optimism, given that the cortex is very intelligent and the limbic system is not, and yet they work together well. Perhaps there can be a tertiary layer where digital superintelligence lies that will be vastly more intelligent than the cortex, but still coexist peacefully and in a benign manner with the cortex and limbic system.

That's a super exciting future, both in the low-level engineering that I saw being done here and the actual possibility in the next few decades.
It's important that Neuralink solve this problem sooner rather than later, because the point at which we have digital superintelligence, that's when we pass the singularity, and things become just very uncertain. It doesn't mean that they're necessarily bad or good, but the point at which we pass the singularity, things become extremely unstable. So we want to have a human-brain interface before the singularity, or at least not long after it, to minimize existential risk for humanity and consciousness as we know it.

But there's a lot of fascinating, actual low-level engineering problems here at Neuralink that are quite exciting.

The problems that we face in Neuralink are material science, electrical engineering, software, mechanical engineering, microfabrication. It's a bunch of engineering disciplines, essentially. That's what it comes down to. You have to have a tiny electrode, so small it doesn't hurt neurons, but it's got to last for as long as a person, so it's got to last for decades.
And then you've got to take that signal, and you've got to process that signal locally at low power. So we need a lot of chip design engineers, because we've got to do signal processing, and do so in a very power-efficient way, so that we don't heat your brain up, because the brain's very heat-sensitive. And then we've got to take those signals and do something with them, and then we've got to stimulate back, so you can have bidirectional communication. So if somebody's good at material science, software, mechanical engineering, electrical engineering, chip design, microfabrication, those are the things we need to work on. We need to be good at material science so that we can have tiny electrodes that last a long time. And the material science problem is a tough one, because you're trying to read and stimulate electrically in an electrically active area. Your brain is very electrically active and electrochemically active. So how do you have, say, a coating on the electrode that doesn't dissolve over time and is safe in the brain? This is a very hard problem. And then how do you collect those signals in a way that is most efficient? Because you really just have very tiny amounts of power to process those signals. And then we need to automate the whole thing, so it's like LASIK. If this is done by neurosurgeons, there's no way it can scale to a large number of people. And it needs to scale to a large number of people, because I think ultimately we want the future to be determined by a large number of humans.
Do you think that this has a chance to revolutionize surgery, period? So neurosurgery and surgery all across?

Yeah, for sure. It's got to be like LASIK. If LASIK had to be done by hand, by a person, that wouldn't be great. It's done by a robot, and the ophthalmologist kind of just needs to make sure your head's in the right position, and then they just press a button and go.
So Smart Summon, and soon Autopark, take on the full, beautiful mess of parking lots and their human-to-human nonverbal communication. I think it actually has the potential to have a profound impact in changing how our civilization looks at AI and robotics, because this is the first time human beings, people that don't own a Tesla and may have never seen a Tesla or heard about a Tesla, get to watch hundreds of thousands of cars without a driver. Do you see it this way, almost like an education tool for the world about AI? Do you feel the burden of that, the excitement of that, or do you just think it's a smart parking feature?

I do think you are getting at something important, which is most people have never really seen a robot. And what is a car that is autonomous? It's a four-wheeled robot.

Yeah, it communicates a certain sort of message, with everything from safety to the possibility of what AI could bring, its current limitations, its current challenges, what's possible. Do you feel the burden of that, almost like a communicator, educator, to the world about AI?

We're just really trying to make people's lives easier with autonomy. But now that you mention it, I think it will be an eye-opener to people about robotics, because most people have never seen a robot, and there are hundreds of thousands of Teslas; it won't be long before there's a million of them that have autonomous capability and can drive without a person in it. And you can see the kind of evolution of the car's personality and thinking with each iteration of Autopilot. You can see it's uncertain about this, but now it's more certain. Now it's moving in a slightly different way. Like, I can tell immediately if a car is on Tesla Autopilot, because it's got just little nuances of movement; it just moves in a slightly different way. Cars on Tesla Autopilot, for example, on the highway are far more precise about being in the center of the lane than a person. If you drive down the highway and look at where the human-driven cars are within their lane, they're like bumper cars. They're moving all over the place.
The car on Autopilot: dead center.

Yeah, so with the incredible work that's going into that neural network, it's learning fast. Autonomy is still very, very hard. We don't actually know how hard it is fully, of course. You look at most problems you tackle, this one included, with an exponential lens, but even with exponential improvement, things can take longer than expected sometimes. So where does Tesla currently stand on its quest for full autonomy? What's your sense? When can we see successful deployment of full autonomy?

Well, on the highway already, the probability of intervention is extremely low. So for highway autonomy, with the latest release especially, the probability of needing to intervene is really quite low. In fact, I'd say for stop-and-go traffic, it's far safer than a person right now. The probability of an injury or an impact is much, much lower for Autopilot than for a person. And then with Navigate on Autopilot, it can change lanes and take highway interchanges. And then we're coming at it from the other direction, which is low-speed, full autonomy. And in a way, it's like, how does a person learn to drive? You learn to drive in the parking lot. You know, the first time you learned to drive probably wasn't jumping onto Market Street in San Francisco. That'd be crazy. You learn to drive in the parking lot, get things right at low speed. And then the missing piece that we're working on is traffic lights and stop signs. Stop signs, I would say, are actually also relatively easy, because you kind of know where the stop sign is, worst case it's geocoded, and then you use visualization to see where the line is and stop at the line to eliminate the GPS error. So actually, I'd say complex traffic lights and very winding roads are the two things that need to get solved.

What's harder, perception or control, for these problems? So, being able to perfectly perceive everything, or figuring out a plan once you perceive everything, how to interact with all the agents in the environment? In your sense, from a learning perspective, is perception or action harder, in that giant, beautiful, multitask learning neural network?

The hardest thing is
having an accurate representation of the physical objects in vector space. So taking the visual input, primarily visual input, some sonar and radar, and then creating an accurate vector space representation of the objects around you. Once you have an accurate vector space representation, the planning and control is relatively easy. Basically, once you have an accurate vector space representation, then it's kind of like a video game. Cars in, like, Grand Theft Auto or something work pretty well. They drive down the road, they don't crash, pretty much, unless you crash into them. That's because they've got an accurate vector space representation of where the cars are, and they're rendering that as the output.

Do you have a sense, high level, that Tesla's on track to being able to achieve full autonomy?

On the highway? Yeah, absolutely.

And still no driver state, driver sensing?

We have driver sensing, with the torque on the wheel.

That's right.
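The "planning is easy once you have vector space" point can be made concrete with a toy sketch. Everything below, the object fields, the lane width, the slowdown rule, is a hypothetical illustration, not Tesla's actual stack:

```python
from dataclasses import dataclass

# Hypothetical "vector space" output of a perception stack: each detected
# object as a position and relative velocity in the ego vehicle's frame.
# (Field names and thresholds are illustrative only.)
@dataclass
class TrackedObject:
    x: float   # meters ahead of the ego vehicle
    y: float   # meters left (+) / right (-) of lane center
    vx: float  # velocity relative to ego, m/s (negative = we are closing)

def plan_speed(objects: list[TrackedObject], cruise: float = 30.0) -> float:
    """Toy planner: once objects live in vector space, control reduces to
    simple geometry: slow down for anything in our lane that we are
    closing on within 50 m."""
    target = cruise
    for obj in objects:
        in_lane = abs(obj.y) < 1.5   # roughly within our lane
        closing = obj.vx < 0         # we are gaining on it
        if in_lane and closing and obj.x < 50:
            target = min(target, cruise * obj.x / 50)  # scale with the gap
    return max(target, 0.0)

scene = [TrackedObject(x=25, y=0.2, vx=-5),   # lead car in our lane
         TrackedObject(x=40, y=4.0, vx=-10)]  # car in the next lane over
print(plan_speed(scene))  # 15.0: off-lane car ignored, lead car halves our speed
```

The hard part Musk describes is producing `scene` from camera pixels; given it, the control rule here is a few lines of geometry, which is the asymmetry he's pointing at.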
By the way, just a quick comment on karaoke. Most people think it's fun, but I also think it is a driving feature. I've been saying for a long time, singing in the car is really good for attention management and vigilance management.

That's great. Tesla karaoke is great. It's one of the most fun features of the car.

Do you think of a connection between fun and safety sometimes?

Yeah, you can do both at the same time. That's great.

I just met with Ann Druyan, wife of Carl Sagan, the director of Cosmos. I'm generally a big fan of Carl Sagan. He's super cool, and had a great way of doing things. All of our consciousness, all civilization, everything we've ever known and done, is on this tiny blue dot. People get too trapped in, like, squabbles amongst humans, and don't think of the big picture. They take civilization and our continued existence for granted. They shouldn't do that. Look at the history of civilizations: they rise and they fall. And now civilization is globalized, and so civilization, I think, now rises and falls together. There's no geographic isolation. This is a big risk. Things don't always go up. That's an important lesson of history.
In 1990, at the request of Carl Sagan, the Voyager 1 spacecraft, the spacecraft that has traveled farther into space than anything human-made, turned around to take a picture of Earth from 3.7 billion miles away. And as you're talking about, the pale blue dot: in that picture, the Earth takes up less than a single pixel.

Yes. Of course.

Appearing as a tiny blue dot, as a pale blue dot, as Carl Sagan called it. So he spoke about this dot of ours in 1994. And if you could humor me, I was wondering if, in the last two minutes, you could read the words that he wrote describing this pale blue dot.
Sure. Yes, it's funny, the universe appears to be 13.8 billion years old. Earth is like 4.5 billion years old. In another half billion years or so, the Sun will expand and probably evaporate the oceans and make life impossible on Earth, which means that if it had taken consciousness 10% longer to evolve, it would never have evolved at all. It's just 10% longer.
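The 10% remark can be checked with the round numbers in this exchange: roughly 4.5 billion years elapsed, roughly half a billion left, so the habitable window is about 5 billion years in total:

```python
# Round figures from the conversation, in billions of years
earth_age = 4.5          # time consciousness actually took to evolve
window_remaining = 0.5   # until the expanding Sun ends Earth's habitability
habitable_window = earth_age + window_remaining  # ~5.0 total

delayed = earth_age * 1.10            # evolution taking 10% longer
margin = habitable_window - delayed   # what would have been left
print(round(delayed, 2), round(margin, 2))  # 4.95 0.05
```

By these round numbers, a 10% slower evolution consumes essentially the entire window, leaving about 1% of it to spare; that razor-thin margin is what the remark is pointing at.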
And I wonder how many dead, one-planet civilizations are out there in the cosmos that never made it to the other planet and ultimately extinguished themselves or were destroyed by external factors. Probably a few. It's only just possible to travel to Mars, just barely. If g were 10% higher, it wouldn't work, really. If g were 10% lower, it would be easy. Like, you can go single-stage from the surface of Mars all the way to the surface of the Earth, because Mars is 37% of Earth's gravity. We need a giant booster to get off Earth.
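The gravity figure in this aside checks out against textbook values; a quick sketch using the standard gravitational constant and published planetary masses and mean radii:

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def surface_gravity(mass_kg: float, radius_m: float) -> float:
    """g = GM / r^2 at the planet's surface."""
    return G * mass_kg / radius_m**2

def escape_velocity(mass_kg: float, radius_m: float) -> float:
    """v_esc = sqrt(2GM / r)."""
    return math.sqrt(2 * G * mass_kg / radius_m)

earth = (5.972e24, 6.371e6)  # mass (kg), mean radius (m)
mars = (6.417e23, 3.390e6)

print(surface_gravity(*mars) / surface_gravity(*earth))  # ~0.38, the "37%" figure
print(escape_velocity(*mars) / 1000)    # ~5.0 km/s to escape Mars
print(escape_velocity(*earth) / 1000)   # ~11.2 km/s to escape Earth
```

Mars's surface gravity is about 38% of Earth's and its escape velocity less than half, which is why a single-stage return from Mars is plausible while leaving Earth needs a giant booster.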
Look again at that dot. That's here. That's home. That's us. On it, everyone you love, everyone you know, everyone you've ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies, and economic doctrines. Every hunter and forager, every hero and coward, every creator and destroyer of civilization. Every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer. Every teacher of morals, every corrupt politician, every superstar, every supreme leader, every saint and sinner in the history of our species lived there, on a mote of dust suspended in a sunbeam. Our planet is a lonely speck in the great enveloping cosmic dark. In our obscurity, in all this vastness, there is no hint that help will come from elsewhere to save us from ourselves. The Earth is the only world known so far to harbor life. There is nowhere else, at least in the near future, to which our species could migrate.

This is not true. This is false. Mars. And I think Carl Sagan would agree with that. He couldn't even imagine it at that time.

So thank you for making the world dream. And thank you for talking today. I really appreciate it.

Thank you.