Jeff Hawkins: The Thousand Brains Theory of Intelligence | Lex Fridman Podcast #208

The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand the structure, function, and origin of intelligence in the human brain. He previously wrote a seminal book on the subject titled On Intelligence, and recently a new book called A Thousand Brains, which presents a new theory of intelligence that Richard Dawkins, for example, has been raving about, calling the book, quote, "brilliant and exhilarating." I can't read those two words and not think of him saying it in his British accent. Quick mention of our sponsors: Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist. Check them out in the description to support this podcast.
As a side note, let me say that one small but powerful idea Jeff Hawkins mentions in his new book is that if human civilization were to destroy itself, all our knowledge and all our creations would go with us. He proposes that we should think about how to save that knowledge in a way that long outlives us, whether that's on Earth, in orbit around Earth, or in deep space, and then to send messages that advertise this backup of human knowledge to other intelligent alien civilizations. The main message of this advertisement is not that we are here, but that we were once here. This little difference somehow was deeply humbling to me: that we may, with some nonzero likelihood, destroy ourselves, and that an alien civilization thousands or millions of years from now may come across this knowledge store, and they would only with some low probability even notice it, not to mention be able to interpret it. And the deeper question here for me is: what information in all of human knowledge is even essential? Does Wikipedia capture it or not at all? This thought experiment forces me to wonder, what are the things we've accomplished and are hoping to still accomplish that will outlive us? Is it things like complex buildings, bridges, cars, rockets? Is it ideas like science, physics, and mathematics? Is it music and art? Is it computers, computational systems, or even artificial intelligence systems? I personally can't imagine that aliens wouldn't already have all of these things, and in fact much more and much better. To me, the only unique thing we may have is consciousness itself, the actual subjective experience of suffering, of happiness, of hatred, of love. If we can record these experiences in the highest resolution directly from the human brain, such that aliens will be able to replay them, that is what we should store and send as a message: not Wikipedia, but the extremes of conscious experience, the most important of which, of course, is love.

This is the Lex Fridman podcast, and here is my conversation with Jeff Hawkins.

We previously talked over two years ago. Do you think there are still neurons in your brain that remember that conversation, that remember me and got excited? There's a Lex neuron in your brain that just finally has a purpose?
I do remember our conversation, or I have some memories of it, and I've formed additional memories of you in the meantime. I wouldn't say there's a neuron or neurons in my brain that know you, but there are synapses in my brain that have formed that reflect my knowledge of you and the model I have of you in the world. Whether the exact same synapses were formed two years ago, it's hard to say, because these things come and go all the time. One thing to note about brains is that when you think of things, you often erase the memory and rewrite it again. So yes, I have a memory of you, and that's instantiated in synapses. There's a simpler way to think about it, Lex. You have a model of the world in your head, and that model is continually being updated. I updated it this morning: you offered me this water, you said it was from the refrigerator. I remember these things. The model includes where we live, the places we know, the words, the objects in the world. It's just a monstrous model, and it's constantly being updated, and people are just part of that model. So are animals, so are other physical objects, so are events we've done. So in my mind there's no special place for the memories of humans. I mean, obviously I know a lot about my wife, and friends, and so on, but it's not like there's a special place where humans are over here; we model everything, and we model other people's behaviors too. So if I said there's a copy of your mind in my mind, it's just because I've learned how humans behave, and I've learned some things about you, and that's part of my world model.

Well, I also mean the collective intelligence of the human species. I wonder if there's something fundamental to the brain that enables that, so modeling other humans with their ideas.

You're actually jumping into a lot of big topics. Collective intelligence is a separate topic that a lot of people like to talk about; we can talk about that.
But that's interesting: you know, we're not just individuals, we live in a society and so on. But from our research point of view, so again, we study the neocortex. It's a sheet of neural tissue, it's about 75% of your brain, and it runs on this very repetitive algorithm. It's a very repetitive circuit. So you can apply that algorithm to lots of different problems, but underneath it's all the same thing; we're just building this model. From our point of view, we wouldn't look for some special circuits buried someplace in your brain that might be related to understanding other humans. It's more like, how do we build a model of anything? How do we understand anything in the world? Humans are just another part of the things we understand.

So there's nothing in the brain that knows about the emergent phenomenon of collective intelligence?

Well, I certainly know about that. I've heard the terms, I've read about it.

No, but, right. As an idea.

Well, okay, right, as an idea. I think we have language, which is sort of built into our brains, and that's a key part of collective intelligence. So there are some, you know, prior assumptions about the world we're going to live in when we're born; we're not just a blank slate. And did we evolve to take advantage of those situations? Yes. But again, we study only part of the brain, the neocortex. There are other parts of the brain that are very much involved in societal interactions and human emotions, how we interact, and even societal issues about how we interact with other people, when we support them, when we're greedy, and things like that.

I mean, certainly the brain is a great place to study intelligence. I wonder if it's the fundamental atom of intelligence?

Well,
I would say it's absolutely an essential component, even if you believe in collective intelligence as, hey, that's where it's all happening, that's what we need to study. Which I don't believe, by the way; I think it's really important, but I don't think that is the thing. But even if you do believe that, then you have to understand how the brain works in doing that. It's more like: we are intelligent individuals, and together our intelligence is magnified; we can do things that we couldn't do individually. But even as individuals, we're pretty damn smart. We can model things and understand the world and interact with it. So to me, if you're going to start someplace, you need to start with the brain. Then you can say, well, how do brains interact with each other? What is the nature of language? And how do we share models? I've learned something about the world; how do I share it with you? Which is really what sort of communal intelligence is. I know something, you know something; we've had different experiences in the world. I've learned something about brains; maybe I can impart that to you. You've learned something about, you know, whatever, physics, and you can impart that to me. But it all comes down to even just the epistemological question of, well, what is knowledge, and how do you represent it in the brain? Right? That's where it's going to reside, right? Or in our writings.
It's obvious that human collaboration, human interaction, is how we build societies. But some of the things you talk about and work on, some of those elements of what makes up an intelligent entity, are there with a single person.

Absolutely. I mean, we can't deny that the brain is the core element here. At least I think it's obvious: the brain is the core element in all theories of intelligence. It's where knowledge is represented, it's where knowledge is created. We interact, we share, we build upon each other's work. But without a brain, you'd have nothing; there would be no intelligence without brains. And so that's where we start. I got into this field because I was just curious as to who I am. You know, how do I think? What's going on in my head when I'm thinking? What does it mean to know something? I can ask what it means for me to know something independent of how I learned it from you, or from someone else, or from society. What does it mean for me to know that I have a model of you in my head? What does it mean to know what this microphone does and how it works physically, even when I can't see it right now? How do I know that? What does it mean? How do the neurons do that, at the fundamental level of neurons and synapses and so on? Those are really fascinating questions, and I'd be happy just to understand those if I could.

So in your new book, you talk about our brain, our mind, as being made up of many brains. The book is called A Thousand Brains, the thousand brains theory of intelligence. What is the key idea of this book?
The book has three sections, and it has maybe three big ideas. The first section is all about what we've learned about the neocortex, and that's the thousand brains theory. Just to complete the picture, the second section is all about AI, and the third section is about the future of humanity. So the thousand brains theory: the big idea there, if I had to summarize it into one big idea, is that we think of the brain, the neocortex, as learning this model of the world. But what we learned is that there are actually tens of thousands of independent modeling systems going on. Each one, what we call a column in the cortex, and there are about 150,000 of them, is a complete modeling system. So it's a collective intelligence in your head, in some sense. The thousand brains theory asks, where do I have knowledge about, you know, this coffee cup? Where is the model of this cell phone? It's not in one place. It's in thousands of separate models that are complementary, and they communicate with each other through voting. So we feel like we're one person, you know, that's our experience, and we can explain that. But in reality, there are lots of these little brains, sophisticated modeling systems, about 150,000 of them in each human brain. And that's a totally different way of thinking about how the neocortex is structured than we or anyone else thought of even just five years ago.
So you mentioned you started this journey just looking in the mirror and trying to understand who you are. So if you have many brains, who are you then?

So it's interesting: we have a singular perception, right? You know, we think, oh, I'm just here, I'm looking at you. But it's composed of all these things. There are sounds, there's vision, there's touch and all kinds of inputs, yet we have this singular perception. And what the thousand brains theory says is that we have these models: visual models, auditory models, models of touch, and so on, but they vote. In the cortex, you can think of these columns as like little grains of rice, 150,000 stacked next to each other, and each one is its own little modeling system. But they have these long-range connections that go between them, and we call those voting connections or voting neurons. And so the different columns try to reach a consensus: what am I looking at? Each one has some ambiguity, but they come to a consensus: oh, there's a water bottle I'm looking at.

We are only consciously able to perceive the voting. We're not able to perceive anything that goes on under the hood. So the voting is what we're aware of.

The results of the vote. Yeah, the voting. Well, you can imagine it this way. We were just talking about eye movements a moment ago. As I'm looking at something, my eyes are moving about three times a second, and with each movement a completely new input is coming into the brain. It's not repetitive, it's not shifting around, it's completely new. I'm totally unaware of it. I can't perceive it. But if I looked at the neurons in your brain, they're going on and off, on and off, on and off. But the voting neurons are not. The voting neurons are saying, you know, we all agree: even though I'm looking at different parts of this, it's a water bottle right now, and that's not changing, and it's in some position and pose relative to me. So I have this perception of the water bottle about two feet away from me, at a certain pose to me. That is not changing. That's the only part I'm aware of. I can't be aware of the fact that the inputs from the eyes are moving and changing and all this other stuff is happening. So these long-range connections are the part we can be conscious of. The individual activity in each column doesn't go anywhere else. It doesn't get shared anywhere else. There's no way to extract it and talk about it, or extract it and even remember it to say, oh yes, I can recall that. But these long-range connections are the things that are accessible to language and to, you know, the hippocampus, our memories, our short-term memory systems, and so on. So we're not aware of 95%, or maybe it's even 98%, of what's going on in your brain. We're only aware of this somewhat stable voting outcome of all these things that are going on underneath the hood.
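The voting idea Hawkins describes can be sketched in a few lines of Python. This is only a toy illustration of the concept, not Numenta's actual algorithm: each "column" senses one ambiguous feature and proposes candidate objects, and the long-range "vote" is modeled as the intersection of all the columns' candidate sets. The object and feature names are invented for the example.

```python
# Toy illustration of column voting (not Numenta's actual algorithm):
# each "column" senses one ambiguous feature and proposes candidate
# objects; the long-range "vote" is modeled as the intersection of
# all columns' candidate sets. Objects and features are invented.

OBJECTS = {
    "water bottle": {"cylinder", "cap", "plastic", "label"},
    "coffee cup":   {"cylinder", "handle", "ceramic"},
    "soda can":     {"cylinder", "metal", "tab"},
}

def column_candidates(sensed_feature):
    """One column: match a single sensed feature against every known object."""
    return {name for name, feats in OBJECTS.items() if sensed_feature in feats}

def vote(sensed_features):
    """Intersect the candidate sets of all columns to reach a consensus."""
    consensus = None
    for feat in sensed_features:
        cands = column_candidates(feat)
        consensus = cands if consensus is None else consensus & cands
    return consensus

# A single column sensing "cylinder" is ambiguous (three candidates);
# adding a column that senses "cap" resolves the vote.
print(sorted(vote(["cylinder"])))
print(vote(["cylinder", "cap"]))
```

Each column alone remains ambiguous, which mirrors the point above: only the consensus that the columns converge on is stable.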
So what would you say is the basic element in the thousand brains theory of intelligence? What's the atom of intelligence, when you think about it? Is it the individual brains, and then what is a brain?

Well, can we just talk about what intelligence is first, and then we can talk about what the elements are? So in my book, intelligence is the ability to learn a model of the world: to build, internal to your head, a model that represents the structure of everything you know. To know that this is a table and that's a coffee cup and this is a gooseneck lamp, and all these things. To know these things, I have to have a model in my head. I don't just look at them and go, what is that? I already have internal representations of these things in my head, and I had to learn them. I wasn't born with any of that knowledge. You know, we have some lights in the room here; that's not part of my evolutionary heritage, right? It's not in my genes. So we have this incredible model, and the model includes not only what things look like and feel like, but where they are relative to each other and how they behave. I've never picked up this water bottle before, but I know that if I put my hand on that blue thing and turn it, it'll probably make a funny little sound as the little plastic things detach, and then it'll rotate a certain way and come off. How do I know that? Because I have this model in my head. So the essence of intelligence is our ability to learn a model, and the more sophisticated our model is, the smarter we are. Not that there is a single intelligence, because you can know a lot about things that I don't know, and I know about things you don't know, and we can both be very smart; but we both learned a model of the world through interacting with it. So that is the essence of intelligence. Then we can ask ourselves, what are the mechanisms in the brain that allow us to do that? And what are the mechanisms of learning, not just the neural mechanisms, but what is the general process by which we learn a model? So that was a big insight for us.
It's like, how do you actually learn this stuff? It turns out you have to learn it through movement. You can't learn it just by sitting still; that's how we learn, we learn through movement. You build up this model by observing things and touching them and moving them and walking around the world and so on.

So either you move or the thing moves.

Somehow, yeah. Obviously you can learn things just by reading a book, something like that, but think about it: if I were to say, here's a new house, I want you to learn it, what do you do? You have to walk from room to room. You have to open the doors, look around, see what's on the left, what's on the right. As you do this, you're building a model in your head; that's just what you're doing. You can't sit there and say, I'm going to grok the house. No. And you don't even want to just sit down and read some description of it, right? You literally physically interact with it. The same with a smartphone: if I'm going to learn a new app, I touch it and I move things around, and I see what happens when I do things with it. So that's the basic way we learn in the world.
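The learn-by-moving idea can be made concrete with a minimal sketch. This is a toy of my own construction, not the book's mechanism: exploring yields a stream of (location, feature) observations, and the accumulated pairs are the model, which can then answer what to expect at a location you revisit. The room and feature names are invented for the example.

```python
# Minimal sketch of learning through movement (a toy illustration, not
# the book's mechanism): exploring yields a stream of (location, feature)
# observations, and the accumulated pairs ARE the model. Room and
# feature names are invented for the example.

def learn_by_moving(observations):
    """observations: iterable of (location, sensed_feature) pairs."""
    model = {}
    for location, feature in observations:
        model[location] = feature   # each movement updates the model
    return model

def predict(model, location):
    """What the model expects to sense at a location (None if unknown)."""
    return model.get(location)

# Walk through a new house room by room, as in the example above.
house = learn_by_moving([
    ("hallway", "front door"),
    ("kitchen", "refrigerator"),
    ("bedroom", "lamp"),
])
print(predict(house, "kitchen"))   # 'refrigerator'
```

Without the walk, the model is empty; only the sequence of movements and sensations fills it in, which is the point of the passage above.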
And by the way, when you say model, you mean something that can be used for prediction in the future.

It's used for prediction and for behavior and planning.

Right. And it does a pretty good job at doing so.

Yeah. Here's the way to think about the model; a lot of people get hung up on this. You can imagine an architect making a model of a house, right? A physical model that's small. And why do they do that? We do that because you can imagine what it would look like from different angles: okay, look from here, look from there. And you can also ask, well, how far is it from the garage to the swimming pool, or something like that, right? You can imagine looking at it: what would be the view from this location? So we build these physical models to let you imagine the future and imagine behaviors. Now we can take that same model and put it in a computer. Today they build models of houses in a computer, and they do that using a set of, we'll come back to this term in a moment, reference frames. You assign a reference frame to the house, and you assign different things of the house to different locations, and then the computer can generate an image and say, okay, this is what it looks like in this direction. The brain is doing something remarkably similar to this. Surprisingly, it's using reference frames. It's building these models, similar to a model in a computer, which has the same benefits as building a physical model. It allows me to say, what would this thing look like if it was in this orientation? What would likely happen if I pushed this button? I've never pushed this button before. Or how would I accomplish something? Say I want to convey a new idea I've learned; how would I do that? I can imagine it in my head: well, I could talk about it, I could write a book, I could do some podcasts, I could maybe tell my neighbor, and I can imagine the outcomes of all these things before I do any of them. That's what the model lets you do: plan the future and imagine the consequences of your actions.
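The architect's-model analogy can be sketched as a toy reference frame. Everything here is illustrative: the coordinates and feature names are made up, and this is only the general idea of storing features at locations in an object's own coordinate system, not the brain's mechanism. Once features live in the house's frame, the model can answer questions, like the garage-to-pool distance, from viewpoints it has never directly experienced.

```python
import math

# Toy reference frame (purely illustrative; coordinates and feature
# names are made up): the model stores features at locations in the
# house's own coordinate system, so it can answer questions from
# viewpoints it has never directly experienced.

house = {  # feature -> (x, y) in house coordinates, meters
    "garage":        (0.0, 0.0),
    "swimming pool": (30.0, 40.0),
    "front door":    (5.0, 2.0),
}

def distance(model, a, b):
    """Distance between two features, computed inside the reference frame."""
    (x1, y1), (x2, y2) = model[a], model[b]
    return math.hypot(x2 - x1, y2 - y1)

print(distance(house, "garage", "swimming pool"))   # 50.0
```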
Prediction, you asked about prediction. Prediction is not the goal of the model; prediction is an inherent property of it, and it's how the model corrects itself.

So prediction is fundamental to intelligence.

It's fundamental to building a model, and the model's intelligent. And let me go back and be very precise about this. You can think of prediction two ways. One is like, hey, what would happen if I did this? That type of prediction is a key part of intelligence. But there's also prediction like, oh, what's this water bottle going to feel like when I pick it up? And that doesn't seem very intelligent. But one way to think about prediction is that it's a way for us to learn where our model is wrong. So if I picked up this water bottle and it felt hot, I'd be very surprised. Or if I picked it up and it was very light, I'd be surprised. Or if I turned this top and it didn't open, and I had to turn it the other way, I'd be surprised. And all of those carry a prediction: okay, I'm going to drink some water; I do this; there it is, I feel it opening, right? What if I had to turn it the other way? Or what if it split in two? Then I'd say, oh my gosh, I misunderstood this. I didn't have the right model of this thing. My attention would be drawn to it; I'd be looking at it going, well, how did that happen? Why did it open up that way? And I would update my model just by doing it, just by looking at it and playing around with it, and saying, this is a new type of water bottle.

So you're talking about somewhat complicated things like a water bottle, but this also applies to just basic vision, just seeing things. It's almost like a precondition of perceiving the world: everything that you see is first passed through your prediction.

Everything you see and feel. In fact, this is the insight I had back in the late 80s, excuse me, early 80s, and other people have reached the same idea: for every sensory input you get, not just vision, but touch and hearing, you have an expectation about it and a prediction. Sometimes you can predict very accurately; sometimes you can't. I can't predict what word is going to come out of your mouth next, but as you start talking, I make better and better predictions, and if you talked about some topics, I'd be very surprised. So I have this sort of background prediction that's going on all the time for all of my senses. Again, the way I think about that is, this is how we learn. It's more about how we learn; it's a test of our understanding. Our predictions are a test: is this really a water bottle? If it is, I shouldn't see a little finger sticking out the side, and if I saw a little finger sticking out, I'd be like, what the hell's going on? That's not normal.
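The role of surprise described above, prediction as a test that corrects the model, can be sketched as a tiny loop. This is a toy illustration rather than a neural mechanism: the model predicts what a stimulus should feel like, and only a mismatch (a "surprise") triggers an update. The stimulus and feeling labels are invented for the example.

```python
# Toy illustration of prediction as a test of the model (not a neural
# mechanism): the model predicts what a stimulus should feel like, and
# only a mismatch ("surprise") triggers an update.

def sense_and_update(model, stimulus, actual_feeling):
    """Compare prediction to input; on surprise, correct the model."""
    predicted = model.get(stimulus)
    if predicted == actual_feeling:
        return False                     # prediction confirmed, no learning
    model[stimulus] = actual_feeling     # surprise: update the model
    return True

model = {"water bottle": "cool"}
print(sense_and_update(model, "water bottle", "cool"))   # False: as expected
print(sense_and_update(model, "water bottle", "hot"))    # True: surprised
print(model["water bottle"])                             # model now says 'hot'
```

The expected input passes silently; only the hot bottle draws "attention" and rewrites the model, matching the description above.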
I mean, that's fascinating. Let me linger on this for a second. It really, honestly, feels like prediction is fundamental to everything, to the way our mind operates, to intelligence. So it's just a different way to see intelligence: everything starts as a prediction.

And prediction requires a model. You can't predict something unless you have a model of it.

Right. But the action is prediction. So the thing the model does is prediction.

But you can then extend it to things like, what would happen if I did this today, if I went and did this? What would be likely? Or you can extend prediction to, oh, I want to get a promotion at work; what action should I take? And you can say, if I did this, I could predict what might happen. If I spoke to someone, I'd predict what would happen. So it's not just low-level predictions.

Yeah, it's all predictions.

It's all predictions. It's like this black box; you can ask it basically any question, low level or high level. So we started off with that observation: it's this nonstop prediction, and I write about this in the book. And then we asked, how do neurons actually make predictions? Physically, what does the neuron, or the neural tissue, do when it makes a prediction? And then we asked, what are the mechanisms by which we build a model that allows you to make predictions? So we started with prediction as, in some sense, the fundamental research agenda, saying: if we understand how the brain makes predictions, we will understand how it builds these models and how it learns, and that's the core of intelligence. It was the key that got us in the door. We said, that is our research agenda: understand predictions.

So in this whole process, where does intelligence originate, would you say? If we look at things that are much less intelligent than humans, and you start to build up to a human through the process of evolution, where is this magic thing that has a prediction model, or a model that's able to predict, that starts to look a lot more like intelligence? Is there a place where it begins? Richard Dawkins wrote an introduction to your book, an excellent introduction; it puts a lot of things into context. And it's funny, just looking at the parallels between your book and Darwin's Origin of Species. Darwin wrote about the origin of species. So what is the origin of intelligence?
Yeah, well, we have a theory about it. The theory goes as follows. As soon as living things started to move, they're not just floating in the sea, they're not just a plant, you know, grounded someplace. As soon as they started to move, there was an advantage to moving intelligently, to moving in certain ways. And there are some very simple things you can do: bacteria or single-cell organisms can move toward a gradient of food or something like that. But an animal that might know where it is and know where it's been and how to get back to that place, or an animal that might say, oh, there was a source of food someplace, how do I get to it? Or there was a danger, how do I get away from it? There was a mate, how do I get to them? There was a big evolutionary advantage to that. So early on, there was a pressure to start understanding your environment: where am I? Where have I been? What happened in those different places? We still have this neural mechanism in our brains. In mammals, it's in the hippocampus and entorhinal cortex; these are older parts of the brain, and they're very well studied. We build a map of our environment. These neurons in these parts of the brain know where I am in this room, and where the door is, and things like that.

So do a lot of other mammals have this?

All mammals have this, right? And almost any animal that knows where it is and can get around must have some mapping system, some way of saying, I've learned a map of my environment. I have hummingbirds in my backyard, and they go to the same places all the time. They must know where they are. They're not just randomly flying around; they know particular flowers they come back to. So we all have this. And it turns out it's very tricky to get neurons to do this, to build a map of an environment. We now know, from famous studies that are still very active, about place cells and grid cells and these other types of cells in the older parts of the brain, and how they build these maps of the world. It's really clever. It's obviously been under a lot of evolutionary pressure over a long period of time to get good at this. So animals know where they are. What we think has happened, and there's a lot of evidence that suggests this, is that the mechanism we use to learn a map of a space was repackaged. The same types of neurons were repackaged into a more compact form, and that became the cortical column. It was, in some sense, genericized, if that's a word. It was turned from a very specific thing about learning maps of environments into learning maps of anything, learning a model of anything, not just your space, but coffee cups and so on. And it got repackaged into a more compact, more universal version, and then replicated. So the reason we're so flexible is that we have a very generic version of this mapping algorithm, and we have 150,000 copies of it.

Sounds a lot like the progress of deep learning.

How so?

To take neural networks that seem to work well for a specific task, compress them, and multiply them by a lot, and then you just stack them on top of each other. It's like the story of transformers in natural language processing.
link |
Deep learning networks, they end up, you're replicating an element, but you still need
link |
the entire network to do anything. Here, what's going on, each individual element is a complete
link |
learning system. This is why I can take a human brain, cut it in half, and it still works. It's
link |
pretty amazing. It's fundamentally distributed. It's fundamentally distributed, complete modeling
link |
systems. But that's our story we like to tell. I would guess it's likely largely right,
link |
but there's a lot of evidence supporting that story, this evolutionary story.
link |
The thing which brought me to this idea is that the human brain got big very quickly,
link |
so that led to the proposal a long time ago that, well, there's this common element just
link |
instead of creating new things, it just replicated something. We also are extremely flexible. We
link |
can learn things that we had no history about. So that tells it that the learning algorithm is
link |
very generic. It's very universal because it doesn't assume any prior knowledge about what it's
link |
learning. So you combine those things together and you say, okay, well, how did that come about?
link |
Where did that universal algorithm come from? It had to come from something that wasn't universal.
link |
It came from something that was more specific. Anyway, this led to our hypothesis that you
link |
would find grid cell and place cell equivalents in the neocortex. When we first published our
link |
first papers on this theory, we didn't know of evidence for that. It turns out there was some,
link |
but we didn't know about it. And since then, so then we became aware of evidence for grid cells
link |
in certain parts of the neocortex. And now there's new evidence coming out. There's
link |
some interesting papers that came out just January of this year. So one of our predictions was if
link |
this evolutionary hypothesis is correct, we would see grid cell and place cell equivalents, cells that
link |
work like them, in every column in the neocortex, and that's starting to be seen.
link |
And what does it mean? Why is it important that they're present?
link |
Because it tells us, well, we're asking about the evolutionary origin of intelligence, right?
link |
So our theory is that these columns in the cortex are working on the same principles,
link |
they're modeling systems. And it's hard to imagine how neurons do this. And so we said, hey,
link |
it's really hard to imagine how neurons could learn these models of things. We can talk about
link |
the details of that if you want. But there's this other part of the brain that we know
link |
learns models of environments. So could that mechanism to learn to model this room be
link |
used to learn to model the water bottle? Is it the same mechanism? So we said it's much more
link |
likely the brain's using the same mechanism, in which case it would have these equivalent cell types.
link |
So it's basically the whole theory is built on the idea that these columns have reference frames
link |
and they're learning these models and these grid cells create these reference frames. So it's
link |
basically the major, in some sense, the major predictive part of this theory is that we will
link |
find these equivalent mechanisms in each column in the neocortex, which tells us that that's
link |
what they're doing. They're learning these sensory models of the world. So
link |
we're pretty confident that would happen. But now we're seeing the evidence.
link |
So in the evolutionary process, nature does a lot of copy and paste and sees what happens.
link |
Yeah, there's no direction to it. But it just found out, like, hey, if I take these
link |
elements and make more of them, what happens? And let's hook them up to the eyes and
link |
let's hook them up to ears. And that seems to work pretty well for us. Again, just to take a quick
link |
step back to our conversation of collective intelligence. Do you sometimes see that as just
link |
another copy and paste: copying these brains in humans, making a lot of them,
link |
and then creating social structures that almost operate as a single brain?
link |
I wouldn't have said it, but you said it sounded pretty good.
link |
So to you, the brain is the fundamental thing?
link |
Yeah. I mean, our goal is to understand how the neocortex works. We can argue how essential that
link |
is to understanding the human brain, because it's not the entire human brain. You can argue how
link |
essential that is to understanding human intelligence. You can argue how essential this
link |
is to, you know, sort of communal intelligence. But our goal was to understand
link |
the neocortex. Yeah. So what is the neocortex, and where does it fit in the various aspects of
link |
what the brain does? Like, how important is it to you? Well, obviously, again, as I mentioned
link |
again, in the beginning, it's about 70 to 75% of the volume of a human brain. So, you know,
link |
it dominates our brain in terms of size, not in terms of number of neurons, but in terms of size.
link |
Size isn't everything, Jeff. I know. But it's not nothing.
link |
We know that all high level vision, hearing and touch happens in neocortex. We know that
link |
all language occurs and is understood in the neocortex, whether that's spoken language, written
link |
language, sign language, the language of mathematics, the language of physics, music,
link |
you know, we know that all high level planning and thinking occurs in the neocortex. If I were to
link |
say, you know, what part of your brain designed a computer and understands programming and
link |
creates music, it's all the neocortex. So then that's a kind of undeniable fact.
link |
But then there are other parts of our brain that are important too, right? Our emotional states,
link |
our body, regulating our body. So the way I like to look at it is, you know, can you
link |
understand the neocortex without the rest of the brain? And some people say you can't,
link |
and I think absolutely you can. It's not that they're not interacting, but you can understand it.
link |
Can you understand the neocortex without understanding the emotions of fear? Yes,
link |
you can. You can understand how the system works. It's just a modeling system. I make
link |
the analogy in the book that it's like a map of the world, and how that map is used
link |
depends on who's using it. So how our map of the world in our neocortex, how we
link |
manifest as humans, depends on the rest of our brain. What are our motivations? You know,
link |
what are my desires? Am I a nice guy or not a nice guy? Am I a cheater or am I, you know,
link |
not a cheater? You know, how important different things are in my life. But the neocortex
link |
can be understood on its own. And I say that as a neuroscientist who knows there are all these
link |
interactions; I don't want to say we don't know them or don't think about them. But from a layperson's
link |
point of view, you can say it's a modeling system. I don't generally think too much about the communal
link |
aspect of intelligence, which you brought up a number of times already. So that's not really
link |
been my concern. I just wonder if there's a continuum from the origin of the universe, like
link |
this pockets of complexities that form living organisms. I wonder if we're just, if you look
link |
at humans, we feel like we're at the top. And I wonder if every living
link |
pocket of complexity probably thinks it's, pardon the French,
link |
the shit. They're at the top of the pyramid. Well, if they're thinking.
link |
Well, and then, what is thinking? Well, in a sense, the whole point is that in their sense of the
link |
world, their sense is that they're at the top of it. Then, what is the turtle?
link |
But you're bringing up, you know, the problems of complexity, and complexity theory,
link |
you know, it's a huge, interesting problem in science. And, you know, I think we've made
link |
surprisingly little progress in understanding complex systems in general. And so, you know,
link |
the Santa Fe Institute was founded to study this, and even the scientists there will say it's
link |
really hard. We haven't really been able to figure out exactly, you know, that science
link |
hasn't really congealed yet. We're still trying to figure out the basic elements of that science.
link |
You know, where does complexity come from, what is it, and how do you define it,
link |
whether it's DNA, creating bodies or phenotypes or individuals creating societies or ants and,
link |
you know, markets and so on. It's a very complex thing. I'm not a complexity theory
link |
person, right? I think you need to ask, well, the brain itself is a complex system. So,
link |
can we understand that? I think we've made a lot of progress understanding how the brain works.
link |
But I haven't broadened it out to, like, oh, well, where are we on the complexity spectrum?
link |
You know, it's like, it's a great question. I prefer for that answer to be, we're not special.
link |
It seems like if we're honest, most likely we're not special. So, if there is a spectrum,
link |
we're probably not in some kind of significant place. I think there's one thing we could say
link |
that we are special. And again, only here on Earth, I'm not saying more broadly, is that if we
link |
think about knowledge, what we know, clearly human brains are the only brains to have certain
link |
types of knowledge. We're the only brains on this Earth to understand what the Earth is,
link |
how old it is, what the universe is as a picture as a whole. We're the only organisms to understand DNA and
link |
the origins of, you know, of species. No other species on this planet has that knowledge.
link |
So, if we think about, I like to think about, you know, one of the endeavors of humanity is to
link |
understand the universe as much as we can. I think our species is further along in that,
link |
undeniably, whether our theories are right or wrong, we can debate, but at least we have theories.
link |
You know, we know what the sun is and how fusion works and what black holes are, and,
link |
you know, we know the general theory of relativity, and no other animal has any of this knowledge.
link |
So, in that sense, we're special. Are we special in terms of the hierarchy of complexity in
link |
in the universe? Probably not.
link |
Can we look at a neuron? Yeah. You say that prediction happens in the neuron. What does
link |
that mean? So, the neuron traditionally is seen as the basic element of the brain.
link |
So, as I mentioned earlier, prediction was our research agenda.
link |
Yeah. We said, okay, how does the brain make a prediction? Like, I'm about to grab this water
link |
bottle and my brain is predicting what I'm going to feel on all parts of my fingers. If I
link |
felt something really odd on any part here, I'd notice it. So, my brain is predicting what it's
link |
going to feel as I grab this thing. So, what is that? How does that manifest itself in neural
link |
tissue, right? We've got brains made of neurons and there's chemicals and there's
link |
spikes and connections. You know, where is the prediction going on? And one argument could
link |
be that, well, when I'm predicting something, a neuron must be firing in advance. It's like, okay,
link |
this neuron represents what you're going to feel and it's firing. It's sending a spike. And certainly,
link |
that happens to some extent. But our predictions are so ubiquitous that we're making so many of them,
link |
which we're totally unaware of. For the vast majority of them, you have no idea that you're doing
link |
this. So we were trying to figure out, how could this be? Where are these
link |
predictions happening, right? And I won't walk you through the whole story unless you insist on
link |
it, but we came to the realization that most of your predictions are occurring inside individual
link |
neurons, especially the most common neurons, the pyramidal cells. And there's a
link |
property of neurons. I mean, everyone knows, or most people know that a neuron is a cell and it
link |
has this spike called an action potential and it sends information. But we now know that there are
link |
these spikes internal to the neuron. They're called dendritic spikes. They travel along the
link |
branches of the neuron and they don't leave the neuron. They're internal only. There are far
link |
more dendritic spikes than there are action potentials, far more. They're happening all the
link |
time. And what we came to understand is that those dendritic spikes, the ones that are occurring,
link |
are actually a form of prediction. They're telling the neuron, the neuron is saying, I expect that
link |
I might become active shortly. So the internal spike is a way of saying,
link |
you might be generating external spikes soon; I predict you're going to become
link |
active. And we wrote a paper in 2016, which explained how this manifests itself
link |
in neural tissue and how it all works together. And we think
link |
there's a lot of evidence supporting it. So that's where we think most of these predictions
link |
are happening, internal to the neuron. And because they're internal to the neuron, you can't perceive them.
link |
From understanding the prediction mechanism of a single neuron, do you think there are deep
link |
insights to be gained about the prediction capabilities of the mini brains within the
link |
bigger brain and the brain? Oh yeah. So having a prediction
link |
inside an individual neuron on its own is not that useful. You know, so what?
link |
The way it manifests itself in neural tissue is that when a neuron emits these spikes,
link |
it's a very singular type of event. If a neuron is predicting that it's going to be active,
link |
it emits its spike a little bit sooner, just a few milliseconds sooner than it would
link |
have otherwise. I give the analogy in the book, it's like a sprinter on a starting
link |
block in a race. If someone says, ready, set, you get up and you're ready to go. And then
link |
when the race starts, you get a little bit earlier start. So that ready, set is like
link |
the prediction, and the neuron is, like, ready to go quicker. And what happens is when you have a whole
link |
bunch of neurons together and they're all getting these inputs, the ones that are in the predictive
link |
state, the ones that are anticipating becoming active, if they do become active, they
link |
fire sooner, they disable everything else, and it leads to different representations in the brain.
link |
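The mechanism Hawkins describes here, neurons primed by dendritic spikes firing a few milliseconds sooner and inhibiting their unprimed neighbors, can be pictured as a toy winner-take-all step. All names and values below are illustrative assumptions, not Numenta's actual code:

```python
# Toy sketch of the "predictive state" idea: neurons primed by dendritic
# spikes fire slightly sooner and inhibit the rest of the minicolumn.
# Neuron ids and the burst behavior are illustrative assumptions.

def column_response(active_input, predicted):
    """Return the set of neurons that end up firing in a minicolumn.

    active_input: neuron ids receiving feedforward input
    predicted: neuron ids in the predictive state (had a dendritic spike)
    """
    primed = active_input & predicted
    if primed:
        # Primed neurons fire first and inhibit the unprimed ones.
        return primed
    # No neuron predicted this input: every neuron with input fires,
    # signaling an unanticipated event.
    return set(active_input)

# Same input, different predictive context -> different representation.
inputs = {1, 2, 3, 4}
print(column_response(inputs, predicted={2}))      # {2}
print(column_response(inputs, predicted={3, 4}))   # {3, 4}
print(column_response(inputs, predicted=set()))    # {1, 2, 3, 4}
```

This is why the same input produces different representations under different predictions: which neurons win depends on which were primed.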
So it's not isolated just to the neuron; the prediction occurs within the neuron,
link |
but the network behavior changes. So under different predictions, inputs
link |
have different representations. What I predict is going to be different under different
link |
contexts; you know, what my input will be is different under different contexts. This is
link |
a key to the whole theory, how this works. So the Thousand Brains Theory, if you were to
link |
count the number of brains, how would you do it? The Thousand Brains Theory says that basically every
link |
cortical column in your neocortex is a complete modeling system. And that when I ask
link |
where do I have a model of something like a coffee cup, it's not in one of those models,
link |
it's in thousands of those models. There are thousands of models of coffee cups. That's the
link |
thousand brains. Then there's a voting mechanism, which
link |
leads to your singular perception, and that's the thing you're conscious of.
link |
That's why you perceive something as one thing. So that's the Thousand Brains Theory.
link |
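One minimal way to picture that voting step is as columns pooling their independent guesses about what object they're sensing. This is a hypothetical sketch; in the theory the consensus emerges through lateral connections between columns, not a literal tally:

```python
from collections import Counter

# Toy sketch of thousands of column models voting on a single percept.
# Each column has its own noisy, partial guess; the consensus is what
# you perceive. Counts and labels are illustrative assumptions.

def vote(column_guesses):
    """Return the consensus object given each column's current best guess."""
    tally = Counter(column_guesses)
    winner, _ = tally.most_common(1)[0]
    return winner

guesses = ["coffee cup"] * 700 + ["water bottle"] * 200 + ["unknown"] * 100
print(vote(guesses))  # coffee cup
```

The key property this illustrates: no single column needs to be right, yet the population settles on one singular perception.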
The details of how we got to that theory are complicated. It's not like we just thought of it
link |
one day. One of those details was asking, how does a model make predictions? And we
link |
talked about these predictive neurons. That's part of this theory. You could say, oh,
link |
it's a detail, but it was like a crack in the door. It's like, how are we going to figure out how
link |
these neurons do this? You know, what is going on here? So we just looked at prediction
link |
as like, well, we know that's ubiquitous. We know that every part of the cortex is making
link |
predictions. Therefore, whatever the predictive system is, it's going to be everywhere. We know
link |
there's a gazillion predictions happening at once. So we can start teasing apart,
link |
you know, ask questions about, you know, how can neurons be making these predictions? And that
link |
sort of built up to what we now have, this Thousand Brains Theory, which is complex. You know,
link |
I can state it simply, but we didn't just think of it. We had to get there step by step;
link |
it took years to get there. And where do reference frames fit in?
link |
Okay. So again, a reference frame. I mentioned earlier the, you know, model of a house.
link |
And I said, if you're going to build a model of a house in a computer, it has a reference
link |
frame. And you can think of a reference frame like Cartesian coordinates, with X, Y, and Z axes.
link |
So I can say, oh, I'm going to design a house. I can say, well, the front door is at this location,
link |
XYZ and the roof is at this location, XYZ and so on. That's the type of reference frame.
link |
So it turns out you need one to make a prediction. I walk you through the thought experiment
link |
in the book where I was predicting what my finger was going to feel when I touched a coffee cup;
link |
it was a ceramic coffee cup, but this one will do. And what I realized is that to make a prediction
link |
of what my finger is going to feel, it's going to feel different
link |
if I touch the opening or the bottom. To make that prediction, the cortex needs to know
link |
where the finger is, the tip of the finger, relative to the coffee cup, and exactly relative to the
link |
coffee cup. And to do that, it has to have a reference frame for the coffee cup. There has
link |
to be a way of representing the location of my finger relative to the coffee cup. And then we realized,
link |
of course, every part of your skin has to have a reference frame relative to the things it touches.
link |
And then we did the same thing with vision. So the idea is that a reference frame is necessary
link |
to make a prediction when you're touching something or when you're seeing something
link |
and you're moving your eyes or you're moving your fingers, it's just a requirement to know what to
link |
predict. If I have a structure I'm going to make a prediction about, I have to know where it is
link |
that I'm looking or touching. So then we say, well, how do neurons make reference frames?
link |
It's not obvious. You know, XYZ coordinates don't exist in the brain. It's just not the way it works.
link |
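The reference-frame idea itself can be sketched as a lookup from object-centered location to expected sensation. The XYZ coordinates here are purely illustrative, since, as just noted, the brain does not actually use Cartesian coordinates:

```python
# Toy reference frame for a coffee cup: features stored at (x, y, z)
# locations in the cup's own coordinates. A prediction is a lookup of
# what the fingertip should feel at its location relative to the cup.
# Locations and feature labels are made-up illustrations.

cup_model = {
    (0.0, 0.0, 10.0): "smooth rim",
    (5.0, 0.0, 5.0): "curved handle",
    (0.0, 0.0, 0.0): "flat bottom",
}

def predict_feeling(model, finger_location):
    """Predict the sensation at a location in the object's reference frame."""
    return model.get(finger_location, "unexpected input!")

print(predict_feeling(cup_model, (0.0, 0.0, 10.0)))  # smooth rim
print(predict_feeling(cup_model, (2.0, 2.0, 2.0)))   # unexpected input!
```

The point of the sketch is the dependency: without knowing the finger's location in the cup's reference frame, there is nothing to look up, and so nothing to predict.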
So that's when we looked at the older part of the brain, the hippocampus and the entorhinal cortex,
link |
where we knew that in that part of the brain, there's a reference frame for a room or a reference
link |
frame for environment. Remember, I talked earlier about how you could make a map of this room.
link |
So we said, oh, they are implementing reference frames there. So we knew that reference
link |
frames needed to exist in every cortical column. And so that was a deductive thing. We
link |
just deduced it. So you take the old mammalian ability to know where you are in a particular
link |
space and you start applying that to higher and higher levels. Yeah. First you apply it to
link |
like where your finger is. So here's how I think about it. The old part of the brain says,
link |
where's my body in this room? Yeah. The new part of the brain says, where's my finger relative to
link |
this object? Yeah. Where is a section of my retina relative to this object? I'm looking at one
link |
little corner. Where is that relative to this patch of my retina? Yeah. And then we take the same
link |
thing and apply it to concepts: mathematics, physics, you know, humanity, whatever you want to think
link |
of. And eventually you're pondering your own mortality. Well, whatever. But the point is,
link |
when we think about the world, when we have knowledge about the world, how is that knowledge
link |
organized, Lex? Where is it in your head? The answer is, it's in reference frames. So the way I
link |
learned the structure of this water bottle, where the features are relative to each other,
link |
when I think about history or democracy or mathematics, the same basic underlying
link |
structures are at work. There are reference frames that the knowledge is assigned
link |
to. So in the book, I go through examples like mathematics and language and politics.
link |
But the evidence is very clear in the neuroscience. The same mechanism that we use to model this
link |
coffee cup, we're going to use to model high-level thoughts, or the demise of humanity,
link |
whatever you want to think about. It's interesting to think about how different the representations
link |
of those higher dimensional concepts, higher level concepts,
link |
are in terms of reference frames, versus spatial ones. But the interesting thing is, it's a different
link |
application, but it's the exact same mechanism. But isn't there some aspect of higher level
link |
concepts where they seem to be hierarchical? They just seem to integrate a lot of information
link |
into them. So are physical objects. Take this water bottle. I'm not partial to this
link |
brand, but this is a Fiji water bottle, and it has a logo on it. I use this example in my book;
link |
in the book, it's our company's coffee cup, which has a logo on it. But this object is hierarchical.
link |
It's got like a cylinder and a cap, but then it has this logo on it, and the logo has a word.
link |
The word has letters, the letters have different features. And so I don't have to remember,
link |
I don't have to think about this. So I say, oh, there's a Fiji logo on this water bottle. I don't
link |
have to go through and say, oh, what is the Fiji logo? It's the F and the I and the J and the I, and there's
link |
a hibiscus flower. And oh, it has a, you know, the stamen on it. I don't have to do that. I just
link |
incorporate all of that in some sort of hierarchical representation. I say, you know, put this logo on
link |
this water bottle. And then the logo has a word and the word has letters, all hierarchical.
link |
All that stuff is in there. It's amazing that the brain instantly just does all that. The idea
link |
that there's, there's water, it's liquid, and the idea that you can drink it when you're thirsty,
link |
the idea that there's brands. And then there's like, all of that information is instantly
link |
like built into the whole thing once you perceive it. So I wanted to get back to your point about
link |
hierarchical representation. The world itself is hierarchical, right? And I can take this
link |
microphone in front of me. I know inside there's going to be some electronics. I know there's
link |
going to be some wires, and I know there's going to be a little diaphragm that moves back and forth.
link |
I don't see that, but I know it. So everything in the world is hierarchical. Just go into a room.
link |
It's composed of other components. The kitchen has a refrigerator, you know, the refrigerator has a
link |
door, the door has a hinge, the hinge has screws and a pin. So anyway, the modeling system that
link |
exists in every cortical column learns the hierarchical structure of objects. So it's a
link |
very sophisticated modeling system in this grain of rice. It's hard to imagine, but this
link |
grain of rice can do really sophisticated things. It's got a hundred thousand neurons in it.
link |
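The hierarchy described here, a bottle containing a logo, the logo containing a word, the word containing letters, can be pictured as models that reference other models at named slots rather than one flat description. The names and structure below are illustrative assumptions:

```python
# Toy sketch of hierarchical modeling: each model refers to sub-models
# by name, so the bottle model doesn't re-describe the logo's letters
# itself. All names and structure are made-up illustrations.

models = {
    "water bottle": {"body": "cylinder", "top": "cap", "front": "fiji logo"},
    "fiji logo": {"text": "fiji word", "background": "hibiscus flower"},
    "fiji word": {"letters": ["F", "I", "J", "I"]},
}

def expand(name):
    """Recursively unfold an object's structure by following sub-model links."""
    parts = models.get(name)
    if parts is None:
        return name  # a leaf feature with no sub-model of its own
    return {slot: expand(part) if isinstance(part, str) else part
            for slot, part in parts.items()}

print(expand("water bottle"))
```

Because each model only stores references, saying "put this logo on this bottle" attaches the whole sub-hierarchy in one step, which is the economy Hawkins is pointing at.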
It's very sophisticated. So that same mechanism that can model a water bottle or a coffee cup
link |
can model conceptual objects as well. That's the beauty of this discovery that this guy,
link |
Vernon Mountcastle, made many, many years ago, which is that there's a single cortical algorithm
link |
underlying everything we're doing. So common sense concepts and higher level concepts are
link |
all represented in the same way. They're built on the same mechanisms. It's a little bit like
link |
computers, right? All computers are universal Turing machines. Even the little teeny one
link |
that's in my toaster and the big one that's running some cloud servers someplace.
link |
They're all running on the same principle. They can be applied to different things. So the brain is all
link |
built on the same principle. It's all about learning these models, structured models using
link |
movement and reference frames. And it can be applied to something as simple as a water bottle
link |
and a coffee cup. And it can be like just thinking like, what's the future of humanity? And, you
link |
know, why do you have a hedgehog on your desk? I don't know. Nobody knows. Well, I think it's
link |
a hedgehog. That's right. It's a hedgehog in the fog. It's a Russian reference. Does it give you any
link |
inclination or hope about how difficult that is to engineer common sense reasoning?
link |
So how complicated is this whole process? So looking at the brain, is this a marvel of
link |
engineering? Or is it pretty dumb stuff stacked on top of each other through a pretty extensive
link |
copying process? Can it be both? Can it be both, right? I don't know if it can be both, because if it's
link |
an incredible engineering job, that means evolution did a lot of work.
link |
Yeah, but then it just copied that, right? So as I said earlier, figuring out how to model
link |
something like a space is really hard, and evolution had to go through a lot of tricks,
link |
and these cells I was talking about, these grid cells and place cells, they're really complicated.
link |
This is not simple stuff. This neural tissue works on these really unexpected weird mechanisms.
link |
But it did it. It figured it out. But now you could just make lots of copies of it.
link |
But then finding, yeah, so it's a very interesting idea that it's a lot of copies
link |
of a basic mini brain. But the question is, how difficult is it to find that mini brain that
link |
you can copy and paste effectively? Well, today, we know enough to build this. I'm sitting here,
link |
you know, I know the steps we'd have to go through. There are still some engineering problems to solve,
link |
but we know enough. And it's not like, oh, this is an interesting idea, we have to go think about
link |
it for another few decades. No, we actually understand it in pretty good detail. Not all the details,
link |
but most of them. So it's complicated, but it is an engineering problem. So in my company,
link |
we are working on that. We basically have the roadmap for how we do this. It's not going to take
link |
decades. It's more like a few years, optimistically, but I think that's possible. It's, you know,
link |
complex things. If you understand them, you can build them. So in which domain do you think it's
link |
best to build them? Are we talking about robotics, like entities that operate in the physical world
link |
that are able to interact with that world? Are we talking about entities that operate in the
link |
digital world? Are we talking about something more like, more specific, like is done in the
link |
machine learning community, where you look at natural language or computer vision?
link |
Where do you think is easiest? The first two, more than the third one,
link |
I would say. Again, let's just use computers as an analogy. The pioneers of computing, people
link |
like John von Neumann and Turing, they created this thing, you know, we now call the universal
link |
Turing machine, which is a computer, right? Did they know how it was going to be applied,
link |
where it was going to be used, you know, could they envision any of the future? No,
link |
they just said, this is like a really interesting computational idea about algorithms and how you
link |
can implement them in a machine. And we're doing something similar to that today, like
link |
we are building this sort of universal learning principle that can be applied to many, many
link |
different things. But the robotics piece of that, the interactive.
link |
Okay, all right. Let us be specific. You can think of this cortical column as what we call a
link |
sensory motor learning system. It has the idea that there's a sensor, and then it's moving.
link |
That sensor can be physical. It could be like my finger, and it's moving in the world. It could
link |
be like my eye, and it's physically moving. It can also be virtual. So it could be, an example
link |
would be, I could have a system that lives on the internet that actually samples information
link |
on the internet and moves by following links. That's a sensory motor system. So
link |
it's just something that echoes the process of a finger moving along.
link |
In a very, very loose sense, yes. Again, learning is inherently about
link |
discovering the structure in the world, and to discover the structure in the world,
link |
you have to move through the world, even if it's a virtual world, even if it's a conceptual world,
link |
you have to move through it. It doesn't all exist in one place; it has some structure to it.
link |
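The virtual sensor Hawkins describes can be pictured as a loop over a link graph: the "sensation" is a page's content, and the "movement" is following a link, like a finger moving over an object. The pages and links below are made-up illustrations, not a real crawler:

```python
# Toy virtual sensory-motor system: "sensing" a page's content and
# "moving" by following a link. The pages, links, and the first-link
# movement policy are all illustrative assumptions.

pages = {
    "home": {"content": "welcome", "links": ["about", "blog"]},
    "about": {"content": "who we are", "links": ["home"]},
    "blog": {"content": "latest post", "links": ["home", "about"]},
}

def explore(start, steps):
    """Sense each page, then move by following its first link."""
    sensed, current = [], start
    for _ in range(steps):
        page = pages[current]
        sensed.append(page["content"])  # the sensation at this location
        current = page["links"][0]      # the movement to a new location
    return sensed

print(explore("home", 3))  # ['welcome', 'who we are', 'welcome']
```

The sequence of sensations depends on the sequence of movements, which is exactly the sensorimotor coupling the passage is describing, just over a virtual structure instead of a physical one.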
So here's a couple of predictions that get at what you're talking about.
link |
So in humans, the same algorithm does robotics, right? It moves my arms, my eyes, my body, right?
link |
And so in the future, to me, robotics and AI will merge. They're not going to be
link |
separate fields, because the algorithms for really controlling robots
link |
are going to be the same algorithms we have in our brain, these
link |
sensory motor algorithms. Today we're not there, but I think that's going to happen.
link |
But not all AI systems will have to be robots. You can have systems that
link |
have very different types of embodiments. Some will have physical movements, some will
link |
not have physical movements. It's a very generic learning system. Again, it's like computers;
link |
the Turing machine doesn't say how it's supposed to be implemented. It doesn't
link |
tell you how big it is. It doesn't tell you what you can apply it to, but it's an interesting,
link |
it's a computational principle. The cortical column equivalent is a computational principle about
link |
learning. It's about how you learn, and it can be applied to a gazillion things. I think
link |
the impact of AI is going to be as large, if not larger, than computing has been
link |
in the last century, by far, because it's getting at a fundamental thing.
link |
It's not a vision system or a hearing system. It is a learning
link |
system. It's a fundamental principle: how you learn the structure in the world, how you gain
link |
knowledge, and are intelligent. And that's what the Thousand Brains Theory says is going on. And we have a
link |
particular implementation in our head, but it doesn't have to be like that at all. Do you think
link |
there's going to be some kind of impact? Okay, let me ask it another way. What will increasingly
link |
intelligent AI systems do with us humans, in the following sense? Like, how hard is the human-in-the-
link |
loop problem? How hard is it to interact, the finger-on-the-coffee-cup equivalent of having a
link |
conversation with a human being? So how hard is it to fit into our little human world?
link |
I think it's a lot of engineering problems. I don't think it's a fundamental problem.
link |
I could ask you the same question. How hard is it for computers to fit into a human world?
link |
Right. I mean, that's essentially what I'm asking. Like, how
link |
elitist are we as humans? Like, will we try to keep such systems out?
link |
I don't know. I'm not sure that's the right question. Let's look at
link |
computers as an analogy. Computers are a million times faster than us. They do things we can't
link |
understand. Most people have no idea what's going on when they use computers, right? How
link |
we integrate them in our society? Well, we don't think of them as their own entity. They're not
link |
living things. We don't afford them rights. We rely on them. Our survival as seven billion
link |
people or something like that is relying on computers now.
link |
Don't you think it's a fundamental problem that we see them as something
link |
we don't give rights to? Computers?
link |
So yeah, computers. So robots, computers, intelligent systems, it feels like for them to
link |
operate successfully, they would need to have a lot of the elements that we would start having
link |
to think about, like, should this entity have rights?
link |
I don't think so. I think it's tempting to think that way. First of all,
link |
hardly anyone thinks that about computers today. No one says, oh, this thing needs rights, I
link |
shouldn't be able to turn it off, or, you know, if I throw it in the trash can and hit
link |
it with a sledgehammer, I've committed a criminal act. No one thinks that. And now we think
link |
about intelligent machines, which is where you're going, and all of a sudden we're like, well,
link |
now we can't do that. I think the basic problem we have here is that people think intelligent
link |
machines will be like us. They're going to have the same emotions as we do, the same feelings as
link |
we do. What if I can build an intelligent machine that couldn't care less about whether
link |
it was on or off or destroyed or not? It just doesn't care. It's just like a map. It's just
link |
a modeling system. It has no desire to live, nothing. Is it possible to create a system that
link |
can model the world deeply and not care about whether it lives or dies? Absolutely. No question
link |
about it. To me, that's not 100% obvious. It's obvious to me. So we can debate if you
link |
want. Where does your desire to live come from? It's an old evolutionary design.
link |
I mean, we can argue, does it really matter if we live or not? Objectively no, right? We're all
link |
going to die eventually. But evolution makes us want to live. Evolution makes us want to fight
link |
to live. Evolution makes us want to care for and love one another, and to care for our children
link |
and our relatives and our family and so on. And those are all good things. But they come about
link |
not because we're smart, but because we're animals that evolved. You know, the hummingbird in my
link |
backyard cares about its offspring. Every living thing in some sense cares about
link |
surviving. But when we talk about creating intelligent machines, we're not creating
link |
life. We're not creating evolving creatures. We're not creating living things. We're just
link |
creating a machine that can learn really sophisticated stuff. And that machine may even be able to
link |
talk to us, but it's not going to have a desire to live unless somehow we put that
link |
into the system. Well, there's learning, right? The thing is, you don't learn to want to
link |
live. It's built into you. Well, people like Ernest Becker argue... so, okay, there's the fact
link |
of the finiteness of life, and the way we think about it is something we learned, perhaps. So, okay.
link |
Yeah. And some people decide they don't want to live. And some people decide, you know,
link |
you can, but the desire to live is built into DNA, right? But I think what I'm trying to get to is,
link |
in order to accomplish goals, it's useful to have the urgency of mortality. So what the Stoics
link |
talked about is meditating on your mortality. It might be a very useful thing to do, to meditate
link |
on death and have the urgency of death. And to conceive of yourself as an entity that operates
link |
in this world and that eventually will no longer be a part of this world. Actually conceiving of
link |
yourself as a conscious entity might be very useful for being a system that makes sense of the
link |
world. Otherwise, you might get lazy. Well, okay. We're going to build these machines, right?
link |
So, are we talking about building AI? But we're building the equivalent of the
link |
cortical columns. The neocortex. The neocortex. And the question is, where do they arrive?
link |
Because we're not hard coding everything in. Well, in terms of, if you build the neocortex
link |
equivalent, it will not have any of these desires or emotional states. Now, you could argue that
link |
that neocortex won't be useful unless I give it some agency, unless I give it some desire,
link |
unless I give it some motivation. Otherwise, it'll just be lazy and do nothing, right? You
link |
could argue that. But on its own, it's not going to do those things. It's just not
link |
going to sit there and say, I understand the world, therefore I care to live. No, it's not going to
link |
do that. It's just going to say, I understand the world. Why is that obvious to you? Do you think
link |
it's possible? Okay, let me ask it this way. Do you think it's possible it will at least assign to
link |
itself agency and perceive itself in this world as being a conscious entity as a useful way to
link |
operate in the world and to make sense of the world? I think intelligent machine can be conscious,
link |
but that does not, again, imply any of these desires and goals that you're worried about.
link |
We can talk about what it means for a machine to be conscious.
link |
By the way, not worry about, but get excited about. It's not necessarily that we should worry
link |
about it. I think there's a legitimate problem, or not a problem, a question to ask: if you build
link |
this modeling system, what's it going to model? What's its desire? What's its goal? What are we
link |
applying it to? That's an interesting question, and it depends on the application.
link |
It's not something that's inherent to the modeling system. It's something we apply to the modeling
link |
system in a particular way. If I wanted to make a really smart car, it would have to know about
link |
driving in cars and what's important in driving in cars. It's not going to figure that out on its
link |
own. It's not going to sit there and say, you know, I've understood the world and I've decided...
link |
no, no, we're going to have to tell it. We're going to have to say, like,
link |
so I imagine I make this car really smart. It learns about your driving habits. It learns
link |
about the world. It's just, you know, is it one day going to wake up and say, you know what,
link |
I'm tired of driving and doing what you want. I think I have better ideas about how to spend my
link |
time. Okay. No, it's not going to do that. Well, part of me is playing a little bit of devil's
link |
advocate, but part of me is also trying to think through this, because I've studied cars quite a
link |
bit and I've studied pedestrians and cyclists quite a bit. And there's a part of me that thinks
link |
that there needs to be more intelligence than we realize in order to drive successfully.
link |
The game theory of human interaction seems to require some deep understanding of human nature.
link |
Okay. When a pedestrian crosses the street, there's some sense. They look at a car usually
link |
and then they look away. There's some sense in which they say, I believe that you're not going
link |
to murder me. You don't have the guts to murder me. This is the little dance of pedestrian car
link |
interaction is saying, I'm going to look away and I'm going to put my life in your hands
link |
because I think you're human. You're not going to kill me. And then the car in order to successfully
link |
operate in like Manhattan streets has to say, no, no, no, no, I am going to kill you like a
link |
little bit. There's a little bit of this weird inkling of mutual murder and that's a dance
link |
and then somehow successfully operate through that. Do you think you were born with that,
link |
or did you learn that social interaction? I think it might have a lot of the same elements that
link |
you're talking about, which is we're leveraging things we were born with and applying them in
link |
context. I would have said that kind of interaction is learned, because people in
link |
different cultures have different interactions like that. If you cross the street in different
link |
cities and different parts of the world, they have different ways of interacting. I would say
link |
that's learned, and I would say an intelligent system can learn that too, but that does not
link |
lead to those desires. And the intelligent system can understand humans. It could understand that, just like I can
link |
study an animal and learn something about that animal. I could study apes and learn something
link |
about their culture and so on. I don't have to be an ape to know that. I may not understand completely,
link |
but I can understand something. So an intelligent machine can model that. That's just part of
link |
the world. It's just part of the interactions. The question we're trying to get at is, will the
link |
intelligent machine have its own personal agency that's beyond what we assigned to it, or its own
link |
personal goals, or will it evolve and create these things? My confidence comes from understanding
link |
the mechanisms I'm talking about creating. This is not hand wavy stuff. It's down in the details.
link |
I'm going to build it, and I know what it's going to look like and how it's going to behave.
link |
I know the kind of things it can do and the kind of things it can't do. Just like when I
link |
build a computer, I know it's not going to decide on its own to put another register inside
link |
of it. It can't do that. There's no way. No matter what your software does, it can't add a register
link |
to the computer. So in this way, when we build AI systems, we have to make choices about how we
link |
embed them. So I talk about this in the book. I said, an intelligent system is not just a neocortex
link |
equivalent. You have to have that, but it has to have some kind of embodiment, physical or virtual.
link |
It has to have some sort of goals. It has to have some sort of ideas about dangers, about
link |
things it shouldn't do, like the safeguards we build into systems. We have them in our bodies. We
link |
have put them in our cars. My car follows my directions until the moment it sees I'm about to
link |
hit something, and then it ignores my directions and puts the brakes on. So we can build those things in.
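The car safeguard Jeff describes can be sketched as a tiny control loop in which driver commands pass through until a safety check overrides them. This is a minimal, hypothetical illustration: every name, parameter, and the 2-second threshold are invented for the sketch, not taken from any real vehicle's code.

```python
# Hypothetical sketch of a built-in safeguard: the car obeys the driver's
# commands each tick, except when a simple time-to-collision check decides
# the situation is dangerous and overrides them with full braking.

def control_step(driver_steering, driver_throttle, distance_to_obstacle_m, speed_mps):
    """Return the (steering, throttle, brake) actually applied this tick."""
    # If we'd reach the obstacle in under 2 seconds at current speed,
    # the safeguard ignores the driver's throttle and brakes hard.
    if speed_mps > 0 and distance_to_obstacle_m / speed_mps < 2.0:
        return (driver_steering, 0.0, 1.0)   # throttle cut, full brake
    return (driver_steering, driver_throttle, 0.0)  # obey the driver

# Normal driving: commands pass straight through.
assert control_step(0.1, 0.5, distance_to_obstacle_m=100.0, speed_mps=10.0) == (0.1, 0.5, 0.0)
# Obstacle 10 m ahead at 10 m/s (1 s to impact): the safeguard takes over.
assert control_step(0.1, 0.5, distance_to_obstacle_m=10.0, speed_mps=10.0) == (0.1, 0.0, 1.0)
```

The point of the sketch is that the override is something the designers put in, not something the system decided on its own.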
link |
So that's a very interesting problem, how to build those in. I think where my opinion differs
link |
from most people's about the risks of AI is that people assume that somehow those things will
link |
just appear automatically and it will evolve, that intelligence itself begets that stuff or
link |
requires it. But it doesn't. Intelligence, the neocortex equivalent, doesn't require this. The
link |
neocortex equivalent just says, I'm a learning system. Tell me what you want me to learn,
link |
ask me questions, and I'll tell you the answers. But again, it's like a map.
link |
A map has no intent about things, but you can use it to solve problems.
link |
Okay. So building, engineering the neocortex in itself is just creating an intelligent
link |
prediction system. Modeling system. Sorry, modeling system. You can then use it to make
link |
predictions. But you can also put it inside a thing that's actually acting in this world.
link |
You have to put it inside something. Again, think of the map analogy. A map on its own
link |
doesn't do anything. It's just inert. It can learn, but it's inert. So we have to embed
link |
it somehow in something to do something. So what's your intuition here? You had a conversation
link |
with Sam Harris recently where you had a bit of a disagreement, and you're
link |
sticking on this point. Elon Musk and Stuart Russell kind of worry about existential
link |
threats of AI. What's your intuition? Why, if we engineer an increasingly intelligent
link |
neocortex-type system in the computer, why shouldn't that be a thing that we...
link |
It was interesting that you used the word intuition, and Sam Harris used the word intuition too.
link |
And when he used that word, I immediately stopped and said,
link |
that's the crux of the problem. He's using intuition. I'm not speaking about my intuition.
link |
I'm speaking about something I understand, something I'm going to build, something I am
link |
building, something I understand completely, or at least well enough to know that
link |
I'm not guessing. I know what this thing's going to do. And I think most people who are worried,
link |
they have trouble separating things out. They don't have the knowledge or the understanding about,
link |
like, what is intelligence? How does it manifest in the brain? How is it separate from these other
link |
functions in the brain? And so they imagine it's going to be human like or animal like.
link |
It's going to have the same sort of drives and emotions we have, but there's no reason for that.
link |
That's just because there's an unknown. The unknown is like, oh my God,
link |
I don't know what this is going to do, we have to be careful, it could be like us
link |
but really smarter. I'm saying, no, it won't be like us. It'll be really smart,
link |
but it won't be like us at all. And I'm coming from that not because I'm just
link |
guessing, not using intuition. I'm basically like, okay, I understand how this thing works.
link |
This is what it does. Does that make sense to you? Okay. But to push back, so I also disagree with
link |
the intuitions that Sam has, but I disagree with what you just said too. You know,
link |
what's a good analogy... So if you look at the Twitter algorithm in the early days,
link |
just recommender systems, you can understand how recommender systems work. What you can't
link |
understand in the early days is when you apply that recommender system at scale to thousands
link |
and millions of people, how that can change societies. So the question is, yes, you're just
link |
saying this is how an engineered neocortex works, but when you have a very useful
link |
TikTok type of service that goes viral, when your neocortex equivalent goes viral and millions of
link |
people start using it, can that destroy the world? No. Well, first of all, to back up,
link |
one thing I want to say is that AI is a dangerous technology. I'm not denying that.
link |
All technologies are dangerous. Well, and AI, maybe particularly so. Okay. So
link |
am I worried about it? Yeah, I'm totally worried about it. But the narrow component
link |
we're talking about now is the existential risk of AI. So I want to make that distinction, because
link |
I think AI can be applied poorly. It can be applied in ways that people aren't going to understand
link |
the consequences of. These are all potentially very bad things, but they're not the AI system
link |
creating this existential risk on its own. And that's the only place that I disagree with other
link |
people. Right. So I think, on the existential risk thing, humans are really damn good at surviving.
link |
So to kill off the human race would be very, very difficult. Yes, but I'll go further. I don't think
link |
AI systems are ever going to try to. I don't think AI systems are ever going to like say,
link |
I'm going to ignore you. I'm going to do what I think is best. I don't think that's going to
link |
happen, at least not in the way I'm talking about it. So the Twitter recommendation algorithm
link |
is an interesting example. Let's use computers as an analogy again. I build a computer. It's a
link |
universal computing machine. I can't predict what people are going to use it for. They can build
link |
all kinds of things. They can even create computer viruses. It's all kinds of stuff.
link |
So there's some unknown about its utility and about where it's going to go. But on the other
link |
hand, I pointed out that once I build a computer, it's not going to fundamentally change how it
link |
computes. I use the example of a register, which is an internal part of a
link |
computer. It can't just add one itself, because computers don't evolve. They don't replicate.
link |
They don't evolve. The physical manifestation of the computer itself
link |
is not going to change; there are certain things it can't do. Right. So we can break things into
link |
things that are possible to happen but that we can't predict, and things that are just impossible to
link |
happen. Unless we go out of our way to make them happen, they're not going to happen;
link |
somebody has to make them happen. Yeah. So there's a bunch of things to say. One is the physical
link |
aspect, where you're absolutely right. We have to build a thing for it to operate in the physical
link |
world, and you can just stop building them, you know, the moment they're not doing the thing you
link |
want them to do, or just change the design. The question is, I mean,
link |
it's possible in the physical world, this is probably longer term, that you automate the
link |
building. It makes a lot of sense to automate the building. There are a lot of factories
link |
that are doing more and more automation to go from raw resources to the final product.
link |
It's possible to imagine, and it's obviously much more efficient, to create a factory
link |
that's creating robots that do something, you know, something extremely useful for society.
link |
It could be personal assistants. It could be your toaster, but a toaster
link |
that has a much deeper knowledge of your culinary preferences. And that could... Well,
link |
I think now you've hit on the right thing. The real thing we need to be worried about next is
link |
self replication. Right. That is the thing to worry about, in the physical world or even the virtual
link |
world. Self replication, because self replication is dangerous. We're probably more likely to be
link |
killed by a virus, you know, a human engineered virus. This
link |
technology is getting to the point where almost anybody, well, not anybody, but a lot of people, could create
link |
a human engineered virus that could wipe out humanity. That is really dangerous. No intelligence
link |
required. Just self replication. So we need to be careful about that. So when I think about,
link |
you know, AI, I'm not thinking about robots building robots. Don't do that. Don't build one,
link |
you know, just don't. Well, that's because you're interested in creating intelligence. It seems
link |
like self replication is a good way to make a lot of money. Well, fine. But so is, you know,
link |
maybe editing viruses is a good way too. I don't know. The point is, as a society,
link |
when we want to look at existential risks, the existential risks we face that we can control
link |
almost all revolve around self replication. Yes. The question is, I don't see a good way to make
link |
a lot of money by engineering viruses and deploying them on the world. There could be,
link |
there could be applications that are useful. But let's separate things out. I mean,
link |
you don't need money. You only need some, you know, terrorists who want to do it, because it doesn't
link |
take a lot of money to make viruses. Let's just separate out what's risky and what's not risky.
link |
I'm arguing that the intelligence side of this equation is not risky. It's not risky at all.
link |
It's the self replication side of the equation that's risky. And I'm not dismissing that. I'm
link |
scared as hell. It's like the paperclip maximizer thing. Those are often like talked about in the
link |
same conversation. I think you're right. Like, creating ultra intelligent, super intelligent
link |
systems is not necessarily coupled with self replicating, arbitrarily self replicating
link |
systems. Yeah. And you don't get evolution unless you're self replicating. Yeah. And so I think
link |
that's just it: people have trouble separating those two out. They just think, oh,
link |
yeah, intelligence looks like us. And look at the damage we've done to this planet,
link |
like how we've, you know, destroyed all these other species. Yeah. Well, we replicate;
link |
there are 8 billion of us, or 7 billion of us, now. I think the idea is that the more intelligent
link |
the systems we're able to build, the more tempting it becomes, from a capitalist perspective of creating
link |
products, to create self reproducing systems.
link |
All right. So let's say that's true. So does that mean we don't build intelligent systems? No,
link |
that means we regulate, we understand the risks, we regulate them. You know, look, there's a lot
link |
of things we could do as society, which have some sort of financial benefit to someone,
link |
which could do a lot of harm. And we have to learn how to regulate those things.
link |
We have to learn how to deal with those things. I will argue this. I would say the opposite.
link |
Like I would say having intelligent machines at our disposal will actually help us in the end more
link |
because it'll help us understand these risks better, help us mitigate these risks better.
link |
There might be ways of saying, oh, well, how do we solve climate change problems? You know,
link |
how do we do this or how do we do that? Just like computers are dangerous in the hands of the
link |
wrong people but have been so great for so many other things, we live with those dangers.
link |
And I think we have to do the same with intelligent machines. But we have to be
link |
constantly vigilant about this idea of A, bad actors doing bad things with them and B,
link |
don't ever, ever create a self replicating system. And by the way, I don't even know
link |
if you could create a self replicating system that uses a factory, which is what's really dangerous.
link |
You know, nature's way of self replicating is so amazing.
link |
You know, it doesn't require anything. It just needs the thing and resources and it goes,
link |
right? Yeah. If I said to you, you know what, our goal is to build a factory
link |
that builds new factories, and it has to handle the end-to-end supply chain, it has to
link |
find the resources, get the energy. I mean, that's really hard. You know, no one's doing that in the
link |
next, you know, 100 years. I've been extremely impressed by the efforts of Elon Musk and Tesla
link |
to try to do exactly that. Well, not from raw resources. He actually, I think, states the goal is
link |
to go from raw resources to the final car in one factory. That's the main goal. Of course,
link |
it's not currently possible, but they're taking huge leaps. Well, he's not the only one to do that.
link |
This has been a goal for many industries for a long, long time.
link |
It's difficult to do. Well, what a lot of people do instead is they have, like,
link |
a million suppliers, and then they, like, manage everybody.
link |
They co-locate them and kind of tie the systems together.
link |
It's fundamentally a distributed system. I think that also is not getting at the issue
link |
I was just talking about, which is self replication. I mean, self replication means
link |
there's no entity involved other than the entity that's replicating.
link |
Right. And so if there are humans in the loop, that's not really self replicating, right?
link |
Unless somehow we're duped into it. But I also don't necessarily
link |
agree with you, because you've kind of mentioned that AI will not say no to us.
link |
I just think they will. Yeah. Yeah. So, like, I think it's a useful feature to build in.
link |
I'm just trying to, like, put myself in the mind of engineers, to sometimes say no.
link |
You know, I gave an example earlier, right? I gave the example of my car, right?
link |
My car turns the wheel and applies the accelerator and the brake as I say,
link |
until it decides there's something dangerous. Yes. And then it doesn't do that.
link |
Yeah. Now, that was something it didn't decide to do. It's something we programmed into the car.
link |
And so, good. It's a good idea, right? The question, again, isn't, like,
link |
will an intelligent system we create ever ignore our commands? Of course it will sometimes.
link |
Is it going to do it because it came up with its own goals that serve its purposes and it
link |
doesn't care about our purposes? No, I don't think that's going to happen.
link |
Okay. So let me ask you about these super intelligent cortical systems that we engineer
link |
and us humans. Do you think with these entities operating out there in the world,
link |
what does the most promising future look like? Is it us merging with them?
link |
Or is it... like, how do we keep us humans around when you have increasingly intelligent
link |
beings? One of the dreams is to upload our minds into the digital space. So can we just
link |
give our minds to these systems so they can operate on them? Is there some kind of more
link |
interesting merger or is there more? In the third part of my book, I talked about all these scenarios
link |
and let me just walk through them. Sure. The uploading the mind one. Yes.
link |
Extremely, really difficult to do. Like, we have no idea how to do this even remotely right now.
link |
So it would be a very long way away, but I make the argument you wouldn't like the result.
link |
And you wouldn't be pleased with the result. It's really not what you think it's going to be.
link |
Imagine I could upload your brain into a computer right now and now the computer's
link |
sitting there going, hey, I'm over here. Great. Get rid of that old bio person. I don't need
link |
them. But you're still sitting here. Yeah. What are you going to do? No, no, that's not me. I'm here.
link |
Right. Yeah. Are you going to feel satisfied then? But people imagine, look, I'm on my
link |
deathbed and I'm about to, you know, expire and I pushed the button and now I'm uploaded. But
link |
think about it a little differently. And so I don't think it's going to be a thing, because
link |
by the time we're able to do this, if ever, you have to replicate the entire body,
link |
not just the brain. I walk through the issues; they're really substantial.
link |
Do you have a sense of what makes us us? Is there a shortcut, where you can only save the certain
link |
part that makes us truly us? No, but I think that machine would feel like it's you too.
link |
Right. Right. It's like, I have a child, right? I have two daughters.
link |
They're independent people. I created them. Well, partly. Yeah. And
link |
just because they're somewhat like me, I don't feel like I'm them, and they don't feel
link |
like they're me. So if you split apart, you have two people. We can come back to
link |
what makes consciousness, if you want. We can talk about that, but we don't have a remote
link |
consciousness. I'm not sitting there going, oh, I'm conscious of that system, I'm in that system over there.
link |
So let's stay on our topic. So one was uploading a brain. Yeah.
link |
It ain't going to happen in a hundred years, maybe a thousand, but I don't think people are going to
link |
want to do it. Then there's merging your mind with, uh, you know, the Neuralink thing, right? Like,
link |
again, really, really difficult. It's one thing to make progress controlling a prosthetic
link |
arm. It's another to have, like, a billion or several billion, you know, things and understand what
link |
those signals mean. Like, it's one thing to say, okay, I can learn to think some patterns
link |
to make something happen. It's quite another thing to have a system, a computer, which actually
link |
knows exactly what cells it's talking to and how it's talking to them, and interacts in a way
link |
like that. Very, very difficult. We're not anywhere close to that.
link |
Interesting. Can I ask a question here? So for me, what makes that merger very difficult
link |
practically in the next 10, 20, 50 years is like literally the biology side of it, which is like,
link |
it's just hard to do that kind of surgery in a safe way. But your intuition is even the machine
link |
learning part of it, where the machine has to learn what the heck it's talking to. That's even
link |
hard. I think it's even harder. It's easy to do when you're talking about
link |
hundreds of signals. It's a totally different thing when you're talking about billions of signals.
link |
So you don't think it's a raw machine learning problem? You don't think it could be
link |
learned? Well, I'm just saying, no, I think you'd have to have detailed knowledge. You'd have to
link |
know exactly what types of neurons you're connecting to. I mean, in the brain, there are these
link |
neurons that do all different types of things. It's not like a neural network. It's a very
link |
complex organic system up here. We talked about the grid cells or the place cells, you know,
link |
you have to know what kind of cells you're talking to and what they're doing and how their
link |
timing works and all this stuff, which you can't do today. There's no way of doing that, right?
link |
But I think, you're right that the biological aspect,
link |
like who wants to have a surgery and have this stuff inserted in their brain, that's a problem.
link |
But say we solve that problem. I think the information coding aspect is much worse.
link |
I think that's much worse. It's not like what they're doing today. Today, it's simple machine
link |
learning stuff, because you're doing simple things. But if you want to merge your brain,
link |
like, I'm thinking on the internet, I've merged my brain with the machine and we're both doing...
link |
that's a totally different issue. That's interesting. I tend to think, okay,
link |
yeah, if you have a super clean signal from a bunch of neurons, even if at the start you don't know
link |
what those neurons are, I think that's much easier than getting the clean signal.
link |
I think if you think about today's machine learning, that's what you would conclude.
link |
I'm thinking about what's going on in the brain and I don't reach that conclusion. So we'll have
link |
to see. Sure. But even then, I think it's kind of a sad future.
link |
Like, you know, do I have to, like, plug my brain into a computer? I'm still a biological
link |
organism. I assume I'm still going to die. So what have I achieved? Right? You know,
link |
what have I achieved by doing some sort of...? Oh, I disagree. We don't know what those are, but it
link |
seems like there could be a lot of different applications. It's like virtual reality, it's
link |
to expand your brain's capability, to, like, read Wikipedia. Yeah, fine. But you're still
link |
a biological organism. Yes. Yes. You're still mortal. All right. So
link |
what are you accomplishing? You're making your life in this short period of time better,
link |
right? Just like having the internet made our lives better. Yeah. Yeah. Okay. So I think,
link |
if I think about all the possible gains we can have here, that's a marginal one. It's
link |
an individual, hey, I'm better, you know, I'm smarter. But mind you, I'm not against it.
link |
I just don't think it's earth changing. But the same is true of the internet.
link |
When each of us individuals is smarter, we get a chance to then share our smartness.
link |
We get smarter and smarter together, as, like, a collective. This is kind of like the
link |
ant colony idea. Why don't I just create an intelligent machine that doesn't have any of this biological
link |
nonsense? It does all the same things, everything, except don't burden it with my brain.
link |
Yeah. Right. It has a brain. It is smart. It's like my child, but it's much, much smarter than
link |
me. So I have a choice between doing some implant, doing some hybrid weird, you know,
link |
biological thing that's bleeding and all these problems and limited by my brain or creating
link |
a system which is super smart that I can talk to that helps me understand the world that can read
link |
the internet, you know, read Wikipedia and talk to me. I guess my, the open questions there are
link |
what does the manifestation of superintelligence look like? So like, what are we going to,
link |
you talked about, why do I want to merge with AI? Like, what, what's the actual marginal benefit
link |
here? If I, if we have a super intelligent system, yeah, how will it make our life better?
link |
So let's, let's, that's a great question, but let's break it into little pieces. All right.
link |
On the one hand, it can make our life better in lots of simple ways. You mentioned like a care robot
link |
or something that helps me do things, a cook's aide, I don't know what it does, right? Little things
link |
like that. We can have smarter cars. We can have, you know, better agents, aides helping
link |
us in our work environment and things like that. To me, that's like the easy stuff, the simple stuff
link |
in the beginning. And so in the same way that computers made our lives better in many,
link |
many ways, we will have those kinds of things. To me, the really exciting thing about AI is
link |
sort of its transcendent quality in terms of humanity. We're still biological
link |
organisms. We're still stuck here on earth. It's going to be hard for us to live anywhere else.
link |
I don't think you and I are going to want to live on Mars anytime soon. And, and we're flawed,
link |
you know, we may end up destroying ourselves. It's totally possible. If not completely,
link |
we could destroy our civilization. You know, let's just face the fact that we have issues here,
link |
but we can create intelligent machines that can help us in various ways. For example,
link |
one example I gave, and it sounds a little sci-fi, but I believe this: if we really want to
link |
live on Mars, we'd have to have intelligent systems that go there and build the habitat for us,
link |
not humans. Humans are never going to do this. It's just too hard. But could we have a thousand or
link |
10,000, you know, engineer workers up there doing this stuff, building things, terraforming Mars?
link |
Sure. Maybe then we can move to Mars. But if we want to, if we want to go around the universe,
link |
should I send my children around the universe? Or should I send some intelligent machine,
link |
which is like a child that represents me and understands our needs here on earth that could
link |
travel through space? So it sort of, in some sense, intelligence allows us to transcend the
link |
limitations of our biology. And don't think of it as a negative thing. It's in some sense,
link |
my children transcend my biology too, because they live beyond me. And they represent me,
link |
and they also have their own knowledge, and I can impart knowledge to them. So intelligent
link |
machines would be like that too, but not limited like us. But the question is, there's so many
link |
ways that transcendence can happen. And the merger with AI and humans is one of those ways. So you
link |
said intelligent, basically beings or systems propagating throughout the universe representing
link |
us humans. They represent us humans in the sense they represent our knowledge and our history,
link |
not us individually. Right, right. But I mean, the question is, is it just a database
link |
with a really damn good model of the world? No, no, they're conscious, just like us.
link |
Okay. But just different. They're different. Just like my children are different. They're like me,
link |
but they're different. These are more different. I guess maybe I've already, I kind of, I take
link |
a very broad view of our life here on Earth. I say, you know, why are we living here? Are we
link |
just living because we live? Are we surviving because we can survive? Are we fighting just
link |
because we want to just keep going? What's the point of it? Right? So to me, the point,
link |
if I ask myself what the point of life is, what transcends that ephemeral sort of biological
link |
experience, to me, and this is my answer, is the acquisition of knowledge, to understand more
link |
about the universe and to explore. And that's partly to learn more, right? I don't view it as
link |
a terrible thing if the ultimate outcome of humanity is we create systems that are intelligent,
link |
that are our offspring, but they're not like us at all. And we stay here and live on Earth as long
link |
as we can, which won't be forever, but as long as we can. And, but that would be a great thing
link |
to do. It's not, it's not like a negative thing. Well, would you be okay then if the human
link |
species vanishes, but our knowledge is preserved and keeps being expanded by intelligent systems?
link |
I want our knowledge to be preserved and expanded. Yeah. Am I okay with humans dying? No, I don't
link |
want that to happen. But if it does happen, what if we were sitting here and we were the
link |
last two people on Earth, saying, Lex, we blew it, it's all over, right? Wouldn't I feel
link |
better if I knew that our knowledge was preserved and that we had agents that knew about that,
link |
that were, you know, that left Earth? I would want that. It's better than not having that.
link |
You know, I make the analogy of like, you know, the dinosaurs, the poor dinosaurs, they live for,
link |
you know, tens of millions of years. They raised their kids. They, you know, they,
link |
they fought to survive. They were hungry. They, they did everything we do. And then they're all
link |
gone. Yeah. Like, you know, and, and if we didn't discover their bones, nobody would ever know that
link |
they ever existed, right? Do we want to be like that? I don't want to be like that. There's a sad
link |
aspect to it. And it kind of is jarring to think about that it's possible that a human-like intelligent
link |
civilization has previously existed on Earth. The reason I say this is, like, it is jarring to think
link |
that if they went extinct, we wouldn't be able to find evidence of them.
link |
After a sufficient amount of time. After a sufficient amount of time. Of course, there's like,
link |
like basically, if we humans, human civilization, destroyed ourselves now,
link |
then after a sufficient amount of time, we'd still find the evidence of the dinosaurs,
link |
but we would not find evidence of us humans. Yeah. That's kind of an odd thing to think about. Although
link |
I'm not sure if we have enough knowledge about species going back through billions of years,
link |
but we might be able to eliminate that possibility. But it's an interesting
link |
question. Of course, this is a similar question to whether there were lots of intelligent
link |
species throughout our galaxy that have all disappeared. Yeah. That's super sad, exactly that:
link |
there may have been much more intelligent alien civilizations in our galaxy that are no longer
link |
there. Yeah. You actually talked about this, that humans might destroy ourselves. Yeah. And how we
link |
might preserve our knowledge and advertise that knowledge to others. Advertise is a funny word
link |
to use. From a PR perspective. There's no financial gain in this.
link |
You know, like make it like from a tourism perspective, make it interesting. Can you
link |
describe how? Well, there's a couple things. I broke it down into two parts, actually three
link |
parts. One is, you know, there's a lot of things we know. What if we ended,
link |
what if our civilization collapsed? Yeah, I'm not talking tomorrow. Yeah, we could be a thousand
link |
years from now. Like, you know, we don't really know. But historically, it would be likely at
link |
some point. Time flies when you're having fun. Yeah. You know, what if intelligent
link |
life evolved again on this planet? Wouldn't they want to know a lot about us and what we knew?
link |
But they wouldn't be able to ask us questions. So one very simple thing I said,
link |
how would we archive what we know? That was a very simple idea. I said, you know what,
link |
it wouldn't be that hard to put a few satellites, you know, going around the sun and we upload
link |
Wikipedia every day and that kind of thing. So, you know, if we end up killing ourselves,
link |
well, it's up there and the next intelligent species will find it and learn something. They
link |
would like that. They would appreciate that. So that's one thing. The next thing I said, well,
link |
what about, you know, outside of our solar system? We have the SETI program. We're
link |
looking for these intelligent signals from everybody. And if you do a little bit of math,
link |
which I did in the book, and you say, well, what if technologically intelligent species only live
link |
for 10,000 years? Like, ones that are really able to do the
link |
things we're just starting to be able to do. Well, the chances are we wouldn't be able to see any of
link |
them, because they would have all disappeared by now. They've lived for 10,000 years and now
link |
they're gone. And so we're not going to find these signals being sent from these people. So
link |
I say, what kind of signal could you create that would last a million years or a billion years,
link |
so that someone would say, damn it, someone smart lived there? We'd know that. That would be a life
link |
changing event for us to figure that out. Well, what we're looking for today in the SETI program
link |
isn't that. We're looking for very coded signals in some sense. And so I asked myself, what would
link |
be a different type of signal one could create? I've always thought about this throughout my life
link |
and in the book I gave one possible suggestion, which was we now detect planets going around
link |
other suns, other stars, excuse me. And we do that by seeing this slight dimming of the light as
link |
the planets move in front of them. That's how we detect planets elsewhere in our galaxy.
link |
What if we created something like that that just rotated around the sun and it blocked out a little
link |
bit of light in a particular pattern, such that someone would say, hey, that's not a planet. That is a sign
link |
that someone was once there. You can say, what if it's beeping out pi, three point whatever.
link |
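The artificial-transit idea can be sketched in a few lines of Python. This is purely a hypothetical illustration, not anything from the book or from actual SETI practice: the names and the encoding scheme (one brightness sample per orbit, with the dip proportional to a digit of pi) are my own assumptions.

```python
# Hypothetical sketch: encode the digits of pi in transit depths.
# A planet dims its star by the same fraction on every orbit; an
# artificial occulter could vary the blocked fraction orbit by orbit
# to spell out a number no natural body would produce.

PI_DIGITS = [3, 1, 4, 1, 5, 9, 2, 6, 5, 3]  # leading digits of pi

def emit_light_curve(digits, baseline=1.0, step=0.001):
    """One brightness sample per orbit; the dip encodes a digit 0-9."""
    return [baseline - d * step for d in digits]

def decode_light_curve(samples, baseline=1.0, step=0.001):
    """Recover the digits from the observed dimming."""
    return [round((baseline - s) / step) for s in samples]

artificial = emit_light_curve(PI_DIGITS)
assert decode_light_curve(artificial) == PI_DIGITS

# A real planet would decode to a constant run like [2, 2, 2, ...];
# a varying, non-repeating pattern is the "someone was once here" flag.
planet = emit_light_curve([2] * 10)
assert len(set(decode_light_curve(planet))) == 1
```

The contrast in the two assertions is the point: a natural transit decodes to a single repeated value, while a varying, non-repeating pattern like pi's digits would be hard to explain as a planet.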
So it can be seen from a distance, it's broadly broadcast, and it takes no continued activation on our part. This
link |
is the key, right? No one has to be there running a computer and supplying it with power.
link |
It just goes on. Once we go, it's continuous. And I argued that part of the SETI program
link |
should be looking for signals like that. And to look for signals like that, you ought to figure
link |
out how would we create a signal? Like, what would we create that would be like that, that would
link |
persist for millions of years, that would be broadcast broadly, that you could see from a
link |
distance, that unequivocally came from an intelligent species. And so I gave that one
link |
example because I don't know of another one, actually. And then finally, right,
link |
if, if our, ultimately our solar system will die at some point in time, you know, how do we go
link |
beyond that? And I think, if it's at all possible, we'll have to create intelligent machines
link |
that travel throughout the solar system or throughout the galaxy. And I don't think that's
link |
going to be humans. I don't think it's going to be biological organisms. So these are just
link |
things to think about, you know. I don't want to be like the dinosaurs.
link |
I don't want to just live in, okay, that was it. We're done, you know. Well, there is a kind of
link |
presumption that we're going to live forever. I think it is a bit sad to imagine that the
link |
message we send, as we talked about, is that we were once here instead of we are here. Well,
link |
it could be we are still here. But it's more of a, it's more of an insurance policy in case we're
link |
not here, you know? Well, I don't know, but there's something I think about, we humans don't often
link |
think about this, but it's like, like, whenever I record a video, I've done this a couple of times
link |
in my life, I've recorded a video for my future self, just for personal, just for fun. And it's
link |
always just fascinating to think about that, preserving yourself for future civilizations. For
link |
me, it was preserving myself for a future me, but that's a little, that's a little fun example
link |
of archival. Well, these podcasts are preserving you and me in a way, for the future, hopefully well
link |
after we're gone. But you don't often, we're sitting here talking about this. You are not
link |
thinking about the fact that you and I are going to die, and there'll be like 10 years after somebody
link |
watching this, and we're still alive. You know, in some sense, I do. I'm here because I want to
link |
talk about ideas. And these ideas transcend me, and they transcend this time and on our planet.
link |
We're talking here about ideas that could be around a thousand years from now or a million years
link |
from now. When I wrote my book, I had an audience in mind, and one of the clearest audiences was
link |
aliens. No, it was people reading this a hundred years from now. Yes. I said to myself, how do I
link |
make this book relevant to somebody reading this a hundred years from now? What would they want to
link |
know that we were thinking back then? What would make it still
link |
an interesting book? I'm not sure I can achieve that, but that was how I thought about it because
link |
these ideas, especially in the third part of the book, the ones we were just talking about,
link |
you know, these crazy, it sounds like crazy ideas about, you know, storing our knowledge and,
link |
and, you know, merging our brains with computers and sending, you know, our machines out into space,
link |
are not going to happen in my lifetime. And they may not happen in the next
link |
hundred years. They may not happen for a thousand years. Who knows? But we have the unique opportunity
link |
right now, we, you, me, and other people like this, to sort of at least propose the agenda
link |
that might impact the future like that. It's a fascinating way to think, both like writing or
link |
creating, try to make, try to create ideas, try to create things that hold up in time. Yeah. You
link |
know, understanding how the brain works, we're going to figure that out once. That's it. It's
link |
going to be figured out once. And after that, that's the answer. And people will, people will study
link |
that thousands of years from now. We still, you know, venerate Newton and Einstein,
link |
and, you know, because, because ideas are exciting even well into the future. Well, the interesting
link |
thing is like big ideas, even if they're wrong, are still useful. Like, yeah, especially if they're
link |
not completely wrong. Like you're right, right, right. Right. Newton's laws are not wrong. It's
link |
just that Einstein's are better. So, um, let's see. Yeah. I mean, but we're talking with Newton and
link |
Einstein. We're talking about physics. I wonder if we'll ever achieve that kind of clarity about
link |
understanding, um, like complex systems and the, this particular manifestation of complex systems,
link |
which is the human brain. I'm, I'm totally optimistic we can do that. I mean, we're making
link |
progress at it. I don't see any reason why we can't completely, I mean, completely understand in the
link |
sense, um, you know, we don't really completely understand what all the molecules in this water
link |
bottle are doing, but, you know, we have laws that sort of capture it pretty good. Um, and, uh,
link |
so we'll have that kind of understanding. I mean, it's not like you're going to have to know what
link |
every neuron in your brain is doing. Um, but enough to, um, first of all, to build it and second of
link |
all, to do, you know, do what physics does, which is like have concrete experiments where we can
link |
validate it. This is happening right now. Like, this is not some future
link |
thing. Um, you know, I'm very optimistic about, I'm, I know about our, our work and what we're
link |
doing. We'll have to prove it to people. Um, but, um, I consider myself a rational person and, um,
link |
you know, until fairly recently, I wouldn't have said that, but right now I'm, where I'm sitting
link |
right now, I'm saying, you know, we, this is going to happen. There's, there's no big obstacles to
link |
it. Um, we finally have a framework for understanding what's going on in the cortex and, um, and
link |
that's liberating. It's, it's like, oh, it's happening. So I can't see why we wouldn't be able
link |
to understand it. I just can't. Okay. Oh, so, I mean, on that topic, let me ask you to play devil's
link |
advocate. Is it possible for you to imagine, look a hundred years from now, looking at your
link |
book, uh, in which ways might your ideas be wrong? Oh, I worry about this all the time. Um,
link |
yeah, it's still useful. Yeah. Yeah.
link |
Um, I think there's, you know, um, well, I can, I can best relate it to like things I'm worried
link |
about right now. So we talk about this voting idea, right? It's happening. There's, there's no
link |
question it's happening, but it could be far more, um, uh, there's, there's enough things I
link |
don't know about it that it might be working differently than I'm thinking about:
link |
the kind of, what's voting, who's voting, you know, where the representations are. I talked about,
link |
like you have a thousand models of a coffee cup like that. That could turn out to be wrong, um,
link |
because maybe there are a thousand models that are submodels, but not really a
link |
single model of the coffee cup. Um, I mean, there's things that these are all sort of on the edges,
link |
things that I present as, like, oh, it's so simple and clean. Well, it's not like that. It's always going
link |
to be more complex. And, um, and there's parts of the theory, which I don't understand the
link |
complexity well. So I think, I think the idea that the brain is a distributed modeling system is
link |
not controversial at all, right? That's not, that's well understood by many people. The question then
link |
is, is each cortical column an independent modeling system? Right. Um, I could be wrong about
link |
that. Um, I don't think so, but I worry about it. My intuition, not even thinking why you could be
link |
wrong, is the same intuition I have about any sort of physics theory, uh, like string theory: that we,
link |
as humans, desire a clean explanation. And, uh, a hundred years from now, uh, intelligent systems
link |
might look back at us and laugh at how we tried to get rid of the whole mess by having a simple
link |
explanation, when the reality is, it's, it's way messier. And in fact, it's impossible to understand;
link |
you can only build it. It's like this idea of complex systems and cellular automata,
link |
you can only launch the thing, you cannot understand it. Yeah. I think that, you know,
link |
the history of science suggests that's not likely to occur. Um, the history of science suggests that
link |
like as a theorist and we're theorists, you look for simple explanations, right? Fully knowing
link |
that whatever simple explanation you're going to come up with is not going to be completely correct.
link |
I mean, it can't be. I mean, there's just, there's just more complexity. But that's the role theorists
link |
play. They, they sort of, they give you a framework on which you can now talk about a problem and
link |
figure out, okay, now we can start digging into more details. The best frameworks stick around while
link |
the details change. You know, again, you know, the classic example is Newton and Einstein, right?
link |
You know, um, Newton's theories are still used. They're still valuable. They're still practical.
link |
They're not, like, wrong. It's just that they've been refined. Yeah. But that's in physics. It's not obvious,
link |
by the way, it's not obvious for physics either that the universe should be such that it's amenable
link |
to these simple theories, but so far it appears to be, as far as we can tell. Um, yeah. I mean, but
link |
as far as we could tell, and, but it's also an open question whether the brain is amenable to
link |
such clean theories. That's the brain, but intelligence. Well, I, I, I don't know. I would
link |
take intelligence out of it. Just say, you know, um, well, okay. Um, the evidence we have suggests
link |
that the human brain is, is, at one and the same time, extremely messy and complex, but there's
link |
some parts that are very regular and structured. That's why we started with the neocortex. It's
link |
extremely regular in its structure. Yeah. And unbelievably so. And then I mentioned earlier,
link |
the other thing is its, its universal abilities. It is so flexible, it learns so many things. We
link |
don't, we haven't figured out what it can't learn yet. We don't know, we haven't figured that out
link |
yet, but it learns things that it never evolved to learn. So those give us hope. Um, that's why
link |
I went into this field because I said, you know, this regular structure, it's doing this amazing
link |
number of things. There's got to be some underlying principles that are, that are common
link |
and other, other scientists have come up with the same conclusions. Um, and so it's promising.
link |
It's promising. And, um, and that's, and whether the theories play out exactly this way or not,
link |
that is the role that theorists play. And so far it's worked out well, even though, you know,
link |
maybe, you know, we don't understand all the laws of physics, but so far it's been pretty damn
link |
useful. The ones we have are, our theories are pretty useful. You mentioned that, uh, we should
link |
not necessarily be, at least to the degree that we are, worried about the existential risks of
link |
artificial intelligence, relative to, uh, the risks from human nature being an existential risk.
link |
What aspect of human nature worries you the most in terms of the survival of the human species?
link |
I mean, I'm disappointed in humanity, in humans. I mean, all of us. I'm, I'm one, so I'm disappointed in
link |
myself too. Um, it's kind of a sad state. There's, there's two things that disappoint me. One is
link |
how difficult it is for us to separate the rational component of ourselves from our evolutionary
link |
heritage, which is, you know, not always pretty. You know, rape is a, is an evolutionary good
link |
strategy for reproduction. Murder can be at times too. You know, making other people miserable
link |
at times is a good strategy for reproduction. And, and so now that we know
link |
that, and yet we have this sort of, you know, we, you and I can have this very rational discussion
link |
talking about, you know, intelligence and brains and life and so on. But it seems like it's so hard.
link |
It's just a big transition to get humans, all humans, to, to make the transition from, like,
link |
let's pay no attention to all that ugly stuff over here, let's just focus on the
link |
intellect. What's unique about humanity is our knowledge and our intellect.
link |
But the fact that we're striving is in itself amazing, right? The fact that we're able to
link |
overcome that part and it seems like we are more and more becoming successful at overcoming that
link |
part. That is the optimistic view and I agree with you. Yeah. But I worry about it. I'm not saying,
link |
I'm worrying about it. I think that was your question. I still worry about it. Yes. You know,
link |
we could be ended tomorrow because some terrorists could get nuclear bombs and, you know,
link |
blow us all up. Who knows, right? The other thing I think I'm disappointed is, and it's just,
link |
I understand it. It's, I guess you can't really be disappointed. It's just a fact,
link |
is that we're so prone to false beliefs. We, you know, we have a model in our head,
link |
of the things we can interact with directly, physical objects, people. That model is pretty good.
link |
And we can test it all the time, right? I touch something, I look at it, talk to you, see
link |
my model is correct. But so much of what we know is stuff I can't directly interact with.
link |
I only know about it because someone told me about it. And so, so we're prone, inherently prone to
link |
having false beliefs because if I'm told something, how am I going to know it's right or wrong, right?
link |
And so then we have the scientific process, which says we are inherently flawed. So the only way we
link |
can get closer to the truth is by looking for contrary evidence. Yeah. Like this conspiracy
link |
theory, this, this theory that scientists keep telling me about that the earth is round.
link |
As far as I can tell, when I look out, it looks pretty flat. Yeah. So yeah, there's, there's
link |
a tension. But it's also, I tend to believe that we haven't figured out most of this thing, right?
link |
Most of nature around us is a mystery. Does that worry you?
link |
I mean, it's like, oh, that's, that's more pleasure, more to figure out, right? Yeah,
link |
that's exciting. But I'm saying like, there's going to be a lot of quote unquote, wrong ideas.
link |
I mean, I've been thinking a lot about engineering systems like social networks and so on. And I've
link |
been worried about censorship and thinking through all that kind of stuff because there's a lot of
link |
wrong ideas. There's a lot of dangerous ideas, but then I also read a history, read history and see
link |
when you censor ideas that are wrong. Now, this could be a small scale censorship, like a young
link |
grad student who comes up, who, like, raises their hand and says some crazy idea. It's a form of
link |
censorship, maybe, I shouldn't use the word censorship, but, like, disincentivizing them:
link |
no, no, no, no, this is the way it's been done. Yeah, you foolish kid, don't think that. Yeah,
link |
you're foolish. So in some sense, those wrong ideas most of the time end up being wrong,
link |
but sometimes end up being right. I agree with you. So I don't like the word censorship.
link |
At the very end of the book, I ended up with a sort of a plea, or a recommended course of action.
link |
And the best way I know how to deal with this issue that you bring up
link |
is if everybody understood, as part of your upbringing in life, something about how your brain
link |
works: that it builds a model of the world, the basics of how it builds that model of the
link |
world, and that the model is not the real world. It's just a model. And it's never going to reflect
link |
the entire world, and it can be wrong, and it's easy to be wrong. And here's all the ways you
link |
can get the wrong model in your head, right? It's not to prescribe what's right or wrong,
link |
it's just to understand that process. If we all understood the process, and then I got together
link |
and you said, I disagree with you, Jeff, and I said, Lex, I disagree with you, that at least we
link |
understand that we're both trying to model something. We both have different information
link |
which leads to our different models. And therefore, I shouldn't hold it against you,
link |
and you shouldn't hold it against me. And we can at least agree that, well, what can we look for
link |
that's common ground to test our beliefs? As opposed to, so much of the time we raise our kids on dogma,
link |
which is this is a fact, and this is a fact, and these people are bad. And if everyone knew just
link |
to be skeptical of every belief and why and how their brains do that, I think we might have a
link |
better world. Do you think the human mind is able to comprehend reality? So you talk about
link |
this creating models that are better and better. How close do you think we get to reality? So
link |
one of the wildest ideas is, like, Donald Hoffman saying we're very far away from reality. Do you think
link |
we're getting close to reality? Well, I guess it depends on how you define reality. We have a
link |
model of the world that's very useful for basic goals of survival. Well, for our survival and
link |
our pleasure, right? So that's useful. I mean, really useful. Oh, we can build planes,
link |
we can build computers, we can do these things. I don't think, I don't know the answer to that
link |
question. I think that's part of the question we're trying to figure out. Obviously, if we
link |
end up with a theory of everything that really is a theory of everything, and all of a sudden,
link |
everything comes into play and there's no room for something else, then you might feel like we
link |
have a good model of the world. Yeah, but if we have a theory of everything and somehow, first of
link |
all, you'll never be able to really conclusively say it's a theory of everything, but say somehow
link |
we are very damn sure it's a theory of everything. We understand what happened at the Big Bang and how
link |
just the entirety of the physical process. I'm still not sure that gives us an understanding of
link |
the next many layers of the hierarchy of abstractions that form. Well, also, what if string
link |
theory turns out to be true? And then you say, well, we have no real way of modeling what's
link |
going on in those other dimensions that are wrapped into it on each other, right? Or the multiverse,
link |
you know? I honestly don't know how for us, for human interaction, for ideas of intelligence,
link |
how it helps us to understand that we're made up of vibrating strings that are
link |
like 10 to the whatever times smaller than us. You could probably build better
link |
weapons or better rockets, but you're not going to be able to understand intelligence.
link |
I guess maybe better computers. No, you won't be able to. I think it's just more purely knowledge.
link |
You might lead to a better understanding of the beginning of the universe,
link |
right? It might lead to a better understanding of, I don't know. I think the acquisition of
link |
knowledge has always been one where you pursue it for its own pleasure and you don't always know
link |
what is going to make a difference. You're pleasantly surprised by the weird things you find.
link |
Do you think, for the neocortex in general, do you think there's a lot of innovation
link |
to be done on the machine side? You use the computer as a metaphor quite a bit. Is there
link |
different types of computers that would help us build intelligence? What are the physical
link |
manifestations of intelligent machines? Oh, no, it's going to be totally crazy. We have no idea
link |
how this is going to play out yet. You can already see this. Today, of course, we model these things
link |
on traditional computers, and now GPUs are really popular with neural networks and so on.
link |
But there are companies coming up with fundamentally new physical substrates
link |
that are just really cool. I don't know if they're going to work or not,
link |
but I think there'll be decades of innovation here, totally.
link |
Do you think the final thing will be messy, like our biology is messy? Or do you think
link |
it's the old bird versus airplane question? Or do you think we could just
link |
build airplanes that fly way better than birds, in the same way we can build
link |
an electronic neocortex? Can I riff on the bird thing a bit? Because I think it's
link |
interesting. People really misunderstand this. The Wright brothers, the problem they were trying
link |
to solve was controlled flight, how to turn an airplane, not how to propel an airplane.
link |
They weren't worried about that. They already had, at that time, there was already wing shapes,
link |
which they had from studying birds. There was already gliders that carried people.
link |
The problem was, if you put a rudder on the back of a glider and you turn it, the plane falls out
link |
of the sky. The problem was, how do you control flight? They studied birds. They actually had
link |
birds in captivity. They watched birds in wind tunnels. They observed them in the wild. They
link |
discovered the secret was the birds twist their wings when they turn. That's what they did on
link |
the Wright Flyer. They had these sticks that you would use to twist the wing. That was their
link |
innovation, not the propeller. Today, airplanes still twist their wings. We don't twist the entire
link |
wing, we just twist the tail end of it, the flaps, which is the same thing. Today's airplanes
link |
fly on the same principles as birds, which we observed. Everyone gets that analogy wrong.
link |
Let's step back from that. Once you understand the principles of flight,
link |
you can choose how to implement them. No one's going to use bones and feathers and muscles,
link |
but they do have wings. We don't flap them. We have propellers. When we have the principles
link |
of computation that goes on to modeling the world in a brain, we understand those principles
link |
very clearly. We have choices on how to implement them. Some of them will be biological-like and some
link |
won't. I do think there's going to be a huge amount of innovation here. Just think about
link |
the innovation that went into the computer: they had to invent the transistor, the silicon
link |
chip, software, memory systems. Those are the things they had to do.
link |
It's going to be similar. It's interesting that the effectiveness of deep learning for
link |
specific tasks is driving a lot of innovation in the hardware, which may have the effect
link |
of actually allowing us to discover intelligent systems that operate very differently, or are
link |
much bigger than deep learning. Ultimately, it's good to have an application that's making our
link |
life better now because the capitalist process, if you can make money, that works. The other way,
link |
Neil deGrasse Tyson writes about this, is the other way we fund science, of course, is through
link |
military conquest. There's an interesting thing that we're doing in this regard. We have
link |
a series of biological principles. We can see how to build these intelligent machines,
link |
but we've decided to apply some of these principles to today's machine learning techniques. One
link |
principle we didn't talk about is sparsity in the brain. Most of the neurons
link |
are inactive at any point in time. It's sparse and the connectivity is sparse. That's different
link |
than deep learning networks. We've already shown that we can speed up existing deep learning
link |
networks anywhere from a factor of 10 to a factor of 100, literally 100, and make them more robust at the
link |
same time. This is commercially very, very valuable. If we can prove this actually in the
link |
largest systems that are commercially applied today, there's a big commercial desire to do this.
link |
Now, sparsity is something that doesn't run really well on existing hardware. It doesn't
link |
run well on GPUs or on CPUs. That would be a way of bringing more brain
link |
principles into the existing systems on a commercially valuable basis. Another thing we
link |
think we can do is use these dendrites. I talked earlier about the prediction
link |
occurring inside the neuron. That basic property can be applied to existing neural networks
link |
and allow them to learn continuously, which is something they don't do today.
link |
The dendritic spikes that you were talking about.
link |
Yeah. Well, we wouldn't model the spikes themselves, but the idea is that today's neural networks
link |
use point neurons, which is a very simple model of a neuron. By adding dendrites to them,
link |
just one more level of complexity that's in biological systems, you can solve problems
link |
in continuous learning and rapid learning. We'll see if we can do it,
link |
but we're trying to bring the existing field of machine learning
link |
commercially along with us. You brought up this idea of having the work pay for itself commercially
link |
as we move towards the ultimate goal of a true AI system. Even small innovations on
link |
neural networks are really, really exciting. It seems like such a trivial model of the brain,
link |
and applying different insights, even just, like you said, continuous learning, or making it
link |
more asynchronous, or maybe making it more dynamic, or incentivizing sparsity somehow. Or more robust.
link |
Even just making it more robust could make it somehow much better.
link |
Yeah. Well, if you can make things 100 times faster, then there's plenty of incentive.
link |
People are spending millions of dollars just training some of these networks now,
link |
these transformer networks.
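The sparsity principle Jeff describes above can be sketched in code. Below is a minimal, illustrative k-winners-take-all layer in NumPy; it is a toy sketch of the general idea, not Numenta's actual implementation, and the function name and the choice of k are my own assumptions:

```python
import numpy as np

def k_winners_take_all(activations, k):
    """Keep only the k largest activations and zero out the rest.

    This mimics the brain-inspired sparsity described above: most
    units are silent at any moment, so downstream computation only
    needs to touch the few active ones.
    """
    if k >= activations.size:
        return activations.copy()
    # The k-th largest value becomes the activity threshold.
    threshold = np.sort(activations)[-k]
    # Zero every activation below the threshold.
    return np.where(activations >= threshold, activations, 0.0)

rng = np.random.default_rng(0)
dense = rng.normal(size=1000)           # a dense layer output
sparse = k_winners_take_all(dense, 20)  # ~2% of units stay active

print(np.count_nonzero(sparse))  # 20
```

With 20 of 1,000 units active, a downstream matrix multiply only has to touch 2% of its inputs; skipping the rest of the work is where speedups of the kind Jeff mentions would come from, on hardware that can actually exploit sparsity.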
link |
Let me ask you a big question. For young people listening to this today in high school and college,
link |
what advice would you give them in terms of which career path to take and maybe just about life in
link |
general? Well, in my case, I didn't start life with any kind of goals. I was, when I was going to
link |
college, I was like, oh, what do I study? Well, maybe I'll do electrical engineering stuff.
link |
I wasn't like that. Today you see some of these young kids who are so motivated to change the world. I was like,
link |
hey, whatever. But then I did fall in love with something, besides my wife. I fell in love
link |
with this idea: oh my God, it would be so cool to understand how the brain works.
link |
And then I said to myself, that's the most important thing I could work on. I can't
link |
imagine anything more important, because if you understand how the brain works,
link |
you can build intelligent machines, and they could figure out all the other big questions in the world.
link |
So, and then I said, I want to understand how I work. So I fell in love with this idea and I
link |
became passionate about it. And this is a trope, people say this, but it's true.
link |
Because I was passionate about it, I was able to put up with so much crap.
link |
You know, people said,
link |
you can't do this. I was a graduate student at Berkeley when they said,
link |
you can't study this problem. You know, no one can solve this, or you can't get funded for it.
link |
You know, then I went into, you know, mobile computing, and there were people saying,
link |
you can't do that. You can't build a cell phone. You know, so, but all along I kept being motivated
link |
because I wanted to work on this problem. I said, I want to understand how the brain works.
link |
I said to myself, you know, I've got one lifetime, I'm going to figure it out, do the best I can.
link |
So by having that, because you know, these things, it's really, as you point out, Lex,
link |
it's really hard to do these things. People, it's just, there's so many downers along the way.
link |
So many obstacles getting in your way. Yeah, I'm sitting here happy all the time,
link |
but trust me, it's not always like that.
link |
I guess the passion is a prerequisite for surviving the whole thing.
link |
Yeah, I think so. I think that's right. And so I don't want to sit here and tell someone, you know,
link |
you need to find a passion and do it. No, maybe you don't. But if you do find something you're
link |
passionate about, then, then you can follow it as far as your passion will let you put up with it.
link |
Do you remember how you found it, how the spark happened?
link |
Why, specifically for me?
link |
Yeah, because, like you said, it's so interesting, it was almost later in life,
link |
by later, I mean, not when you were five, you didn't really know. And then all of a
link |
sudden you fell in love with it. Yeah, yeah. There were two separate events that
link |
compounded one another. One, when I was probably a teenager, it might have been 17 or 18,
link |
I made a list of the most interesting problems I could think of. First was, why does the universe
link |
exist? It seems like not existing is more likely. Yeah. The second one was, well, given that it exists,
link |
why does it behave the way it does? You know, the laws of physics, why is E equal to mc squared,
link |
not mc cubed? You know, that's an interesting question. I don't know. The third one was,
link |
what's the origin of life? And the fourth one was what's intelligence? And I stopped there.
link |
I said, well, that's probably the most interesting one. And I put that aside
link |
as a teenager. But then when I was 22, it was 1979, I was reading
link |
the September issue of Scientific American, which was all about the brain. And then the
link |
final essay was by Francis Crick, of DNA fame, who had turned his interest
link |
to studying the brain. And he said, you know, there's something wrong here. He says,
link |
we've got all this data, this is 1979, all these facts about the brain,
link |
tons and tons of facts about the brain. Do we need more facts? Or do we just need to think
link |
about a way of rearranging the facts we have? Maybe we're just not thinking about the problem
link |
correctly. You know, he says, it shouldn't be like this, you know?
link |
So I read that and I said, wow. I said, I don't have to become like an experimental
link |
neuroscientist. I could just look at all those facts and try to become a theoretician and try
link |
to figure it out. And I felt like it was something I would be good at. I said,
link |
I wouldn't be a good experimentalist. I don't have the patience for it. But I'm a good thinker
link |
and I love puzzles. And this is like the biggest puzzle in the world. This is the biggest puzzle
link |
of all time. And I got all the puzzle pieces in front of me. Damn, that was exciting.
link |
And there's something obviously you can't convert into words. It just kind of
link |
sparked this passion. And I've had that a few times in my life, just something that,
link |
just like you said, grabs you. Yeah. I thought it was something that was both important
link |
and that I could make a contribution to. And so all of a sudden it felt like, oh, it gave me purpose
link |
in life, you know? I honestly don't think it has to be as big as one of those four questions.
link |
You can find those things in the smallest of things. Oh, absolutely. David Foster Wallace said,
link |
like, the key to life is to be unborable. I think it's very possible to find that
link |
intensity of joy in the smallest thing. Absolutely. I'm just, you asked me my story.
link |
Yeah. No, but I'm actually speaking to the audience. It doesn't have to be those four.
link |
You happen to get excited by one of the bigger questions in the universe. But
link |
even the smallest things. Watching the Olympics now, just the idea of giving your life
link |
over to the study and the mastery of a particular sport is fascinating.
link |
And if it sparks joy and passion, you're able to, in the case of the Olympics,
link |
basically suffer for like a couple of decades to achieve it. I mean, you can find joy and passion
link |
just being a parent. I mean, yeah, the parenting one is funny. So not always, but for a long
link |
time, I've wanted kids and to get married and stuff. And especially it has to do with the fact that
link |
I've seen a lot of people that I respect get a whole other level of joy from kids. And
link |
you know, at first you're thinking, well, like, I don't have enough time in the day,
link |
right? If I have this passion to solve intelligence. Which is true.
link |
But like, if I want to solve intelligence, how's this kid situation going to help me?
link |
But then you realize that, you know, like you said, it's the things that spark joy, and it's very
link |
possible that kids can provide even a greater or deeper, more meaningful joy than those bigger
link |
questions, and that they enrich each other. Obviously, when I was younger,
link |
that was probably a counterintuitive notion, because there's only so many hours in the day. But then
link |
life is finite and you have to pick the things that give you joy.
link |
But you also understand you can be patient too. I mean, it's finite, but we do have, you know,
link |
whatever, 50 years or something. Yeah. So in my case, you know,
link |
I had to give up on my dream of neuroscience, because I was a graduate student at Berkeley
link |
and they told me I couldn't do this and I couldn't get funded. And, you know,
link |
so I went back into the computing industry for a number of years. I
link |
thought it would be four, but it turned out to be more. But I said, I'll come back.
link |
You know, I definitely, I'm definitely going to come back. I know I'm going to do this computer
link |
stuff for a while, but I'm definitely coming back. Everyone knew that. And it's the same
link |
as raising kids. Well, yeah, you still have to spend a lot of time with your kids. It's fun,
link |
enjoyable. But that doesn't mean you have to give up on other dreams. It just means that you have
link |
to wait a week or two to work on that next idea. You talk about the darker,
link |
disappointing sides of human nature that we're hoping to overcome so that we don't destroy
link |
ourselves. I tend to put a lot of value in the broad general concept of love, of the human capacity
link |
for compassion towards each other, of just kindness, that longing for just
link |
human-to-human connection. It connects back to our initial discussion. I tend to see a lot
link |
of value in this collective intelligence aspect. I think some of the magic of human civilization
link |
happens when there's a collective; a party is not as fun when you're alone. I totally agree with you on these
link |
issues. Do you think, from a neocortex perspective, what role does love play in the
link |
human condition? Well, those are two separate things. I don't think it
link |
impacts our thinking about the neocortex. From a human condition point of view,
link |
I think it's core. I mean, we get so much pleasure out of loving people and helping people.
link |
I'll chalk it up to old brain stuff, and maybe we can throw it under the bus of evolution,
link |
if you want. That's fine. It doesn't impact how we think about how we model the world,
link |
but from a humanity point of view, I think it's essential.
link |
Well, I tend to give it to the new brain, and also I tend to think some aspects of that
link |
need to be engineered into AI systems, both in their ability to have compassion for other humans
link |
and their ability to maximize love in the world between humans. I'm thinking more about social
link |
networks, wherever there's a deep integration between AI systems and humans, specific
link |
applications where it's AI and humans together. I think that's something that's often not talked about in
link |
terms of the metrics you try to maximize in a system. It seems
link |
like one of the most powerful things in societies is the capacity to love.
link |
It's a great way of thinking about it. I have been thinking more of these fundamental
link |
mechanisms in the brain as opposed to the social interaction between humans and AI systems in
link |
the future. If you think about that, you're absolutely right, but that's a complex system.
link |
I can have intelligent systems that don't have that component, but they're not interacting
link |
with people. They're just running something or building a building someplace or something,
link |
I don't know. If you think about interacting with humans, it has to be engineered in there.
link |
I don't think it's going to appear on its own. That's a good question.
link |
In terms of, from a reinforcement learning perspective, whether the darker
link |
sides of human nature or the better angels of our nature win out, statistically speaking,
link |
I don't know. I tend to be optimistic and hope that love wins out in the end.
link |
You've done a lot of incredible stuff. Your book is driving towards this fourth question
link |
that you started with, on the nature of intelligence. What do you hope your legacy will be
link |
for people reading it 100 years from now? How do you hope they remember your work?
link |
How do you hope they remember this book? Well, I think as an entrepreneur or a scientist or
link |
any human who's trying to accomplish some things, I have a view that really all you can do is
link |
accelerate the inevitable. It's like, if we didn't figure out, if we didn't study the brain,
link |
someone else would study the brain. If Elon Musk just didn't make electric cars, someone else would do
link |
it eventually. If Thomas Edison hadn't invented the light bulb, it's not like we'd be using candles today.
link |
What you can do as an individual is you can accelerate something that's beneficial and make
link |
it happen sooner than it otherwise would. That's really it. That's all you can do. You can't create a new
link |
reality that wasn't going to happen anyway. From that perspective, I would hope that our work,
link |
not just me, but our work in general, people would look back on and say, hey, they really helped make
link |
this better future happen sooner. They helped us understand the nature of false beliefs sooner
link |
than we otherwise would have. Now we're so happy that we have these intelligent machines doing these things,
link |
helping us, that maybe they solved the climate change problem, and they made it happen sooner.
link |
I think that's the best I would hope for. Some would say, those guys just moved the needle forward
link |
a little bit in time. Well, it feels like the progress of human civilization is not a single line;
link |
there are a lot of trajectories. If you have individuals that accelerate towards one direction,
link |
that helps steer human civilization. I think in the long stretch of time, all trajectories will
link |
be traveled, but I think it's nice for this particular civilization on Earth to travel down
link |
one that's not destructive. Yeah. Well, I think you're right. Take the whole period of World War II
link |
and Nazism or something like that. Well, that was a bad sidestep. We were on that for a
link |
while, but there is the optimistic view about life that ultimately it does converge in a positive
link |
way. It progresses ultimately, even if we have years of darkness. So I think you could say, perhaps,
link |
that's accelerating the positive. It could also mean eliminating some bad missteps along the way,
link |
too. But I'm an optimist in that way. Even though we talked about the end of civilization,
link |
I think we're going to live for a long time. I hope we are. I think our society in the future
link |
is going to be better. We're going to have less discord. We're going to have less people killing
link |
each other. We'll manage to live in some way that's compatible with the carrying capacity of the
link |
earth. I'm optimistic these things will happen. All we can do is try to get there sooner.
link |
At the very least, if we do destroy ourselves, we'll have a few satellites that will tell alien
link |
civilizations that we were once here. Or maybe the future inhabitants of Earth. Imagine the
link |
Planet of the Apes scenario. We kill ourselves, and a million years from now or a billion years from
link |
now, there's another species on the planet, curious creatures, who discover we were once here.
link |
Jeff, thank you so much for your work and thank you so much for talking to me once again.
link |
Well, it's great. I love what you do. I love your podcast. You have these interesting people on,
link |
me aside. It's a real service I think you do, in a broader sense, for humanity.
link |
Thanks, Jeff. All right. It's a pleasure. Thanks for listening to this conversation with Jeff
link |
Hawkins. And thank you to Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist.
link |
Check them out in the description to support this podcast. And now let me leave you with some words
link |
from Albert Camus. An intellectual is someone whose mind watches itself. I like this because I'm
link |
happy to be both halves, the watcher and the watched. Can they be brought together? This
link |
is a practical question we must try to answer. Thank you for listening. I hope to see you next time.