
Jeff Hawkins: The Thousand Brains Theory of Intelligence | Lex Fridman Podcast #208



link |
00:00:00.000
The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand
link |
00:00:05.440
the structure, function, and the origin of intelligence in the human brain.
link |
00:00:10.080
He previously wrote a seminal book on the subject titled On Intelligence, and recently
link |
00:00:15.760
a new book called A Thousand Brains, which presents a new theory of intelligence
link |
00:00:21.120
that Richard Dawkins, for example, has been raving about, calling the book, quote,
link |
00:00:26.800
brilliant and exhilarating. I can't read those two words and not think of him saying it in his
link |
00:00:33.280
British accent. Quick mention of our sponsors, Codecademy, BiOptimizers, ExpressVPN,
link |
00:00:40.400
Eight Sleep, and Blinkist. Check them out in the description to support this podcast.
link |
00:00:45.200
As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions in his
link |
00:00:50.320
new book is that if human civilization were to destroy itself, all of our knowledge, all our creations
link |
00:00:57.360
will go with us. He proposes that we should think about how to save that knowledge in a way that
link |
00:01:03.120
long outlives us, whether that's on Earth, in orbit around Earth, or in deep space. And then
link |
00:01:10.320
to send messages that advertise this backup of human knowledge to other intelligent alien
link |
00:01:15.200
civilizations. The main message of this advertisement is not that we are here, but that we were once
link |
00:01:24.800
here. This little difference somehow was deeply humbling to me, that we may with some nonzero
link |
00:01:32.240
likelihood destroy ourselves, and that an alien civilization, thousands or millions of years
link |
00:01:37.600
from now may come across this knowledge store, and they would only with some low probability even
link |
00:01:43.760
notice it, not to mention be able to interpret it. And the deeper question here for me is what
link |
00:01:49.840
information in all of human knowledge is even essential? Does Wikipedia capture it or not at
link |
00:01:55.040
all? This thought experiment forces me to wonder what are the things we've accomplished and are
link |
00:02:00.800
hoping to still accomplish that will outlive us? Is it things like complex buildings, bridges,
link |
00:02:07.200
cars, rockets? Is it ideas like science, physics, and mathematics? Is it music and art? Is it
link |
00:02:15.120
computers, computational systems, or even artificial intelligence systems? I personally
link |
00:02:20.320
can't imagine that aliens wouldn't already have all of these things. In fact, much more and much
link |
00:02:26.960
better. To me, the only unique thing we may have is consciousness itself, and the actual
link |
00:02:33.120
subjective experience of suffering, of happiness, of hatred, of love. If we can record these experiences
link |
00:02:40.400
in the highest resolution directly from the human brain, such that aliens will be able to replay
link |
00:02:45.040
them, that is what we should store and send as a message. Not Wikipedia, but the extremes of
link |
00:02:52.080
conscious experiences, the most important of which, of course, is love. This is the Lex
link |
00:02:58.720
Friedman podcast, and here is my conversation with Jeff Hawkins. We previously talked over two
link |
00:03:06.400
years ago. Do you think there's still neurons in your brain that remember that conversation,
link |
00:03:12.160
that remember me and got excited? There's a Lex neuron in your brain that just finally has a purpose?
link |
00:03:18.240
I do remember our conversation, or I have some memories of it, and I formed additional memories
link |
00:03:23.520
of you in the meantime. I wouldn't say there's a neuron or neurons in my brain that know you,
link |
00:03:29.360
but there are synapses in my brain that have formed that reflect my knowledge of you and
link |
00:03:35.520
the model I have of you in the world. Whether the exact same synapses were formed two years ago,
link |
00:03:40.640
it's hard to say because these things come and go all the time. One thing to note about
link |
00:03:45.600
brains is that when you think of things, you often erase the memory and rewrite it again.
link |
00:03:49.040
So yes, but I have a memory of you, and that's instantiated in synapses. There's a simpler way
link |
00:03:54.880
to think about it, Lex. We have a model of the world in your head, and that model is continually
link |
00:04:01.200
being updated. I updated it this morning. You offered me this water, you said it was from the
link |
00:04:06.080
refrigerator. I remember these things, and so the model includes where we live, the places we know,
link |
00:04:12.400
the words, the objects in the world, but it's just a monstrous model, and it's constantly being
link |
00:04:16.560
updated, and people are just part of that model. So are animals, so are other physical objects,
link |
00:04:22.080
so are events we've done. So there's no special place in my mind for the memories of humans.
link |
00:04:29.760
I mean, obviously, I know a lot about my wife, and friends, and so on, but it's not like there's a
link |
00:04:39.440
special place where humans are over here. We model everything, and we model other people's
link |
00:04:43.920
behaviors, too. So if I said, there's a copy of your mind in my mind, it's just because I know
link |
00:04:48.480
how humans, I've learned how humans behave, and I learned some things about you, and that's part
link |
00:04:55.200
of my world model. Well, I just also mean the collective intelligence of the human species.
link |
00:05:02.320
I wonder if there's something fundamental to the brain that enables that, so modeling other
link |
00:05:09.440
humans with their ideas. You're actually jumping into a lot of big topics, like collective
link |
00:05:14.560
intelligence is a separate topic that a lot of people like to talk about, we can talk about that.
link |
00:05:19.360
But so that's interesting, like, you know, we're not just individuals, we live in society and so on.
link |
00:05:25.920
But from our research point of view, and so again, let's just talk, we study the neocortex,
link |
00:05:30.960
it's a sheet of neural tissue, it's about 75% of your brain, it runs on this very repetitive
link |
00:05:36.960
algorithm. It's a very repetitive circuit. And so you can apply that algorithm to lots of different
link |
00:05:44.000
problems, but it's all underneath, it's the same thing, we're just building this model.
link |
00:05:47.760
So from our point of view, we wouldn't look for these special circuits someplace buried in your
link |
00:05:52.320
brain that might be related to understanding other humans. It's more like, how do we build a
link |
00:05:58.240
model of anything? How do we understand anything in the world? And humans are just another part
link |
00:06:02.000
of the things we understand. So there's nothing, there's nothing in the brain that knows the
link |
00:06:08.720
emergent phenomenon of collective intelligence? Well, I certainly know about that. I've heard
link |
00:06:13.120
the terms I've read. No, but that's, right. Well, okay, right. As an idea. Well, I think we have
link |
00:06:17.760
language, which is sort of built into our brains, and that's a key part of collective intelligence.
link |
00:06:22.480
So there are some, you know, prior assumptions about the world we're going to live in, when we're
link |
00:06:27.920
born, we're not just a blank slate. And so, you know, did we evolve to take advantage of those
link |
00:06:34.960
situations? Yes. But again, we study only part of the brain, the neocortex. There's other parts
link |
00:06:38.960
of the brain that are very much involved in societal interactions and human emotions and how we interact
link |
00:06:46.160
and even societal issues about, you know, how we interact with other people when we support them,
link |
00:06:53.520
when we're greedy and things like that. I mean, certainly the brain is a great place
link |
00:07:00.240
to study intelligence. I wonder if it's the fundamental atom of intelligence? Well,
link |
00:07:06.960
I would say it's absolutely an essential component, even if you believe in collective
link |
00:07:12.000
intelligence as, hey, that's where it's all happening. That's what we need to study,
link |
00:07:16.320
which I don't believe that, by the way. I think it's really important, but I don't think that is
link |
00:07:19.360
the thing. But even if you do believe that, then you have to understand how the brain works in
link |
00:07:26.080
doing that. It's, you know, it's more like we are intelligent, we are intelligent individuals,
link |
00:07:31.360
and together our intelligence is magnified much more. We can do things that we couldn't
link |
00:07:35.360
do individually. But even as individuals, we're pretty damn smart. And we can model things and
link |
00:07:40.640
understand the world and interact with it. So to me, if you're going to start someplace,
link |
00:07:45.440
you need to start with the brain, then you could say, well, how do brains interact with each other?
link |
00:07:50.160
And what is the nature of language? And how do we share models? If I've learned something about
link |
00:07:55.200
the world? How do I share it with you? Which is really what, you know, sort of communal
link |
00:07:59.280
intelligence is. I know something, you know something. We've had different experiences in
link |
00:08:03.520
the world. I've learned something about brains. Maybe I can impart that to you. You've learned
link |
00:08:07.040
something about, you know, whatever, physics, and you can impart that to me. But it all comes down to
link |
00:08:13.440
even just the epistemological question of, well, what is knowledge and how do you represent it in
link |
00:08:17.840
the brain? Right? That's where it's going to reside, right? Or in our writings.
link |
00:08:23.200
It's obvious that human collaboration, human interaction is how we build societies. But
link |
00:08:29.920
some of the things you talk about and work on, some of those elements of what makes up an intelligent
link |
00:08:37.760
entity are there within a single person. Absolutely. I mean, we can't deny that the brain is the core
link |
00:08:44.240
element here in, at least I think it's obvious, the brain is the core element in all theories of
link |
00:08:50.000
intelligence. It's where knowledge is represented. It's where knowledge is created. We interact,
link |
00:08:56.000
we share, we build upon each other's work. But without a brain, you'd have nothing. You know,
link |
00:09:01.920
there would be no intelligence without brains. And so that's where we start. I got into this field
link |
00:09:08.080
because I just was curious as to who I am. You know, how do I think? What's going on in my head
link |
00:09:13.040
when I'm thinking? What does it mean to know something? I can ask what it means for me to
link |
00:09:17.840
know something independent of how I learned it from you or from someone else or from society.
link |
00:09:22.880
What does it mean for me to know that I have a model of you in my head? What does it mean to
link |
00:09:26.400
know I know what this microphone does and how it works physically, even when I can't see it right
link |
00:09:29.920
now? How do I know that? What does it mean? How do the neurons do that at the fundamental level
link |
00:09:35.760
of neurons and synapses and so on? Those are really fascinating questions. And I'm happy to
link |
00:09:42.560
just understand those if I could. So in your new book, you talk about our brain,
link |
00:09:52.080
our mind as being made up of many brains. So the book is called A Thousand Brains,
link |
00:09:57.920
the Thousand Brains Theory of Intelligence. What is the key idea of this book?
link |
00:10:02.800
The book has three sections and it has sort of maybe three big ideas. So the first section is
link |
00:10:09.360
all about what we've learned about the neocortex and that's the thousand brains theory. Just to
link |
00:10:13.760
complete the picture, the second section is all about AI and the third section is about the future
link |
00:10:17.040
of humanity. So the thousand brains theory, the big idea there, if I had to summarize into one
link |
00:10:26.720
big idea, is that we think of the brain, the neocortex, as learning this model of the world.
link |
00:10:33.440
But what we learned is actually there's tens of thousands of independent modeling systems going
link |
00:10:38.560
on. And so each of what we call columns in the cortex, and there are about 150,000 of them, is a complete
link |
00:10:44.080
modeling system. So it's a collective intelligence in your head in some sense. So the thousand brains
link |
00:10:50.080
theory says, when you ask where I have knowledge about, you know, this coffee cup, or where is the model
link |
00:10:54.400
of this cell phone? It's not in one place. It's in thousands of separate models that are complementary
link |
00:10:59.280
and they communicate with each other through voting. So this idea that we have, we feel like
link |
00:11:03.440
we're one person, you know, that's our experience, we can explain that. But in reality, there's lots of
link |
00:11:08.640
these, it's almost like little brains, but they're sophisticated modeling systems,
link |
00:11:13.680
about 150,000 of them in each human brain. And that's a totally different way of thinking about
link |
00:11:19.760
how the neocortex is structured than we or anyone else thought of even just five years ago.
link |
00:11:25.040
So you mentioned you started this journey just looking in the mirror and trying to understand
link |
00:11:30.960
who you are. So if you have many brains, who are you then?
link |
00:11:35.920
So it's interesting, we have a singular perception, right? You know, we think, oh, I'm just here,
link |
00:11:40.000
I'm looking at you. But it's, it's composed of all these things, like there's sounds and there's,
link |
00:11:43.840
and there's vision and there's touch and all kinds of inputs, yet we have the singular
link |
00:11:48.800
perception. And what the 1000 brain theory says, we have these models that are visual models,
link |
00:11:52.720
we have auditory models, we have tactile models and so on, but they vote.
link |
00:11:57.680
And so, in the cortex, you can think about these columns as about like little
link |
00:12:02.480
grains of rice, 150,000 stacked next to each other. And each one is its own little modeling system.
link |
00:12:09.600
But they have these long range connections that go between them. And we call those voting
link |
00:12:14.480
connections or voting neurons. And so the different columns try to reach a consensus,
link |
00:12:20.960
like, what am I looking at? Okay, you know, each one has some ambiguity, but they come to
link |
00:12:24.800
a consensus. Oh, it's a water bottle I'm looking at. We are only consciously able to
link |
00:12:30.640
perceive the voting. We're not able to perceive anything that goes under the hood. So the voting
link |
00:12:36.000
is what we're aware of. The results of the vote. Yeah, the voting. Well, it's, you can imagine
link |
00:12:41.760
it this way. We were just talking about eye movements a moment ago. So as I'm looking at
link |
00:12:45.280
something, my eyes are moving about three times a second. And with each movement,
link |
00:12:49.120
a completely new input is coming into the brain. It's not repetitive. It's not shifting around.
link |
00:12:53.360
It's completely new. I'm totally unaware of it. I can't perceive it. But yet if I looked at the
link |
00:12:58.320
neurons in your brain, they're going on and off, on and off, on and off. But the voting neurons
link |
00:13:02.080
are not. The voting neurons are saying, you know, we all agree, even though I'm looking at different
link |
00:13:05.760
parts of it, this is a water bottle right now. And that's not changing. And it's in some position
link |
00:13:10.720
and pose relative to me. So I have this perception of the water bottle about two feet away from me
link |
00:13:15.520
at a certain pose to me. That is not changing. That's the only part I'm aware of. I can't be
link |
00:13:20.480
aware of the fact that the inputs from the eyes are moving and changing and all this other stuff happening.
link |
00:13:25.040
So these long range connections are the part we can be conscious of. The individual activity in
link |
00:13:31.200
each column doesn't go anywhere else. It doesn't get shared anywhere else. It doesn't,
link |
00:13:36.560
there's no way to extract it and talk about it or extract it and even remember it to say, oh,
link |
00:13:42.480
yes, I can recall that. So, but these long range connections are the things that are accessible
link |
00:13:47.760
to language. And to our, you know, it's like the hippocampus or our memories, you know, our
link |
00:13:52.960
short term memory systems and so on. So we're not aware of 95% or maybe it's even 98% of what's
link |
00:13:59.840
going on in your brain. We're only aware of this sort of stable, somewhat stable voting outcome
link |
00:14:06.800
of all these things that are going on underneath the hood.
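A minimal sketch of that voting idea, assuming each column holds an ambiguous set of candidate objects for its own patch of input and the long-range connections simply keep whatever most columns agree on; the names and structure here are invented for illustration, not taken from Numenta's models.

```python
# Toy illustration of "voting": many columns each hold an ambiguous guess about
# the object being sensed, and long-range connections settle them on a consensus.
from collections import Counter

class Column:
    def __init__(self, candidates):
        # The column's partial, ambiguous belief: objects consistent with its input.
        self.candidates = set(candidates)

    def narrow(self, consensus):
        # Keep only candidates compatible with what the other columns report.
        if self.candidates & consensus:
            self.candidates &= consensus

def vote(columns, rounds=3):
    for _ in range(rounds):
        # Pool every column's candidates and keep the most widely supported ones.
        counts = Counter(obj for col in columns for obj in col.candidates)
        top = max(counts.values())
        consensus = {obj for obj, n in counts.items() if n == top}
        for col in columns:
            col.narrow(consensus)
    return consensus

# Three columns sensing different patches: each alone is ambiguous,
# together they agree on "water bottle".
cols = [Column({"water bottle", "coffee cup"}),
        Column({"water bottle", "soda can"}),
        Column({"water bottle", "coffee cup", "soda can"})]
print(vote(cols))  # {'water bottle'}
```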
link |
00:14:09.280
So what would you say is the basic element in the 1000 brains theory of intelligence,
link |
00:14:16.160
Like, what's the atom of intelligence when you think about it?
link |
00:14:21.360
Is it the individual brains and then what is a brain?
link |
00:14:24.480
Well, let's, can we just talk about what intelligence is first,
link |
00:14:28.400
and then we can talk about what the elements are. So in my book, intelligence is the ability
link |
00:14:33.760
to learn a model of the world, to build, internal to your head, a model that represents
link |
00:14:40.240
the structure of everything, you know, to know that this is a table and that's a coffee cup and
link |
00:14:44.800
this is a gooseneck lamp and all these things. To know these things, I have to have a model
link |
00:14:48.720
in my head. I don't just look at them and go, what is that? I already have internal representations
link |
00:14:53.520
of these things in my head and I had to learn them. I wasn't born with any of that knowledge.
link |
00:14:57.360
You know, we have some lights in the room here. You know, that's not part of my
link |
00:15:01.680
evolutionary heritage, right? It's not in my genes. So we have this incredible model and the model
link |
00:15:07.120
includes not only what things look like and feel like, but where they are relative to each other
link |
00:15:10.800
and how they behave. I've never picked up this water bottle before, but I know that if I took
link |
00:15:14.880
my hand on that blue thing and I turn it, it'll probably make a funny little sound as the little
link |
00:15:18.320
plastic things detach and then it'll rotate and it'll rotate a certain way and it'll come off.
link |
00:15:22.480
How do I know that? Because I have this model in my head. So the essence of intelligence
link |
00:15:27.120
is our ability to learn a model and the more sophisticated our model is, the smarter we are.
link |
00:15:33.040
Not that there is a single intelligence, because you can know a lot about things
link |
00:15:36.800
that I don't know and I know about things you don't know and we can both be very smart,
link |
00:15:40.400
but we both learned a model of the world through interacting with it. So that is the
link |
00:15:44.240
essence of intelligence. Then we can ask ourselves, what are the mechanisms in the brain that allow
link |
00:15:48.960
us to do that? And what are the mechanisms of learning, not just the neural mechanisms,
link |
00:15:52.320
what is the general process by which we learn a model? So that was a big insight for us.
link |
00:15:56.240
It's like, what are the actual things, how do you learn this stuff?
link |
00:16:00.800
It turns out you have to learn it through movement. You can't learn it just by,
link |
00:16:04.480
that's how we learn, we learn through movement. So you build up this model by observing
link |
00:16:09.120
things and touching them and moving them and walking around the world and so on.
link |
00:16:12.080
So either you move or the thing moves. Somehow. Yeah, obviously you can learn
link |
00:16:16.720
things just by reading a book, something like that, but think about if I were to say, oh,
link |
00:16:20.160
here's a new house, I want you to learn it, you know, what do you do? You have to walk,
link |
00:16:23.920
you have to walk from room to room. You have to open the doors, look around,
link |
00:16:27.760
see what's on the left, what's on the right. As you do this, you're building a model in your head.
link |
00:16:32.000
It's just, that's what you're doing. You can't just sit there and say, I'm going to grok the
link |
00:16:35.600
house. No, you know, or you don't even want to just sit down and read some description of it,
link |
00:16:39.760
right? You literally physically interact with it. The same with like a smartphone. If I'm
link |
00:16:43.440
going to learn a new app, I touch it and I move things around. I see what happens when I,
link |
00:16:47.360
when I do things with it. So that's the basic way we learn in the world.
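A loose sketch of learning through movement, under the assumption that the model is nothing more than an association of locations with the features sensed there as the learner moves; the grid, the movements, and the toy "house" are all made up for illustration.

```python
# Loose sketch of learning a model through movement: the learner only gets a
# feature at its current location, so it has to move around to build the model.

def learn_by_moving(explore, movements):
    """explore(location) -> feature sensed at that location."""
    model = {}
    location = (0, 0)
    model[location] = explore(location)
    for move in movements:                      # each movement reveals new input
        location = (location[0] + move[0], location[1] + move[1])
        model[location] = explore(location)     # associate feature with location
    return model

# A toy "house": features laid out on a grid the learner has never seen whole.
house = {(0, 0): "front door", (1, 0): "hallway", (1, 1): "kitchen"}
model = learn_by_moving(lambda loc: house.get(loc, "wall"), [(1, 0), (0, 1)])
print(model)   # {(0, 0): 'front door', (1, 0): 'hallway', (1, 1): 'kitchen'}
```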
link |
00:16:50.640
And by the way, when you say model, you mean something that can be used for prediction
link |
00:16:55.360
in the future. It's, it's used for prediction and for behavior and planning.
link |
00:17:00.960
Right. And does a pretty good job at doing so.
link |
00:17:04.640
Yeah. Here's the way to think about the model. A lot of people get hung up on this. So
link |
00:17:10.160
you can imagine an architect making a model of a house, right? So there's a physical model
link |
00:17:15.120
that's small. And why do they do that? Well, we do that because you can imagine what it would
link |
00:17:19.120
look like from different angles. Okay, look from here, look from there. And you can also say,
link |
00:17:22.640
well, how far is it to get from the garage to the swimming pool or something like that,
link |
00:17:27.440
right? You can imagine looking at this. And so what would be the view from this location?
link |
00:17:30.480
So we build these physical models to let you imagine the future and imagine behaviors.
link |
00:17:35.920
Now we can take that same model and put it in a computer. So now, today, they build
link |
00:17:41.040
models of houses in a computer, and they do that using a set of,
link |
00:17:44.720
we'll come back to this term in a moment, reference frames, but eventually you assign a
link |
00:17:49.840
reference frame to the house and you assign different things for the house in different
link |
00:17:52.560
locations. And then the computer can generate an image and say, okay, this is what it looks
link |
00:17:56.720
like in this direction. The brain is doing something remarkably similar to this.
link |
00:18:00.640
Surprising. It's using reference frames. It's building these, it's similar to a model in a
link |
00:18:05.280
computer, which has the same benefits of building a physical model. It allows me to say, what would
link |
00:18:09.280
this thing look like if it was in this orientation? What would likely happen if I pushed this button?
link |
00:18:14.080
I've never pushed this button before. Or how would I accomplish something? I want to convey
link |
00:18:20.720
a new idea I've learned. How would I do that? I can imagine in my head, well, I could talk about it.
link |
00:18:25.360
I could write a book. I could do some podcasts. I could, you know, maybe tell my neighbor,
link |
00:18:32.000
you know, and I can imagine the outcomes of all these things before I do any of them.
link |
00:18:36.080
That's what the model lets you do. It lets you plan the future and imagine the consequences of actions.
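A tiny sketch of that planning-by-imagination idea, reducing the model to a table of imagined outcomes for the candidate actions Jeff lists; the numeric scores are invented for illustration.

```python
# Sketch of using a model to imagine outcomes before acting: evaluate each
# candidate action in the model (not in the world) and pick the best one.

imagined_outcomes = {            # action -> predicted payoff (made-up numbers)
    "write a book":     0.9,
    "do a podcast":     0.7,
    "tell my neighbor": 0.2,
}

def plan(outcomes):
    # Compare the imagined consequences without doing any of them,
    # then return the action predicted to work best.
    return max(outcomes, key=outcomes.get)

print(plan(imagined_outcomes))   # 'write a book'
```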
link |
00:18:41.840
Prediction, you asked about prediction. Prediction is not the goal of the model. Prediction is an
link |
00:18:47.600
inherent property of it. And it's how the model corrects itself.
link |
00:18:52.320
So prediction is fundamental to intelligence. It's fundamental to building a model and the
link |
00:18:57.760
model's intelligent. And let me go back and be very precise about this. Prediction,
link |
00:19:02.000
you can think of prediction two ways. One is like, hey, what would happen if I did this?
link |
00:19:05.200
That's one type of prediction. That's a key part of intelligence. But there's also prediction like,
link |
00:19:09.600
oh, what's this water bottle going to feel like when I pick it up? And that doesn't seem very
link |
00:19:15.200
intelligent. But the way to think, one way to think about prediction is it's a way for us to learn
link |
00:19:21.520
where our model is wrong. So if I picked up this water bottle and it felt hot, I'd be very surprised.
link |
00:19:27.520
Or if I picked it up and it was very light, I'd be surprised. Or if I turned this top
link |
00:19:32.400
and it didn't open, if I had to turn it the other way, I'd be surprised. And so for all of those, I have
link |
00:19:37.040
a prediction like, okay, I'm going to do it. I'm going to drink some water. Okay, I do this.
link |
00:19:40.480
There it is. I feel it opening, right? What if I had to turn it the other way? Or what if it's
link |
00:19:44.000
split in two? Then I say, oh my gosh, I misunderstood this. I didn't have the right model. This thing,
link |
00:19:48.880
my attention would be drawn to, I'll be looking at it going, well, how did that happen? Why did it
link |
00:19:52.720
open up that way? And I would update my model by doing it. Just by looking at it and playing around
link |
00:19:57.520
with it, I'd update it and say, this is a new type of water bottle. So you're talking about sort of
link |
00:20:02.320
complicated things like a water bottle. But this also applies for just basic vision, just like
link |
00:20:08.560
seeing things. That's almost like a precondition of just perceiving the world: predicting.
link |
00:20:15.840
Everything that you see is first passed through your prediction.
link |
00:20:20.960
Everything you see and feel, in fact, this is the insight I had back in the late 80s,
link |
00:20:26.480
and excuse me, early 80s. And other people have reached the same idea, is that every sensory input
link |
00:20:33.440
you get, not just vision, but touch and hearing, you have an expectation about it and a prediction.
link |
00:20:41.360
Sometimes you can predict very accurately. Sometimes you can't. I can't predict what next
link |
00:20:45.120
word is going to come out of your mouth. But as you start talking, I'll make better and better
link |
00:20:48.400
predictions. And if you talked about some topics, I'd be very surprised. So I have this sort of
link |
00:20:53.600
background prediction that's going on all the time for all of my senses. Again, the way I think
link |
00:20:59.600
about that is this is how we learn. It's more about how we learn. It's a test of our understanding,
link |
00:21:06.560
our predictions are a test. Is this really a water bottle? If it is, I shouldn't see
link |
00:21:11.840
a little finger sticking out of the side. And if I saw a little finger sticking out, I'd be like,
link |
00:21:15.200
what the hell's going on? That's not normal.
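A minimal sketch of prediction as a test of the model: compare what the model expects against what the senses actually report, treat any mismatch as a surprise, and update the model. The stored features are invented for illustration.

```python
# Prediction as error detection: expectations are checked against the input,
# and a failed prediction draws attention and corrects the model.

model = {"water bottle": {"cap turns": "counter-clockwise", "weight": "full"}}

def interact(obj, observations):
    expected = model[obj]
    for feature, actual in observations.items():
        predicted = expected.get(feature)
        if predicted != actual:
            # Surprise: the prediction failed, so the model gets updated here.
            print(f"surprise: expected {feature}={predicted!r}, got {actual!r}")
            expected[feature] = actual

interact("water bottle", {"cap turns": "clockwise", "weight": "full"})
# -> surprise: expected cap turns='counter-clockwise', got 'clockwise'
print(model)   # the model now reflects the new kind of water bottle
```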
link |
00:21:17.680
I mean, that's fascinating. Let me linger on this for a second. It really honestly feels
link |
00:21:26.800
that prediction is fundamental to everything, to the way our mind operates, to intelligence.
link |
00:21:33.280
So it's just a different way to see intelligence, which is like everything starts with a prediction.
link |
00:21:39.840
And prediction requires a model. You can't predict something unless you have a model of it.
link |
00:21:45.040
Right. But the action is prediction. So the thing the model does is prediction.
link |
00:21:51.200
But you can then extend it to things like, what would happen if I did this today? If I went and
link |
00:21:59.040
did this, what would be likely? Or you can extend prediction to like, oh, I want to get a promotion
link |
00:22:04.320
at work. What action should I take? And you can say, if I did this, I could predict what might
link |
00:22:09.040
happen. If I spoke to someone, I predict what would happen. So it's not just low level predictions.
link |
00:22:13.360
Yeah, it's all predictions. It's all predictions. It's like this black box, so you can ask basically
link |
00:22:17.440
any question, low level or high level. So we started off with that observation. It's all,
link |
00:22:21.200
it's this nonstop prediction. And I write about this in the book. And then we asked,
link |
00:22:26.480
how do neurons actually make predictions? And physically, like, what does the neuron do when
link |
00:22:30.240
it makes a prediction? Or what the neural tissue does when it makes a prediction. And then we asked,
link |
00:22:35.360
what are the mechanisms by how we build a model that allows you to make prediction? So we started
link |
00:22:39.520
with prediction as sort of the fundamental research agenda, in some sense, and said, well, if
link |
00:22:46.720
we understand how the brain makes predictions, we will understand how it builds these models and how
link |
00:22:50.800
it learns. And that's the core of intelligence. So it was like, it was the key that got us in the door
link |
00:22:55.680
to say, that is our research agenda. Understand predictions. So in this whole process,
link |
00:23:00.480
this, where does intelligence originate, would you say? So if we look at things that are much
link |
00:23:10.880
less intelligent than humans, and you build up to a human through the process of evolution,
link |
00:23:16.080
where's this magic thing that has a prediction model or a model that's able to predict that
link |
00:23:23.680
starts to look a lot more like intelligence? Is there a place where Richard Dawkins wrote an
link |
00:23:29.360
introduction to your book, an excellent introduction. I mean, it puts a lot of things
link |
00:23:34.880
into context. And it's funny, just looking at parallels between your book and Darwin's Origin of
link |
00:23:40.320
Species. So Darwin wrote about the origin of species. So what is the origin of intelligence?
link |
00:23:48.640
Yeah, well, we have a theory about it. And it's just that, a theory. The theory goes as follows.
link |
00:23:54.160
As soon as living things started to move, they're not just floating in the sea, they're not just
link |
00:23:59.200
a plant, you know, grounded someplace. As soon as they started to move, there was an advantage to
link |
00:24:04.960
moving intelligently, to moving in certain ways. And there's some very simple things you can do,
link |
00:24:10.000
you know, bacteria or single cell organisms can move towards a source of gradient of food or
link |
00:24:16.000
something like that. But an animal that might know where it is and know where it's been and how to
link |
00:24:20.720
get back to that place, or an animal that might say, oh, there was a source of food someplace,
link |
00:24:25.120
how do I get to it? Or there was a danger, how do I get to it? There was a mate, how do I get to them?
link |
00:24:31.280
There was a big evolutionary advantage to that. So early on, there was a pressure to start
link |
00:24:35.440
understanding your environment, like, where am I? And where have I been? And what happened in those
link |
00:24:41.360
different places? So we still have this neural mechanism in our brains. It's in the mammals,
link |
00:24:50.160
it's in the hippocampus and entorhinal cortex, these are older parts of the brain. And these are very
link |
00:24:56.400
well studied. We build a map of our environment. So these neurons in these parts of the brain know
link |
00:25:03.120
where I am in this room and where the door was and things like that. So a lot of other mammals
link |
00:25:08.880
have this kind of mechanism? All mammals have this, right? And almost any animal that knows where it
link |
00:25:14.240
is and can get around must have some mapping system, must have some way of saying, I've learned a map
link |
00:25:20.000
of my environment. I have hummingbirds in my backyard. And they go to the same places all the
link |
00:25:24.800
time. They must know where they are. They just know where they are. They're not just randomly
link |
00:25:29.360
flying around. They know particular flowers they come back to. So we all have this. And it turns
link |
00:25:35.600
out it's very tricky to get neurons to do this, to build a map of an environment. And so we now
link |
00:25:41.920
know there are these famous studies, still very active, about place cells and grid cells and
link |
00:25:47.200
these other types of cells in the older parts of the brain and how they build these maps of the
link |
00:25:51.440
world. It's really clever. It's obviously been under a lot of evolutionary pressure over a long
link |
00:25:55.600
period of time to get good at this. So animals know where they are. What we think has happened,
link |
00:26:01.920
and there's a lot of evidence that suggests this, is that the mechanism we use to learn a map of
link |
00:26:06.080
a space was repackaged. The same type of neurons was repackaged into a more compact form.
link |
00:26:17.840
And that became the cortical column. And it was in some sense,
link |
00:26:22.080
genericized, if that's a word. It was turned from a very specific thing about learning maps
link |
00:26:26.800
of environments to learning maps of anything, learning a model of anything, not just your
link |
00:26:32.000
space, but coffee cups and so on. And it got sort of repackaged into a more compact version,
link |
00:26:39.200
a more universal version, and then replicated. So the reason we're so flexible is we have a very
link |
00:26:45.760
generic version of this mapping algorithm, and we have 150,000 copies of it. Sounds a lot like
link |
00:26:52.160
the progress of deep learning. How so? To take neural networks that seem to work well for a
link |
00:26:59.600
specific task, compress them, and multiply them by a lot. And then you just stack them on top of each other.
link |
00:27:09.120
It's like the story of transformers in natural language processing.
link |
00:27:13.680
Deep learning networks, they end up, you're replicating an element, but you still need
link |
00:27:18.080
the entire network to do anything. Here, what's going on, each individual element is a complete
link |
00:27:24.560
learning system. This is why I can take a human brain, cut it in half, and it still works. It's
link |
00:27:30.880
pretty amazing. It's fundamentally distributed. It's fundamentally distributed, complete modeling
link |
00:27:34.960
systems. But that's the story we like to tell. I would guess it's likely largely right,
link |
00:27:43.760
and there's a lot of evidence supporting that story, this evolutionary story.
link |
00:27:47.920
The thing which brought me to this idea is that the human brain got big very quickly,
link |
00:27:55.920
so that led to the proposal a long time ago that, well, there's this common element, and
link |
00:28:01.680
instead of creating new things, it just replicated something. We also are extremely flexible. We
link |
00:28:06.720
can learn things that we had no history about. So that tells us that the learning algorithm is
link |
00:28:14.000
very generic. It's very universal because it doesn't assume any prior knowledge about what it's
link |
00:28:19.760
learning. So you combine those things together and you say, okay, well, how did that come about?
link |
00:28:26.000
Where did that universal algorithm come from? It had to come from something that wasn't universal.
link |
00:28:29.600
It came from something that was more specific. Anyway, this led to our hypothesis that you
link |
00:28:34.160
would find grid cell and place cell equivalents in the neocortex. When we first published our
link |
00:28:39.840
first papers on this theory, we didn't know of evidence for that. It turns out there was some,
link |
00:28:44.240
but we didn't know about it. And since then, so then we became aware of evidence for grid cells
link |
00:28:49.200
in certain parts of the neocortex. And then now there's been new evidence coming out. There's
link |
00:28:53.040
some interesting papers that came out just January of this year. So one of our predictions was if
link |
00:28:58.800
this evolutionary hypothesis is correct, we would see grid cell and place cell equivalents, cells that
link |
00:29:03.760
work like them through every column in the neocortex, and that's starting to be seen.
link |
00:29:07.440
And what does it mean, why is it important, that they're present?
link |
00:29:11.760
Because it tells us, well, we're asking about the evolutionary origin of intelligence, right?
link |
00:29:16.000
So our theory is that these columns in the Cortex are working on the same principles,
link |
00:29:22.640
they're modeling systems. And it's hard to imagine how neurons do this. And so we said, hey,
link |
00:29:28.640
it's really hard to imagine how neurons could learn these models of things. We can talk about
link |
00:29:32.000
the details of that if you want. But there's this other part of the brain that we know
link |
00:29:37.920
learns models of environments. So could that mechanism that learns to model this room be
link |
00:29:43.280
used to learn to model the water bottle? Is it the same mechanism? So we said it's much more
link |
00:29:48.320
likely the brain's using the same mechanism, in which case it would have these equivalent cell types.
link |
00:29:54.160
So it's basically the whole theory is built on the idea that these columns have reference frames
link |
00:29:59.360
and they're learning these models and these grid cells create these reference frames. So it's
link |
00:30:04.000
basically the major, in some sense, the major predictive part of this theory is that we will
link |
00:30:10.080
find these equivalent mechanisms in each column in the neocortex, which tells us that that's
link |
00:30:15.760
what they're doing. They're learning these sensory-motor models of the world. So
link |
00:30:21.120
we were pretty confident that would happen. But now we're seeing the evidence.
link |
00:30:23.840
So in the evolutionary process, nature does a lot of copy and paste and sees what happens.
link |
00:30:28.560
Yeah. Yeah, there's no direction to it. But it just found out like, hey, if I took
link |
00:30:34.320
these elements and made more of them, what happens? And let's hook them up to the eyes and
link |
00:30:38.400
let's hook them to ears. And that seems to work pretty well for us. Again, just to take a quick
link |
00:30:44.880
step back to our conversation of collective intelligence. Do you sometimes see that as just
link |
00:30:51.120
another copy and paste aspect, copying and pasting these brains in humans and making a lot of them
link |
00:31:00.400
and then creating social structures that then almost operate as a single brain?
link |
00:31:06.240
I wouldn't have said it, but as you said it, it sounded pretty good.
link |
00:31:10.160
So to you, the brain is the fundamental thing?
link |
00:31:15.040
Yeah. I mean, our goal is to understand how the neocortex works. We can argue how essential that
link |
00:31:20.880
is to understanding the human brain because it's not the entire human brain. You can argue how
link |
00:31:25.440
essential that is to understanding human intelligence. You can argue how essential this
link |
00:31:29.920
to, you know, sort of communal intelligence. But our goal was to understand
link |
00:31:38.080
the neocortex. Yeah. So what is the neocortex and where does it fit in the various aspects of
link |
00:31:43.680
what the brain does? Like, how important is it to you? Well, obviously, again, we, I mentioned
link |
00:31:48.800
again, in the beginning, it's, it's about 70 to 75% of the volume of a human brain. So it, you know,
link |
00:31:55.360
it dominates our brain in terms of size, not in terms of number of neurons, but in terms of size.
link |
00:32:00.640
Size isn't everything, Jeff. I know. But it's, it's not nothing.
link |
00:32:06.560
We know that all high level vision, hearing and touch happens in neocortex. We know that
link |
00:32:11.760
all language occurs and is understood in the neocortex, whether that's spoken language, written
link |
00:32:16.480
language, sign language, the language of mathematics, the language of physics, music, math,
link |
00:32:21.040
you know, we know that all high level planning and thinking occurs in the neocortex. If I were to
link |
00:32:25.920
say, you know, what part of your brain designed a computer and understands programming and, and
link |
00:32:30.480
creates music, it's all the neocortex. So that's a kind of undeniable fact.
link |
00:32:36.240
But then there are other parts of our brain that are important too, right? Our emotional states,
link |
00:32:41.360
regulating our body. So the way I like to look at it is, you know, can you
link |
00:32:49.760
understand the neocortex without the rest of the brain? And some people say you can't,
link |
00:32:53.120
and I think absolutely you can. It's not that they're not interacting, but you can understand it.
link |
00:32:58.400
Can you understand the neocortex without understanding the emotions of fear? Yes,
link |
00:33:01.840
you can. You can understand how the system works. It's just a modeling system. I make the,
link |
00:33:05.920
the analogy in the book that it's, it's like a map of the world and how that map is used
link |
00:33:11.040
depends on who's using it. So the map of our world in our neocortex, how we
link |
00:33:17.040
manifest as a human depends on the rest of our brain. What are our motivations? You know,
link |
00:33:21.360
what are my desires? Am I a nice guy or not a nice guy? Am I a cheater or am I, you know,
link |
00:33:25.680
not a cheater? You know, how important different things are in my life. So, so, but the neocortex
link |
00:33:35.200
can be understood on its own. And, and I say that as a neuroscientist, I know there's all these
link |
00:33:40.800
interactions, and I don't want to say we don't know them or don't think about them. But from a layperson's
link |
00:33:45.840
point of view, you can say it's a modeling system. I don't generally think too much about the communal
link |
00:33:51.520
aspect of intelligence, which you brought up a number of times already. So that's not really
link |
00:33:55.840
been my concern. I just wonder if there's a continuum from the origin of the universe, like
link |
00:34:00.560
these pockets of complexity that form living organisms. I wonder if we're just, if you look
link |
00:34:10.400
at humans, we feel like we're at the top. And I wonder if it's just that everybody,
link |
00:34:16.560
every living pocket of complexity, probably thinks they're, pardon the French,
link |
00:34:24.000
they're the shit. They're at the top of the pyramid. Well, if they're thinking.
link |
00:34:27.760
Well, and then what is thinking? Well, that in a sense, the whole point is in their sense of the
link |
00:34:35.680
world, they, their sense is that they're at the top of it. I think what is the turtle?
link |
00:34:41.840
But you're, you're, you're bringing up, you know, the problems of complexity and complexity theory
link |
00:34:46.800
are, you know, a huge, interesting problem in science. And, you know, I think we've made
link |
00:34:53.600
surprisingly little progress in understanding complex systems in general. And so, you know,
link |
00:35:00.080
the Santa Fe Institute was founded to, to study this and even the scientists there will say it's
link |
00:35:04.240
really hard. We haven't really been able to figure out exactly, you know, that science
link |
00:35:09.040
hasn't really congealed yet. We're still trying to figure out the basic elements of that science.
link |
00:35:13.760
What, you know, where does complexity come from and what is it and how do you define it,
link |
00:35:17.840
whether it's DNA, creating bodies or phenotypes or individuals creating societies or ants and,
link |
00:35:24.400
you know, markets and so on. It's, it's a very complex thing. I'm not a complexity theorist
link |
00:35:29.680
person, right? I, I think you need to ask, well, the brain itself is a complex system. So,
link |
00:35:36.000
can we understand that? I think we've made a lot of progress understanding how the brain works.
link |
00:35:40.560
So, but I haven't brought it out to like, oh, well, where are we on the complexity spectrum?
link |
00:35:45.920
You know, it's like, it's a great question. I prefer for that answer to be, we're not special.
link |
00:35:54.560
It seems like if we're honest, most likely we're not special. So, if there is a spectrum,
link |
00:36:00.240
we're probably not in some kind of significant place. I think there's one thing we could say
link |
00:36:04.000
that we are special. And again, only here on earth, I'm not saying it in a bad way, is that if we
link |
00:36:09.760
think about knowledge, what we know, clearly human brains are the only brains that have certain
link |
00:36:20.240
types of knowledge. We're the only brains on this earth to understand what the earth is,
link |
00:36:24.720
how old it is, what the universe is as a whole. We're the only organisms that understand DNA and
link |
00:36:30.400
the origins of, you know, of species. No other species on this planet has that knowledge.
link |
00:36:37.200
So, if we think about, I like to think about, you know, one of the endeavors of humanity is to
link |
00:36:43.440
understand the universe as much as we can. I think our species is further along in that,
link |
00:36:49.920
undeniably, whether our theories are right or wrong, we can debate, but at least we have theories.
link |
00:36:54.800
You know, we know what the sun is and how fusion works and what black holes are and,
link |
00:37:00.480
you know, we know the general theory of relativity, and no other animal has any of this knowledge.
link |
00:37:05.120
So, in that sense, we're special. Are we special in terms of the hierarchy of complexity in,
link |
00:37:11.840
in the universe? Probably not.
link |
00:37:16.560
Can we look at a neuron? Yeah, you say that prediction happens in the neuron. What does
link |
00:37:22.800
that mean? So, the neuron traditionally is seen as the basic element of the brain.
link |
00:37:27.440
So, we, I mentioned this earlier, that prediction was our research agenda.
link |
00:37:31.760
Yeah. We said, okay, how does the brain make a prediction? Like, I'm about to grab this water
link |
00:37:37.840
bottle and my brain is predicting what I'm going to feel on all parts of my fingers. If I
link |
00:37:42.720
felt something really odd on any part here, I'd notice it. So, my brain is predicting what it's
link |
00:37:46.560
going to feel as I grab this thing. So, what is that? How does that manifest itself in neural
link |
00:37:51.360
tissue, right? We got brains made of neurons and there's chemicals and there's neurons and there's
link |
00:37:57.360
spikes and the connections, you know, where is the prediction going on? And one argument could
link |
00:38:03.360
be that, well, when I'm predicting something, a neuron must be firing in advance. It's like, okay,
link |
00:38:09.360
this neuron represents what you're going to feel and it's firing. It's sending a spike. And certainly,
link |
00:38:14.080
that happens to some extent. But our predictions are so ubiquitous that we're making so many of them,
link |
00:38:19.840
which we're totally unaware of. For the vast majority of them, you have no idea that you're doing
link |
00:38:23.280
this. So we were trying to figure out how this could be. Where are these,
link |
00:38:29.520
where are these happening, right? And I won't walk you through the whole story unless you insist on
link |
00:38:34.880
it, but we came to the realization that most of your predictions are occurring inside individual
link |
00:38:42.480
neurons, especially these, the most common neuron, the pyramidal cells. And there are, there's a
link |
00:38:47.600
property of neurons. I mean, everyone knows, or most people know that a neuron is a cell and it
link |
00:38:52.400
has this spike called an action potential and it sends information. But we now know that there are
link |
00:38:57.760
these spikes internal to the neuron. They're called dendritic spikes. They travel along the
link |
00:39:02.480
branches of the neuron and they don't leave the neuron. They're just internal only. There are far
link |
00:39:07.200
more dendritic spikes than there are action potentials, far more. They're happening all the
link |
00:39:11.920
time. And what we came to understand is that those dendritic spikes, the ones that are occurring,
link |
00:39:17.680
are actually a form of prediction. They're telling the neuron, the neuron is saying, I expect that
link |
00:39:23.280
I might become active shortly. And that internal, so the internal spike is a way of saying,
link |
00:39:29.440
you're going to, you might be generating external spikes soon. I predict you're going to become
link |
00:39:33.760
active. And we wrote a paper in 2016, which explained how this manifests itself
link |
00:39:40.800
in neural tissue and how it is that this all works together. And we think
link |
00:39:47.040
there's a lot of evidence supporting it. So that's where we think that most of these predictions
link |
00:39:51.600
are, internal to the neuron. That's why you can't perceive them.
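A highly simplified sketch, loosely in the spirit of the 2016 paper mentioned above but not its actual algorithm: a distal dendritic segment that matches enough of the current context puts the cell into a predictive state without making it fire, and predicted cells then win out when feedforward input arrives. The threshold and data structures are invented.

```python
# Highly simplified sketch of dendritic spikes as predictions (not Numenta's code).
# A distal segment that sees enough of the current context depolarizes the cell,
# a "predictive state", without making it fire; when feedforward input arrives,
# predicted cells respond first and suppress the rest.

THRESHOLD = 2   # active context cells a segment needs to match (made-up value)

class Cell:
    def __init__(self, segments):
        # Each distal segment is a set of other cells this cell listens to.
        self.segments = segments
        self.predictive = False

    def compute_prediction(self, active_cells):
        # "Dendritic spike": some segment matches the current activity well enough.
        self.predictive = any(len(seg & active_cells) >= THRESHOLD
                              for seg in self.segments)

def feedforward(cells, active_context):
    for c in cells:
        c.compute_prediction(active_context)
    # Predicted cells fire a little sooner and inhibit the others;
    # with no prediction, every cell fires (a "surprise" burst).
    predicted = [c for c in cells if c.predictive]
    return predicted if predicted else list(cells)

# Cell 0 learned the context {"A", "B"}, so only it responds to this input.
cells = [Cell([{"A", "B"}]), Cell([{"X", "Y"}])]
print(len(feedforward(cells, {"A", "B", "C"})))   # 1
```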
link |
00:39:56.960
From understanding the, the prediction mechanism of a single neuron, do you think there's deep
link |
00:40:01.680
insights to be gained about the prediction capabilities of the mini brains within the
link |
00:40:06.640
bigger brain and the brain? Oh yeah. Yeah. Yeah. So having a prediction
link |
00:40:09.600
inside an individual neuron is not that useful by itself. You know, so what?
link |
00:40:13.200
The way it manifests itself in neural tissue is that when a neuron emits these spikes, they
link |
00:40:22.240
are a very singular type of event. If a neuron is predicting that it's going to be active,
link |
00:40:26.720
it emits its spike a little bit sooner, just a few milliseconds sooner than it would
link |
00:40:31.280
have otherwise. It's like, I give the analogy in the book, it's like a sprinter on a starting
link |
00:40:35.200
block in a race. And if someone says, get ready, set, you get up and you're ready to go. And then
link |
00:40:41.920
when the race starts, you get a little bit earlier start. So that ready, set is like
link |
00:40:45.920
the prediction, and the neuron is like, ready to go quicker. And what happens is when you have a whole
link |
00:40:50.400
bunch of neurons together and they're all getting these inputs, the ones that are in the predictive
link |
00:40:55.120
state, the ones that are anticipating to become active, if they do become active, they, they
link |
00:40:59.520
happen sooner, they disable everything else and it leads to different representations in the brain.
link |
00:41:03.520
So it's not isolated just to the neuron, the prediction occurs within the neuron,
link |
00:41:09.600
but the network behavior changes. So what happens under different predictions, different inputs
link |
00:41:14.880
have different representations. So how I, what I predict is going to be different under different
link |
00:41:20.800
contexts, you know, what my input will be is different under different contexts. So this is,
link |
00:41:24.960
this is a key to the whole theory, how this works. So the theory of the 1000 brains, if you were to
link |
00:41:31.200
count the number of brains, how would you do it? The 1000 brains theory says that basically every
link |
00:41:36.880
cortical column in the, in your neocortex is a complete modeling system. And that when I ask
link |
00:41:43.600
where do I have a model of something like a coffee cup, it's not in one of those models,
link |
00:41:47.360
it's in thousands of those models. There's thousands of models of coffee cups. That's what
link |
00:41:51.200
the 1000 brains are. Then there's a voting mechanism, which
link |
00:41:54.960
is the thing you're conscious of, which leads to your
link |
00:41:58.240
singular perception. That's why you perceive something. So that's the 1000 brains theory.
link |
00:42:04.880
The details of how we got to that theory are complicated. It wasn't like we just thought of it
link |
00:42:11.680
one day. And one of those details was that we had to ask, how does a model make predictions? And we
link |
00:42:16.320
talked about just these predictive neurons. That's part of this theory. That's like saying, oh,
link |
00:42:20.800
it's a detail, but it was like a crack in the door. It's like, how are we going to figure out how
link |
00:42:24.080
these neurons build, do this? You know, what is going on here? So we just looked at prediction
link |
00:42:28.400
as like, well, we know that's ubiquitous. We know that every part of the cortex is making
link |
00:42:32.480
predictions. Therefore, whatever the predictive system is, it's going to be everywhere. We know
link |
00:42:37.280
there's a gazillion predictions happening at once. So from this, we can start teasing apart,
link |
00:42:41.760
you know, ask questions about, you know, how can neurons be making these predictions? And that
link |
00:42:46.560
sort of built up to what we now have, this 1000 brains theory, which is complex, you know, which
link |
00:42:50.720
is, I can state it simply, but we just didn't think of it. We had to get there step by step,
link |
00:42:55.920
it took years to get there. And where do reference frames fit in? So yeah.
link |
00:43:04.560
Okay. So again, a reference frame, I mentioned earlier about the, you know, model of a house.
link |
00:43:10.800
And I said, if you're going to build a model of a house in a computer, they have a reference
link |
00:43:14.160
frame. And you can think of a reference frame like Cartesian coordinates, like X, Y and Z axes.
link |
00:43:19.200
So I can say, oh, I'm going to design a house. I can say, well, the front door is at this location,
link |
00:43:24.000
XYZ and the roof is at this location, XYZ and so on. That's the type of reference frame.
link |
00:43:29.440
So it turns out for you to make a prediction and then I walk you through the thought experiment
link |
00:43:33.440
in the book where I was predicting what my finger was going to feel when I touched a coffee cup,
link |
00:43:37.920
it was a ceramic coffee cup, but this one will do. And what I realized is that to make a prediction
link |
00:43:45.200
of what my finger is going to feel, it's going to feel different than this, it'll feel different
link |
00:43:48.480
if I touch the hole or the thing on the bottom, so to make that prediction, the cortex needs to know
link |
00:43:53.520
where the finger is, the tip of the finger relative to the coffee cup and exactly relative to the
link |
00:43:59.520
coffee cup. And to do that, I have to have a reference frame for the coffee cup. There has
link |
00:44:03.280
to be a way of representing the location of my finger relative to the coffee cup. And then we realized,
link |
00:44:08.080
of course, every part of your skin has to have a reference frame relative to the things it touches.
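A small sketch of why the reference frame matters for that prediction: the expected sensation depends on where the fingertip is relative to the cup, so the fingertip's location has to be transformed into the cup's reference frame before the learned model can be consulted. The coordinates and features are made up, and a real version would also handle rotation.

```python
# Predicting touch requires knowing where the fingertip is in the object's
# own reference frame, not just where it is in space.

# Learned model of the cup: features stored at locations in the cup's frame.
cup_model = {(0, 0, 10): "rim", (0, 5, 5): "handle", (0, 0, 0): "base"}

def predict_touch(finger_pos_body, cup_pos_body):
    # Express the fingertip in the cup's reference frame (translation only here).
    loc_in_cup_frame = tuple(f - c for f, c in zip(finger_pos_body, cup_pos_body))
    return cup_model.get(loc_in_cup_frame, "unknown")

# The cup sits at (20, 0, 0) in body coordinates; the finger is at its rim.
print(predict_touch((20, 0, 10), (20, 0, 0)))   # 'rim'
```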
link |
00:44:11.360
And then we did the same thing with vision. But so the idea that a reference frame is necessary
link |
00:44:16.240
to make a prediction when you're touching something or when you're seeing something
link |
00:44:19.520
and you're moving your eyes or you're moving your fingers, it's just a requirement to know what to
link |
00:44:24.160
predict. If I have a structure, I'm going to make a prediction. I have to know where it is.
link |
00:44:28.960
that I'm looking at or touching. So then we say, well, how do neurons make reference frames?
link |
00:44:34.160
It's not obvious. You know, XYZ coordinates don't exist in the brain. It's just not the way it works.
link |
00:44:39.840
So that's when we looked at the older part of the brain, the hippocampus and the entorhinal cortex,
link |
00:44:43.760
where we knew that in that part of the brain, there's a reference frame for a room or a reference
link |
00:44:49.520
frame for environment. Remember, I talked earlier about how you could make a map of this room.
link |
00:44:54.320
So we said, oh, they are implementing reference frames there. So we knew that reference
link |
00:45:00.240
frames needed to exist in every cortical column. And so that was a deductive thing. We
link |
00:45:06.320
just deduced it. So you take the old mammalian ability to know where you are in a particular
link |
00:45:14.800
space and you start applying that to higher and higher levels. Yeah. You first you apply it to
link |
00:45:19.600
like where your finger is. So here's what I think about it. The old part of the brain says,
link |
00:45:23.600
where's my body in this room? Yeah. The new part of the brain says, where's my finger relative to
link |
00:45:29.040
this object? Yeah. Where is a section of my retina relative to this object? I'm looking at one
link |
00:45:36.880
little corner. Where is that relative to this patch of my retina? Yeah. And then we take the same
link |
00:45:42.160
thing and apply the concepts, mathematics, physics, you know, humanity, whatever you want to think
link |
00:45:48.080
of. And eventually you're pondering your own mortality. Well, whatever. But the point is,
link |
00:45:52.880
when we think about the world, when we have knowledge about the world, how is that knowledge
link |
00:45:56.800
organized, Lex? Where is it in your head? The answer is, it's in reference frames. So the way I
link |
00:46:02.400
learned the structure of this water bottle, where the features are relative to each other,
link |
00:46:07.920
when I think about history or democracy or mathematics, there's same basic underlying
link |
00:46:12.960
structures happening. There are reference frames that the knowledge you're acquiring gets assigned
link |
00:46:16.480
to. So in the book, I go through examples like mathematics and language and politics.
link |
00:46:21.040
But the evidence is very clear in the neuroscience. The same mechanism that we used to model this
link |
00:46:26.400
coffee cup, we're going to use to model high level thoughts, or the demise of humanity,
link |
00:46:32.080
whatever you want to think about. It's interesting to think about how different are the representations
link |
00:46:36.640
of those higher dimensional concepts, higher level concepts, how different the representation
link |
00:46:43.760
there is in terms of reference frames versus spatial. But interesting thing, it's a different
link |
00:46:49.920
application, but it's the exact same mechanism. But isn't there some aspect to higher level
link |
00:46:57.120
concepts that they seem to be hierarchical? They just seem to integrate a lot of information
link |
00:47:02.720
into them. So are physical objects. So take this water bottle. I'm not partial to this
link |
00:47:10.320
brand, but this is a Fiji water bottle, and it has a logo on it. I use this example in my book,
link |
00:47:16.480
in my book, our company's coffee cup has a logo on it. But this object is hierarchical.
link |
00:47:23.680
It's got like a cylinder and a cap, but then it has this logo on it, and the logo has a word.
link |
00:47:27.600
The word has letters, the letters have different features. And so I don't have to remember,
link |
00:47:32.720
I don't have to think about this. So I say, oh, there's a Fiji logo on this water bottle. I don't
link |
00:47:36.160
have to go through and say, oh, what is the Fiji logo? It's the F and I and the J and I, and there's
link |
00:47:40.560
a hibiscus flower. And oh, it has a, you know, the stamen on it. I don't have to do that. I just
link |
00:47:45.600
incorporate all of that in some sort of hierarchical representation. I say, you know, put this logo on
link |
00:47:51.600
this water bottle. And then the logo has a word and the word has letters, all hierarchical.
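One way to picture that kind of compositional reuse, as a rough illustration only, is a model that stores references to previously learned sub-models at locations rather than re-describing their contents each time. The names and coordinates below are invented.

```python
# Illustration only: hierarchical objects as references to previously learned models.
# "fiji_logo" is stored once and simply referenced at a location on the bottle.

models = {
    "letter_F": {"features": ["vertical stroke", "two horizontal strokes"]},
    "fiji_logo": {
        "components": {          # sub-model name -> location within the logo
            "letter_F": (0, 0),
            "letter_I": (1, 0),
            "letter_J": (2, 0),
            "letter_I_2": (3, 0),
            "hibiscus_flower": (4, 1),
        }
    },
    "water_bottle": {
        "components": {
            "cylinder_body": (0, 0),
            "cap": (0, 10),
            "fiji_logo": (0, 5),  # the whole logo is one reference, not its letters
        }
    },
}

def expand(name, depth=0, known=models):
    """Print the hierarchical structure by following references to sub-models."""
    print("  " * depth + name)
    for part in known.get(name, {}).get("components", {}):
        expand(part, depth + 1)

expand("water_bottle")
```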
link |
00:47:57.440
It's all that stuff is big. It's amazing that the brain instantly just does all that. The idea
link |
00:48:02.000
that there's, there's water, it's liquid, and the idea that you can drink it when you're thirsty,
link |
00:48:08.080
the idea that there's brands. And then there's like, all of that information is instantly
link |
00:48:13.840
like built into the whole thing once you perceive it. So I wanted to get back to your point about
link |
00:48:18.240
hierarchical representation. The world itself is hierarchical, right? And I can take this
link |
00:48:23.040
microphone in front of me. I know inside there's going to be some electronics. I know there's
link |
00:48:26.240
going to be some wires and I know there's going to be a little diaphragm that moves back and forth.
link |
00:48:30.400
I don't see that, but I know it. So everything in the world is hierarchical. Just go into a room.
link |
00:48:35.840
It's composed of other components. The kitchen has a refrigerator, you know, the refrigerator has a
link |
00:48:39.840
door, the door has a hinge, the hinge has screws and a pin. So anyway, the modeling system that
link |
00:48:46.080
exists in every cortical column learns the hierarchical structure of objects. So it's a
link |
00:48:52.000
very sophisticated modeling system in this grain of rice. It's hard to imagine, but this
link |
00:48:55.360
grain of rice can do really sophisticated things. It's got a hundred thousand neurons in it.
link |
00:49:00.240
It's very sophisticated. So that same mechanism that can model a water bottle or a coffee cup
link |
00:49:07.440
can model conceptual objects as well. That's the beauty of this discovery that this guy,
link |
00:49:13.600
Vernon Mountcastle, made many, many years ago, which is that there's a single cortical algorithm
link |
00:49:18.720
underlying everything we're doing. So common sense concepts and higher level concepts are
link |
00:49:24.960
all represented in the same way. They're based on the same mechanisms. It's a little bit like
link |
00:49:29.840
computers, right? All computers are universal Turing machines. Even the little teeny one
link |
00:49:34.640
that's in my toaster and the big one that's running some cloud servers someplace.
link |
00:49:40.080
They're all running on the same principle. They can be applied to different things. So the brain is all
link |
00:49:44.080
built on the same principle. It's all about learning these models, structured models using
link |
00:49:48.960
movement and reference frames. And it can be applied to something as simple as a water bottle
link |
00:49:54.480
and a coffee cup. And it can be like just thinking like, what's the future of humanity? And, you
link |
00:49:58.240
know, why do you have a hedgehog on your desk? I don't know. Nobody knows. Well, I think it's
link |
00:50:05.840
a hedgehog. That's right. It's a hedgehog in the fog. It's a Russian reference. Does it give you any
link |
00:50:14.080
inclination or hope about how difficult that is to engineer common sense reasoning?
link |
00:50:19.200
So how complicated is this whole process? So looking at the brain, is this a marvel of
link |
00:50:26.480
engineering? Or is it pretty dumb stuff stacked on top of each other through a pretty extensive
link |
00:50:32.240
copying process? Can it be both? Can it be both, right? I don't know if it can be both, because if it's
link |
00:50:39.760
an incredible engineering job, that means evolution did a lot of work.
link |
00:50:46.480
Yeah, but then it just copied that, right? So as I said earlier, the figuring out how to model
link |
00:50:52.960
something like a space is really hard. And evolution had to go through a lot of tricks,
link |
00:50:57.760
and these cells I was talking about, these grid cells and place cells, they're really complicated.
link |
00:51:01.760
This is not simple stuff. This neural tissue works on these really unexpected weird mechanisms.
link |
00:51:08.800
But it did it. It figured it out. But now you could just make lots of copies of it.
link |
00:51:13.680
But then finding, yeah, so it's a very interesting idea that it's a lot of copies
link |
00:51:18.320
of a basic mini brain. But the question is, how difficult it is to find that mini brain that
link |
00:51:25.440
you can copy and paste effectively? Well, today, we know enough to build this. I'm sitting here with,
link |
00:51:34.880
you know, I know the steps we have to go through. There's still some engineering problems to solve,
link |
00:51:38.800
but we know enough. And it's not like, Oh, this is an interesting idea, we have to go think about
link |
00:51:44.400
it for another few decades. No, we actually understand it in pretty good detail. Not all the details,
link |
00:51:50.000
but most of them. So it's complicated, but it is an engineering problem. So in my company,
link |
00:51:56.480
we are working on that. We basically have the roadmap for how we do this. It's not going to take
link |
00:52:02.240
decades. It's more like a few years, optimistically, but I think that's possible. You know,
link |
00:52:10.480
complex things. If you understand them, you can build them. So in which domain do you think it's
link |
00:52:14.640
best to build them? Are we talking about robotics, like entities that operate in the physical world
link |
00:52:23.200
that are able to interact with that world? Are we talking about entities that operate in the
link |
00:52:26.880
digital world? Are we talking about something more like, more specific, like is done in the
link |
00:52:33.200
machine learning community, where you look at natural language or computer vision?
link |
00:52:37.520
Where do you think is easiest? It's the first, it's the first two more than the third one,
link |
00:52:43.520
I would say. Again, let's just use computers as an analogy. The pioneers of computing, people
link |
00:52:52.000
like John von Neumann and Turing, they created this thing, you know, we now call the universal
link |
00:52:56.880
Turing machine, which is a computer, right? Did they know how it was going to be applied,
link |
00:53:00.800
where it was going to be used, you know, could they envision any of the future? No,
link |
00:53:04.320
they just said, this is like a really interesting computational idea about algorithms and how you
link |
00:53:10.320
can implement them in a machine. And we're doing something similar to that today, like we are,
link |
00:53:17.680
we are building this sort of universal learning principle that can be applied to many, many
link |
00:53:22.960
different things. But the robotics piece of that, the interactive.
link |
00:53:27.520
Okay, all right. Let us be specific. You can think of this cortical column as what we call a
link |
00:53:31.920
sensory motor learning system. It has the idea that there's a sensor, and then it's moving.
link |
00:53:36.720
That sensor can be physical. It could be like my finger, and it's moving in the world. It could
link |
00:53:41.200
be like my eye, and it's physically moving. It can also be virtual. So it could be, an example
link |
00:53:48.080
would be, I could have a system that lives on the internet that actually samples information
link |
00:53:54.080
on the internet and moves by following links. That's, that's a sensory motor system. So
link |
00:53:58.320
it's just something that echoes the process of a finger moving along.
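As a loose, hypothetical sketch of such a virtual sensory-motor system: "sensing" is sampling a page, "moving" is following a link, and the system learns which page each movement leads to. The tiny page graph below is invented for illustration.

```python
# Loose sketch of a virtual sensory-motor loop: sensing = reading a page,
# moving = following a link. The tiny "web" below is invented.

web = {
    "home":  {"text": "welcome",  "links": ["about", "blog"]},
    "about": {"text": "our team", "links": ["home"]},
    "blog":  {"text": "posts",    "links": ["home", "about"]},
}

learned_transitions = {}   # (current page, link followed) -> page that was reached

def step(current_page, link):
    """One sensory-motor step: predict, move, sense, and learn."""
    prediction = learned_transitions.get((current_page, link), "unknown")
    destination = link                      # "moving" = following the link
    sensed = web[destination]["text"]       # "sensing" = sampling the new page
    learned_transitions[(current_page, link)] = destination
    print(f"at {current_page}, followed {link}: predicted {prediction}, sensed '{sensed}'")
    return destination

page = "home"
for chosen_link in ["about", "home", "about"]:
    page = step(page, chosen_link)
```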
link |
00:54:02.800
But in a very, very loose sense, it's like, again, learning is inherently about
link |
00:54:08.720
discovering the structure in the world, and to discover the structure in the world,
link |
00:54:11.440
you have to move through the world, even if it's a virtual world, even if it's a conceptual world,
link |
00:54:16.240
you have to move through it. It doesn't all exist in one place; it has some structure to it.
link |
00:54:21.360
So here's a couple of predictions, getting at what you're talking about.
link |
00:54:25.840
So in humans, the same algorithm does robotics, right? It moves my arms, my eyes, my body, right?
link |
00:54:34.560
And so in the future, to me, robotics and AI will merge. They're not going to be
link |
00:54:39.040
separate fields, because the algorithms for really controlling robots
link |
00:54:44.160
are going to be the same algorithms we have in our brain, these
link |
00:54:47.280
sensory motor algorithms. Today we're not there, but I think that's going to happen.
link |
00:54:50.400
And then, but not all AI systems will have to be robots. You can have systems that
link |
00:54:58.320
have very different types of embodiments. Some will have physical movements, some will
link |
00:55:01.920
not have physical movements. It's a very generic learning system. Again, it's like computers,
link |
00:55:07.680
the Turing machine, it doesn't say how it's supposed to be implemented. It doesn't
link |
00:55:10.880
tell you how big it is. It doesn't tell you what you can apply it to, but it's an interesting,
link |
00:55:14.000
it's a computational principle. The cortical column equivalent is a computational principle about
link |
00:55:19.360
learning. It's about how you learn and it can be applied to a gazillion things. I think
link |
00:55:23.760
the impact of AI is going to be as large, if not larger, than computing has been
link |
00:55:29.200
in the last century, by far, because it's getting at a fundamental thing. It's not a vision
link |
00:55:34.480
system or a hearing system. It is a learning
link |
00:55:39.360
system. It's a fundamental principle, how you learn the structure in the world, how you gain
link |
00:55:42.960
knowledge and become intelligent. And that's what the Thousand Brains Theory says is going on. And we have a
link |
00:55:47.520
particular implementation in our head, but it doesn't have to be like that at all. Do you think
link |
00:55:51.280
there's going to be some kind of impact? Okay, let me ask it another way. What do increasingly
link |
00:55:58.720
intelligent AI systems do with us humans in the following way? Like, how hard is the human in a
link |
00:56:06.480
loop problem? How hard is it to interact the finger on the coffee cup equivalent of having a
link |
00:56:13.600
conversation with a human being? So how hard is it to fit into our little human world?
link |
00:56:20.160
I don't, I think it's a lot of engineering problems. I don't think it's a fundamental problem.
link |
00:56:25.200
I could ask you the same question. How hard is it for computers to fit into a human world?
link |
00:56:29.440
Right. That, I mean, that's essentially what I'm asking. Like, how
link |
00:56:36.000
elitist are we as humans? Will we try to keep out these systems?
link |
00:56:41.600
I don't know. I'm not sure. I think I'm not sure that's the right question. Let's look at
link |
00:56:47.440
computers as an analogy. Computers are million times faster than us. They do things we can't
link |
00:56:52.080
understand. Most people have no idea what's going on when they use computers, right? How
link |
00:56:56.480
do we integrate them into our society? Well, we don't think of them as their own entities. They're not
link |
00:57:02.320
living things. We don't afford them rights. We, we rely on them. Our survival as seven billion
link |
00:57:12.160
people or something like that is relying on computers now.
link |
00:57:15.920
Don't you think that's a fundamental problem that we see them as something we can't,
link |
00:57:21.120
we don't give rights to? Computers?
link |
00:57:23.280
So yeah, computers. So robots, computers, intelligent systems, it feels like for them to
link |
00:57:28.320
operate successfully, they would need to have a lot of the elements that we would start having
link |
00:57:36.160
to think about, like, should this entity have rights?
link |
00:57:39.760
I don't think so. I think it's tempting to think that way. First of all, I don't think anyone,
link |
00:57:45.520
hardly anyone thinks that about computers today. No one says, oh, this thing needs rights. I
link |
00:57:49.840
shouldn't be able to turn it off. Or, you know, if I throw it in the trash can, you know, and hit
link |
00:57:53.760
it with a sledgehammer, am I performing a criminal act? No, no one thinks that. And now we think
link |
00:57:59.920
about intelligent machines, which is where you're going. And, and all of a sudden we're like, well,
link |
00:58:06.800
now we can't do that. I think the basic problem we have here is that people think intelligent
link |
00:58:11.040
machines will be like us. They're going to have the same emotions as we do, the same feelings as
link |
00:58:15.360
we do. What if I can build an intelligent machine that absolutely couldn't care less about whether
link |
00:58:20.320
it was on or off or destroyed or not? It just doesn't care. It's just like a map. It's just
link |
00:58:24.000
a modeling system. It has no desires to live, nothing. Is it possible to create a system that
link |
00:58:31.760
can model the world deeply and not care about whether it lives or dies? Absolutely. No question
link |
00:58:38.080
about it. To me, that's not 100% obvious. It's obvious to me. So we can, we can debate if you
link |
00:58:43.760
want. Where does your, where does your desire to live come from? It's an old evolutionary design.
link |
00:58:51.760
I mean, we can argue, does it really matter if we live or not? Objectively no, right? We're all
link |
00:58:57.200
going to die eventually. But evolution makes us want to live. Evolution makes us want to fight
link |
00:59:04.640
to live. Evolution makes us want to care for and love one another and to care for our children and
link |
00:59:09.600
our relatives and our family and so on. And those are all good things. But they come about
link |
00:59:16.160
not because we're smart, but because we're animals that evolved. You know, the hummingbird in my
link |
00:59:21.120
backyard cares about its offspring. You know, they, every living thing in some sense cares about,
link |
00:59:26.400
you know, surviving. But when we talk about creating intelligent machines, we're not creating
link |
00:59:31.600
life. We're not creating evolving creatures. We're not creating living things. We're just
link |
00:59:36.640
creating a machine that can learn really sophisticated stuff. And that machine, it may even be able to
link |
00:59:41.280
talk to us, but it's not going to have a desire to live unless somehow we put that
link |
00:59:47.120
into the system. Well, there's learning, right? The thing is, you don't learn to want to
link |
00:59:52.720
live. It's built into you. It's, well, people like Ernest Becker argue. So, okay, there's the fact
link |
01:00:00.400
of finiteness of life. The way we think about it is something we learned, perhaps. So, okay.
link |
01:00:08.560
Yeah. And some people decide they don't want to live. And some people decide, you know,
link |
01:00:12.880
you can, but the desire to live is built in DNA, right? But I think what I'm trying to get to is,
link |
01:00:18.240
in order to accomplish goals, it's useful to have the urgency of mortality. So what the Stoics
link |
01:00:23.280
talked about is meditating on your mortality. It might be a very useful thing to meditate on death
link |
01:00:31.440
and have the urgency of death. And to realize, to conceive of yourself as an entity that operates
link |
01:00:38.560
in this world that eventually will no longer be a part of this world. And actually conceive of
link |
01:00:43.040
yourself as a conscious entity might be very useful for you to be a system that makes sense of the
link |
01:00:49.600
world. Otherwise, you might get lazy. Well, okay. We're going to build these machines, right?
link |
01:00:55.760
So, we're talking about building AI. But we're building the equivalent of the
link |
01:01:03.440
cortical columns. The neocortex. The neocortex. And the question is, where do they arrive at?
link |
01:01:11.200
Because we're not hard coding everything in. Well, in terms of, if you build the neocortex
link |
01:01:17.120
equivalent, it will not have any of these desires or emotional states. Now, you could argue that
link |
01:01:23.520
that neocortex won't be useful unless I give it some agency, unless I give it some desire,
link |
01:01:28.240
unless I give it some motivation. Otherwise, it'll just be lazy and do nothing, right? You
link |
01:01:31.760
could argue that. But on its own, it's not going to do those things. It's just not, it's just not
link |
01:01:36.960
going to sit there and say, I understand the world. Therefore, I care to live. No, it's not going to
link |
01:01:41.280
do that. It's just going to say, I understand the world. Why is that obvious to you? Do you think
link |
01:01:46.320
it's possible? Okay, let me ask it this way. Do you think it's possible it will at least assign to
link |
01:01:53.040
itself agency and perceive itself in this world as being a conscious entity as a useful way to
link |
01:02:04.240
operate in the world and to make sense of the world? I think intelligent machine can be conscious,
link |
01:02:09.360
but that does not, again, imply any of these desires and goals that you're worried about.
link |
01:02:16.800
We can talk about what it means for a machine to be conscious.
link |
01:02:20.560
By the way, not worry about, but get excited about. It's not necessarily that we should worry
link |
01:02:24.640
about it. I think there's a legitimate problem, or not a problem, a question to ask: if you build
link |
01:02:29.760
this modeling system, what's it going to model? What's its desire? What's its goal? What are we
link |
01:02:36.320
applying it to? That's an interesting question. One thing, and it depends on the application.
link |
01:02:44.080
It's not something that inherent to the modeling system. It's something we apply to the modeling
link |
01:02:48.000
system in a particular way. If I wanted to make a really smart car, it would have to know about
link |
01:02:54.560
driving in cars and what's important in driving in cars. It's not going to figure that out on its
link |
01:02:59.680
own. It's not going to sit there and say, you know, I've understood the world and I've decided,
link |
01:03:03.760
you know, no, no, no, we're going to have to tell it. We're going to have to say like,
link |
01:03:06.720
so I imagine I make this car really smart. It learns about your driving habits. It learns
link |
01:03:10.880
about the world. It's just, you know, is it one day going to wake up and say, you know what,
link |
01:03:16.320
I'm tired of driving and doing what you want. I think I have better ideas about how to spend my
link |
01:03:21.040
time. Okay. No, it's not going to do that. Well, part of me is playing a little bit of devil's
link |
01:03:25.760
advocate, but part of me is also trying to think through this because I've studied cars quite a
link |
01:03:32.000
bit and I've studied pedestrians and cyclists quite a bit. And there's a part of me that thinks
link |
01:03:38.560
that there needs to be more intelligence than we realize in order to drive successfully,
link |
01:03:46.080
that game theory of human interaction seems to require some deep understanding of human nature.
link |
01:03:54.800
Okay. When a pedestrian crosses the street, there's some sense. They look at a car usually
link |
01:04:05.600
and then they look away. There's some sense in which they say, I believe that you're not going
link |
01:04:11.600
to murder me. You don't have the guts to murder me. This is the little dance of pedestrian car
link |
01:04:16.960
interaction is saying, I'm going to look away and I'm going to put my life in your hands
link |
01:04:22.640
because I think you're human. You're not going to kill me. And then the car in order to successfully
link |
01:04:28.080
operate in like Manhattan streets has to say, no, no, no, no, I am going to kill you like a
link |
01:04:34.080
little bit. There's a little bit of this weird inkling of mutual murder and that's a dance
link |
01:04:39.680
and then somehow successfully operate through that. Do you think you were born with that?
link |
01:04:43.120
Or did you learn that social interaction? I think it might have a lot of the same elements that
link |
01:04:50.000
you're talking about, which is we're leveraging things we were born with and applying them in the
link |
01:04:55.760
context. I would have said that that kind of interaction is learned, because people in
link |
01:05:03.600
different cultures have different interactions like that. If you cross the street in different
link |
01:05:06.800
cities and different parts of the world, they have different ways of interacting. I would say
link |
01:05:10.240
that's learned and I would say an intelligent system can learn that too, but that does not
link |
01:05:14.720
lead to anything more. And an intelligent system can understand humans. It could understand that just like I can
link |
01:05:23.040
study an animal and learn something about that animal. I could study apes and learn something
link |
01:05:27.440
about their culture and so on. I don't have to be an ape to know that. I may not be completely,
link |
01:05:32.640
but I can understand something. So intelligent machine can model that. That's just part of
link |
01:05:36.400
the world. It's just part of the interactions. The question we're trying to get at, will the
link |
01:05:40.560
intelligent machine have its own personal agency that's beyond what we assigned to it or its own
link |
01:05:46.400
personal goals, or will it evolve and create these things on its own? My confidence comes from understanding
link |
01:05:53.280
the mechanisms I'm talking about creating. This is not hand wavy stuff. It's down in the details.
link |
01:05:59.520
I'm going to build it and I know what it's going to look like and I know how it's going to behave.
link |
01:06:02.880
I know what the kind of things it could do and the kind of things it can't do. Just like when I
link |
01:06:06.400
build a computer, I know it's not going to on its own decide to put another register inside
link |
01:06:11.280
of it. It can't do that. There's no way. No matter what your software does, it can't add a register
link |
01:06:15.520
to the computer. So in this way, when we build AI systems, we have to make choices about how we
link |
01:06:25.600
embed them. So I talk about this in the book. I said, intelligent system is not just a neocortex
link |
01:06:30.880
equivalent. You have to have that, but it has to have some kind of embodiment, physical, virtual.
link |
01:06:36.800
It has to have some sort of goals. It has to have some sort of ideas about dangers, about
link |
01:06:41.280
things it shouldn't do like we build in safeguards in the systems. We have them in our bodies. We
link |
01:06:47.760
have put them in our cars. My car follows my directions until it sees I'm about to
link |
01:06:53.280
hit something and it ignores my directions and puts the brakes on. So we can build those things in.
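A toy sketch of that kind of engineered safeguard, with invented numbers and not any real vehicle's code: the controller obeys the driver's command except when a hard-coded safety check overrides it.

```python
# Toy illustration of a built-in safeguard: the system obeys commands
# unless a designer-supplied safety rule overrides them. All numbers are invented.

def control(driver_throttle, distance_to_obstacle_m):
    """Return the throttle actually applied, with a hard-coded safety override."""
    SAFE_DISTANCE_M = 5.0
    if distance_to_obstacle_m < SAFE_DISTANCE_M:
        # The override is something the designers put in,
        # not something the system "decided" on its own.
        return 0.0  # brake: ignore the driver's command
    return driver_throttle

print(control(driver_throttle=0.8, distance_to_obstacle_m=20.0))  # -> 0.8
print(control(driver_throttle=0.8, distance_to_obstacle_m=2.0))   # -> 0.0
```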
link |
01:06:58.240
So that's a very interesting problem, how to build those in. I think where my opinion differs
link |
01:07:06.160
from most people's about the risks of AI is that people assume that somehow those things will
link |
01:07:10.960
just appear automatically and it will evolve. And intelligence itself begets that stuff or
link |
01:07:16.560
requires it. But it doesn't. Intelligence, the neocortex equivalent, doesn't require this. The
link |
01:07:20.640
neocortex equivalent just says, I'm a learning system. Tell me what you want me to learn. And
link |
01:07:25.520
you'll ask me questions and I'll tell you the answers. But again, it's like a map.
link |
01:07:31.440
It doesn't, a map has no intent about things, but you can use it to solve problems.
link |
01:07:37.360
Okay. So building, engineering the neocortex in itself is just creating an intelligent
link |
01:07:44.560
prediction system. Modeling system. Sorry, modeling system. You can use it to then make
link |
01:07:50.480
predictions. But you can also put it inside a thing that's actually acting in this world.
link |
01:07:56.800
You have to put it inside something. Again, think of the map analogy. A map on its own
link |
01:08:02.000
doesn't do anything. It's just inert. It can learn, but it's inert. So we have to embed
link |
01:08:07.280
it somehow in something to do something. So what's your intuition here? You had a conversation
link |
01:08:13.200
with Sam Harris recently that was sort of, you've had a bit of a disagreement and you're
link |
01:08:19.920
sticking on this point. Elon Musk, Stuart Russell kind of worry about the existential
link |
01:08:28.560
threats of AI. What's your intuition? Why, if we engineer an increasingly intelligent
link |
01:08:35.360
neocortex type of system in the computer, why that shouldn't be a thing that we...
link |
01:08:40.960
It was interesting to use the word intuition and Sam Harris used the word intuition too.
link |
01:08:45.680
And when he used that word intuition, I immediately stopped and said,
link |
01:08:48.640
that cuts to the problem. He's using intuition. I'm not speaking about my intuition.
link |
01:08:53.760
I'm speaking about something I understand, something I'm going to build, something I am
link |
01:08:56.720
building, something I understand completely or at least well enough to know what it's all
link |
01:09:01.440
about. I'm not guessing. I know what this thing's going to do. And I think most people who are worried,
link |
01:09:07.840
they have trouble separating things out. They don't have the knowledge or the understanding about
link |
01:09:12.560
like, what is intelligence? How's it manifest in the brain? How's it separate from these other
link |
01:09:16.240
functions in the brain? And so they imagine it's going to be human like or animal like.
link |
01:09:20.560
It's going to have the same sort of drives and emotions we have, but there's no reason for that.
link |
01:09:27.120
That's just because there's an unknown. If the unknown is like, oh my God,
link |
01:09:31.280
I don't know what this is going to do. We have to be careful. It could be like us,
link |
01:09:33.520
but really smarter. I'm saying, no, it won't be like us. It'll be really smarter,
link |
01:09:37.520
but it won't be like us at all. And I'm coming from that not because I'm just
link |
01:09:43.040
guessing. I'm not using intuition. I'm basically like, okay, I understand how this thing works.
link |
01:09:47.760
This is what it does. Does that make sense to you? Okay. But to push back, so I also disagree with
link |
01:09:54.560
the intuitions that Sam has, but I also disagree with what you just said. You know,
link |
01:10:01.920
what's a good analogy. So if you look at the Twitter algorithm in the early days,
link |
01:10:07.840
just recommender systems, you can understand how recommender systems work. What you can't
link |
01:10:13.760
understand in the early days is when you apply that recommender system at scale to thousands
link |
01:10:18.480
and millions of people, how that can change societies. So the question is, yes, you're just
link |
01:10:25.440
saying this is how an engineered neocortex works, but when you have a very useful
link |
01:10:31.840
TikTok type of service that goes viral, when your neocortex goes viral, and then millions of
link |
01:10:39.280
people start using it, can that destroy the world? No. Well, first of all, to step back,
link |
01:10:43.760
one thing I want to say is that AI is a dangerous technology. I'm not denying that.
link |
01:10:48.480
All technology is dangerous. Well, and AI, maybe particularly so. Okay. So
link |
01:10:53.600
am I worried about it? Yeah, I'm totally worried about it. But the narrow component
link |
01:10:58.160
we're talking about now is the existential risk of AI. So I want to make that distinction because
link |
01:11:03.360
I think AI can be applied poorly. It can be applied in ways that people aren't going to understand
link |
01:11:09.360
the consequences of it. These are all potentially very bad things, but they're not the AI system
link |
01:11:18.400
creating this existential risk on its own. And that's the only place that I disagree with other
link |
01:11:22.800
people. Right. So I think the existential risk thing is humans are really damn good at surviving.
link |
01:11:29.280
So to kill off the human race would be very, very difficult. Yes, but I'll go further. I don't think
link |
01:11:36.880
AI systems are ever going to try to. I don't think AI systems are ever going to like say,
link |
01:11:41.520
I'm going to ignore you. I'm going to do what I think is best. I don't think that's going to
link |
01:11:46.240
happen, at least not in the way I'm talking about it. So the Twitter recommendation algorithm
link |
01:11:53.680
is an interesting example. Let's use computers as an analogy again. I build a computer. It's a
link |
01:12:00.480
universal computing machine. I can't predict what people are going to use it for. They can build
link |
01:12:04.160
all kinds of things. They can even create computer viruses. It's all kinds of stuff.
link |
01:12:10.000
So there's some unknown about its utility and about where it's going to go. But in the other
link |
01:12:14.560
hand, I pointed out that once I build a computer, it's not going to fundamentally change how it
link |
01:12:19.520
computes. It's like, I use the example of a register, which is an internal part of a
link |
01:12:23.280
computer. You know, I say it can't just add one, because computers don't evolve. They don't replicate.
link |
01:12:28.880
They don't evolve. They don't, you know, the physical manifestation of the computer itself
link |
01:12:32.400
is not going to change. There's certain things it can't do. Right. So we can break this into
link |
01:12:37.200
things that are possible to happen that we can't predict, and things that are just impossible to
link |
01:12:41.200
happen. Unless we go out of our way to make them happen, they're not going to happen unless somebody
link |
01:12:45.280
makes them happen. Yeah. So there's, there's a bunch of things to say. One is the physical
link |
01:12:49.760
aspect, which you're absolutely right. We have to build a thing for it to operate in the physical
link |
01:12:55.200
world and you can just stop building them. You know, the moment they're not doing the thing you
link |
01:13:02.240
want them to do, you just change the design. The question is, I mean,
link |
01:13:06.400
it's possible in the physical world, and this is probably longer term, that you automate the
link |
01:13:11.360
building. It makes a lot of sense to automate the building. There's a lot of factories
link |
01:13:15.920
that are doing more and more and more automation to go from raw resources to the final product.
link |
01:13:21.360
It's possible to imagine that it's obviously much more efficient to create a factory
link |
01:13:26.800
that's creating robots that do something, you know, they do something extremely useful for society.
link |
01:13:32.880
It could be personal assistants. It could be your toaster, but a toaster
link |
01:13:38.240
that has a much deeper knowledge of your culinary preferences. And that could. Well,
link |
01:13:44.480
I think now you've hit on the right thing. The real thing we need to be worried about next is
link |
01:13:47.920
self replication. Right. That is the thing, whether in the physical world or even the virtual
link |
01:13:53.360
world. Self replication, because self replication is dangerous. We're probably more likely to be
link |
01:13:58.560
killed by a virus, you know, a human engineered virus. You know, this
link |
01:14:03.920
technology is getting to the point where almost anybody, well, not anybody, but a lot of people could create
link |
01:14:08.240
a human engineered virus that could wipe out humanity. That is really dangerous. No intelligence
link |
01:14:13.760
required. Just self replication. So, so we need to be careful about that. So when I think about,
link |
01:14:22.400
you know, AI, I'm not thinking about robots building robots. Don't do that. Don't build a,
link |
01:14:27.280
you know, just. Well, that's because you're interested in creating intelligence. It seems
link |
01:14:32.000
like self replication is a good way to make a lot of money. Well, fine. But so is, you know,
link |
01:14:38.880
maybe editing viruses is a good way too. I don't know. The point is, if as a society,
link |
01:14:44.240
when we want to look at existential risks, the existential risks we face that we can control
link |
01:14:51.200
almost all revolve around self replication. Yes. The question is, I don't see a good way to make
link |
01:14:58.480
a lot of money by engineering viruses and deploying them on the world. There could be,
link |
01:15:02.640
there could be applications that are useful. But let's separate out. Let's separate out. I mean,
link |
01:15:07.200
you don't need to. You only need some, you know, terrorists who want to do it because it doesn't
link |
01:15:10.080
take a lot of money to make viruses. Let's just separate out what's risky and what's not risky.
link |
01:15:15.920
I'm arguing that the intelligence side of this equation is not risky. It's not risky at all.
link |
01:15:21.280
It's the self replication side of the equation that's risky. And I'm not dismissing that. I'm
link |
01:15:26.320
scared as hell. It's like the paperclip maximizer thing. Those are often like talked about in the
link |
01:15:33.520
same conversation. I think you're right. Like creating ultra intelligence, super intelligent
link |
01:15:40.720
systems is not necessarily coupled with a self replicating, arbitrarily self replicating
link |
01:15:46.800
systems. Yeah. And you don't get evolution unless you're self replicating. Yeah. And so I think
link |
01:15:51.840
that's just this argument that people have trouble separating those two out. They just think, oh,
link |
01:15:56.960
yeah, intelligence looks like us. And look how, look at the damage we've done to this planet.
link |
01:16:00.960
Like how we've, you know, destroyed all these other species. Yeah. Well, we replicate,
link |
01:16:04.640
with 8 billion of us, or 7 billion of us, now. I think the idea is that the more intelligent
link |
01:16:12.240
the systems we're able to build, from a capitalist perspective of creating
link |
01:16:18.160
products, the more tempting it becomes to create self reproducing systems.
link |
01:16:22.400
All right. So let's say that's true. So does that mean we don't build intelligent systems? No,
link |
01:16:26.880
that means we regulate, we understand the risks, we regulate them. You know, look, there's a lot
link |
01:16:34.400
of things we could do as society, which have some sort of financial benefit to someone,
link |
01:16:38.320
which could do a lot of harm. And we have to learn how to regulate those things.
link |
01:16:42.560
We have to learn how to deal with those things. I will argue this. I would say the opposite.
link |
01:16:46.080
Like I would say having intelligent machines at our disposal will actually help us in the end more
link |
01:16:52.000
because it'll help us understand these risks better, help us mitigate these risks better.
link |
01:16:55.440
There might be ways of saying, oh, well, how do we solve climate change problems? You know,
link |
01:16:59.040
how do we do this or how do we do that? That just like computers are dangerous in the hands of the
link |
01:17:05.680
wrong people, but they've been so great for so many other things, we live with those dangers.
link |
01:17:09.840
And I think we have to do the same with intelligent machines. But we have to be
link |
01:17:13.520
constantly vigilant about this idea of A, bad actors doing bad things with them and B,
link |
01:17:20.080
don't ever, ever create a self replicating system. And by the way, I don't even know
link |
01:17:25.600
if you could create a self replicating system that uses a factory that's really dangerous.
link |
01:17:30.320
You know, nature's way of self replicating is so amazing.
link |
01:17:34.560
You know, it doesn't require anything. It just, you know, takes some resources and it goes,
link |
01:17:39.360
right? Yeah. If I said to you, you know what, we have to build, our goal is to build a factory
link |
01:17:45.360
that can make, that builds new factories. And it has to handle the end to end supply chain. It has to
link |
01:17:51.600
find the resources, get the energy. I mean, that's really hard. You know, no one's doing that in the
link |
01:17:59.360
next, you know, 100 years. I've been extremely impressed by the efforts of Elon Musk and Tesla
link |
01:18:05.760
to try to do exactly that, not from raw resources. Well, he actually, I think, states the goal is
link |
01:18:12.160
to go from raw resource to the final car in one factory. That's the main goal. Of course,
link |
01:18:19.440
it's not currently possible, but they're taking huge leaps. Well, he's not the only one to do that.
link |
01:18:24.160
This has been a goal for many industries for a long, long time.
link |
01:18:27.920
It's difficult to do. Well, a lot of people, what they do is instead they have like
link |
01:18:32.160
a million suppliers and then, you know, they manage them all.
link |
01:18:36.240
They co locate them and they kind of tie the systems together.
link |
01:18:40.560
It's fundamentally a distributed system. I think that also is not getting at the issue
link |
01:18:45.040
I was just talking about, which is self replication. I mean, self replication means
link |
01:18:51.760
there's no entity involved other than the entity that's replicating.
link |
01:18:57.680
Right. And so if there are humans in this, in the loop, that's not really self replicating, right?
link |
01:19:02.160
Unless somehow we're duped into it. But I also don't necessarily
link |
01:19:09.520
agree with you because you've kind of mentioned that AI will not say no to us.
link |
01:19:16.400
I just think they will. Yeah. Yeah. So like, I think it's a useful feature to build in.
link |
01:19:23.280
I'm just trying to like put myself in the mind of engineers to sometimes say no,
link |
01:19:29.760
you know. I gave an example earlier, right? I gave the example of my car, right?
link |
01:19:35.760
My car turns the wheel and applies the accelerator and the brake as I say,
link |
01:19:41.200
until it decides there's something dangerous. Yes. And then it doesn't do that.
link |
01:19:45.360
Yeah. Now, that was something it didn't decide to do. It's something we programmed into the car.
link |
01:19:52.880
And so good. It's a good idea, right? The question again, isn't like,
link |
01:19:57.600
if we create an intelligent system, will it ever ignore our commands? Of course it will sometimes.
link |
01:20:03.200
Is it going to do it because it came up with its own goals that serve its purposes and it
link |
01:20:08.960
doesn't care about our purposes? No, I don't think that's going to happen.
link |
01:20:12.480
Okay. So let me ask you about these super intelligent cortical systems that we engineer
link |
01:20:16.960
and us humans. Do you think with these entities operating out there in the world,
link |
01:20:24.160
what is the future, most promising future look like? Is it us merging with them?
link |
01:20:29.680
Or is it us? Like, how do we keep us humans around when you have increasingly intelligent
link |
01:20:37.440
beings? One of the dreams is to upload our minds into the digital space. So can we just
link |
01:20:44.320
give our minds to these systems so they can operate on them? Is there some kind of more
link |
01:20:49.920
interesting merger or is there more? In the third part of my book, I talked about all these scenarios
link |
01:20:54.400
and let me just walk through them. Sure. The uploading the mind one. Yes.
link |
01:21:00.880
Extremely, really difficult to do. Like, we have no idea how to do this even remotely right now.
link |
01:21:08.080
So it would be a very long way away, but I make the argument you wouldn't like the result.
link |
01:21:14.320
And you wouldn't be pleased with the result. It's really not what you think it's going to be.
link |
01:21:18.480
Imagine I could upload your brain into a computer right now and now the computer's
link |
01:21:22.000
sitting there going, Hey, I'm over here. Great. Get rid of that old bio person. I don't need
link |
01:21:25.840
them. You're still sitting here. Yeah. What are you going to do? No, no, that's not me. I'm here.
link |
01:21:30.560
Right. Yeah. Are you going to feel satisfied then? But people imagine, look, I'm on my
link |
01:21:34.880
deathbed and I'm about to, you know, expire and I pushed the button and now I'm uploaded. But
link |
01:21:39.840
think about it a little differently. And so I don't think it's going to be a thing because
link |
01:21:44.320
people by the time we're able to do this, if ever, because you have to replicate the entire body,
link |
01:21:49.760
not just the brain. I walk through the issues. It's really substantial.
link |
01:21:56.000
Do you have a sense of what makes us us? Is there a shortcut where we could save only a certain
link |
01:22:01.920
part, the part that makes us truly us? No, but I think that machine would feel like it's you too.
link |
01:22:07.280
Right. Right. It's like, I have children, right? I have two daughters.
link |
01:22:12.400
They're independent people. I created them. Well, partly. Yeah. And
link |
01:22:15.600
Just because they're somewhat like me, I don't feel like I'm them and they don't feel
link |
01:22:22.720
like they're me. So if you split apart, you have two people. We can come back to
link |
01:22:26.160
what makes us us and what consciousness is. We can talk about that, but we don't have a remote
link |
01:22:30.800
consciousness. I'm not sitting there going, oh, I'm conscious of being in that system over there.
link |
01:22:35.280
So let's, let's stay on our topic. So one was uploading a brain. Yeah.
link |
01:22:40.800
It ain't going to happen in a hundred years, maybe a thousand, but I don't think people are going to
link |
01:22:44.960
want to do it. Then merging your mind with, uh, you know, the Neuralink thing, right? Like,
link |
01:22:51.840
again, really, really difficult. It's, it's one thing to make progress to control a prosthetic
link |
01:22:56.720
arm. It's another to have like a billion or several billion, you know, things and understanding what
link |
01:23:01.440
those signals mean. Like, it's one thing to say, okay, I can learn to think some patterns
link |
01:23:06.160
to make something happen. It's quite another thing to have a system, a computer, which actually
link |
01:23:10.960
knows exactly what cells it's talking to and how it's talking to them and interacting in a way
link |
01:23:14.720
like that. Very, very difficult. We're not anywhere close to that.
link |
01:23:19.440
Interesting. Can I, can I ask a question here? So for me, what makes that merger very difficult
link |
01:23:26.960
practically in the next 10, 20, 50 years is like literally the biology side of it, which is like,
link |
01:23:34.080
it's just hard to do that kind of surgery in a safe way. But your intuition is even the machine
link |
01:23:39.840
learning part of it, where the machine has to learn what the heck it's talking to. That's even
link |
01:23:45.440
hard. I think it's even harder. It's easy to do when you're talking about
link |
01:23:50.080
hundreds of signals. It's a totally different thing to be talking about billions of signals.
link |
01:23:55.600
So you don't think it's a raw machine learning problem. You don't think it could be
link |
01:23:59.680
learned? Well, I'm just saying, no, I think you'd have to have detailed knowledge. You'd have to
link |
01:24:04.080
know exactly what the types of neurons you're connecting to. I mean, in the brain, there's these
link |
01:24:08.320
neurons that do all different types of things. It's not like a neural network. It's a very
link |
01:24:11.360
complex organic system up here. We talked about the grid cells or the place cells, you know,
link |
01:24:15.600
you have to know what kind of cells you're talking to and what they're doing and how their
link |
01:24:18.320
timing works and all this stuff, which you can't do today. There's no way of doing that, right?
link |
01:24:23.360
But I think the problem, you're right that the biological aspect
link |
01:24:27.680
of like who wants to have a surgery and have this stuff inserted in your brain, that's a problem.
link |
01:24:31.760
But say we solve that problem. I think the information coding aspect is much worse.
link |
01:24:36.880
I think that's much worse. It's not like what they're doing today. Today, it's simple machine
link |
01:24:41.040
learning stuff because you're doing simple things. But if you want to merge your brain,
link |
01:24:45.360
like, I'm thinking on the internet, I've merged my brain with the machine and we're both doing it,
link |
01:24:49.920
that's a totally different issue. That's interesting. I tend to think, okay,
link |
01:24:54.320
yeah, if you have a super clean signal from a bunch of neurons at the start, you don't know
link |
01:25:01.040
what those neurons are. I think that's much easier than getting the clean signal.
link |
01:25:07.600
I think if you think about today's machine learning, that's what you would conclude.
link |
01:25:13.040
I'm thinking about what's going on in the brain and I don't reach that conclusion. So we'll have
link |
01:25:16.640
to see. Sure. But even then, I think there's kind of a sad future.
link |
01:25:22.560
Like, you know, do I have to plug my brain into a computer? I'm still a biological
link |
01:25:27.840
organism. I assume I'm still going to die. So what have I achieved? Right? You know,
link |
01:25:32.480
what have I achieved by doing some sort of merger? Oh, I disagree. We don't know what those are, but it
link |
01:25:37.680
seems like there could be a lot of different applications, like virtual reality, or
link |
01:25:42.480
expanding your brain's capability to, like, read Wikipedia. Yeah, but fine. But you're still
link |
01:25:49.120
a biological organism. Yes. Yes. You're still mortal. All right. So,
link |
01:25:53.280
what are you accomplishing? You're making your life in this short period of time better,
link |
01:25:57.200
right? Just like having the internet made our life better. Yeah. Yeah. Okay. So I think that's
link |
01:26:03.280
marginal. If I think about all the possible gains we can have here, that's a marginal one. It's
link |
01:26:08.320
an individual thing: hey, I'm better, you know, I'm smarter. But you'll find I'm not against it.
link |
01:26:15.280
I just don't think it's earth changing. But this is true of the internet.
link |
01:26:20.240
When each of us individuals are smarter, we get a chance to then share our smartness.
link |
01:26:24.800
We get smarter and smarter together as like, as a collective. This is kind of like the
link |
01:26:28.480
same colony. Why don't I just create an intelligent machine that doesn't have any of this biological
link |
01:26:32.480
nonsense? It does all the same things, everything, except don't burden it with my brain.
link |
01:26:38.720
Yeah. Right. It has a brain. It is smart. It's like my child, but it's much, much smarter than
link |
01:26:43.120
me. So I have a choice between doing some implant, doing some hybrid weird, you know,
link |
01:26:47.760
biological thing that's bleeding and has all these problems and is limited by my brain, or creating
link |
01:26:53.360
a system which is super smart that I can talk to that helps me understand the world that can read
link |
01:26:58.320
the internet, you know, read Wikipedia and talk to me. I guess my, the open questions there are
link |
01:27:04.640
what does the manifestation of superintelligence look like? So like, what are we going to,
link |
01:27:10.640
you talked about, why would I want to merge with AI? Like, what's the actual marginal benefit
link |
01:27:15.840
here? If I, if we have a super intelligent system, yeah, how will it make our life better?
link |
01:27:24.560
So that's a great question, but let's break it into little pieces. All right.
link |
01:27:28.720
On the one hand, it can make our life better in lots of simple ways. You mentioned like a care robot
link |
01:27:33.760
or something that helps me do things, cooks, I don't know what it does, right? Little things
link |
01:27:37.360
like that. We have better, smarter cars. We can have, you know, better agents, aides helping
link |
01:27:43.520
us in our work environment and things like that. To me, that's like the easy stuff, the simple stuff
link |
01:27:47.840
in the beginning. And so in the same way that computers made our lives better in ways, many,
link |
01:27:54.480
many ways, we will have those kinds of things. To me, the really exciting thing about AI is
link |
01:28:01.120
sort of its transcendent quality in terms of humanity. We're still biological
link |
01:28:06.400
organisms. We're still stuck here on earth. It's going to be hard for us to live anywhere else.
link |
01:28:10.400
I don't think you and I are going to want to live on Mars anytime soon. And, and we're flawed,
link |
01:28:18.720
you know, we may end up destroying ourselves. It's totally possible. We, if not completely,
link |
01:28:24.960
we could destroy our civilization. You know, let's just face the fact that we have issues here,
link |
01:28:30.080
but we can create intelligent machines that can help us in various ways. For example,
link |
01:28:34.000
one example I gave, and that sounds a little sci fi, but I believe this, if we really want to
link |
01:28:38.480
live on Mars, we'd have to have intelligent systems that go there and build the habitat for us,
link |
01:28:43.760
not humans. Humans are never going to do this. It's just too hard. But could we have a thousand or
link |
01:28:49.200
10,000, you know, engineered workers up there doing this stuff, building things, terraforming Mars?
link |
01:28:54.000
Sure. Maybe then we can move to Mars. But then if we want to go around the universe,
link |
01:28:58.640
should I send my children around the universe? Or should I send some intelligent machine,
link |
01:29:02.400
which is like a child that represents me and understands our needs here on earth that could
link |
01:29:07.520
travel through space? So it sort of, in some sense, intelligence allows us to transcend the
link |
01:29:13.920
limitations of our biology. And don't think of it as a negative thing. It's in some sense,
link |
01:29:20.800
my children transcend my biology too, because they live beyond me. And they represent me,
link |
01:29:28.720
and they also have their own knowledge, and I can impart knowledge to them. So intelligent
link |
01:29:31.840
machines would be like that too, but not limited like us. But the question is, there's so many
link |
01:29:37.840
ways that transcendence can happen. And the merger with AI and humans is one of those ways. So you
link |
01:29:44.320
said intelligent, basically beings or systems propagating throughout the universe representing
link |
01:29:51.360
us humans. They represent us humans in the sense they represent our knowledge and our history,
link |
01:29:56.560
not us individually. Right, right. But I mean, the question is, is it just a database
link |
01:30:05.600
with a really damn good model of the world? No, no, they're conscious, just like us.
link |
01:30:11.760
Okay. But just different. They're different. Just like my children are different. They're like me,
link |
01:30:16.720
but they're different. These are more different. I guess I take
link |
01:30:23.440
a very broad view of our life here on Earth. I say, you know, why are we living here? Are we
link |
01:30:29.120
just living because we live? Are we surviving because we can survive? Are we fighting just
link |
01:30:33.920
because we want to just keep going? What's the point of it? Right? So to me, the point,
link |
01:30:39.680
if I ask myself what the point of life is, what transcends that ephemeral sort of biological
link |
01:30:46.800
experience, to me, and this is my answer, is the acquisition of knowledge, to understand more
link |
01:30:54.080
about the universe and to explore. And that's partly to learn more, right? I don't view it as
link |
01:31:02.720
a terrible thing if the ultimate outcome of humanity is we create systems that are intelligent,
link |
01:31:09.920
that are our offspring, but they're not like us at all. And we stay here and live on Earth as long
link |
01:31:14.800
as we can, which won't be forever, but as long as we can. And, but that would be a great thing
link |
01:31:21.920
to do. It's not, it's not like a negative thing. Well, would you be okay then if the human
link |
01:31:31.840
species vanishes, but our knowledge is preserved and keeps being expanded by intelligent systems?
link |
01:31:38.560
I want our knowledge to be preserved and expanded. Yeah. Am I okay with humans dying? No, I don't
link |
01:31:45.600
want that to happen. But if it does happen, what if we were sitting here and we were the
link |
01:31:51.040
last two people on Earth who were saying, Lex, we blew it, it's all over, right? Wouldn't I feel
link |
01:31:56.000
better if I knew that our knowledge was preserved and that we had agents that knew about that,
link |
01:32:01.520
that were trans, you know, that left Earth? I would want that. It's better than not having that.
link |
01:32:06.640
You know, I make the analogy of like, you know, the dinosaurs, the poor dinosaurs, they lived for,
link |
01:32:10.080
you know, tens of millions of years. They raised their kids. They, you know, they,
link |
01:32:13.520
they fought to survive. They were hungry. They, they did everything we do. And then they're all
link |
01:32:18.560
gone. Yeah. Like, you know, and, and if we didn't discover their bones, nobody would ever know that
link |
01:32:24.960
they ever existed, right? Do we want to be like that? I don't want to be like that. There's a sad
link |
01:32:30.000
aspect to it. And it is kind of jarring to think about the possibility that a human-like intelligent
link |
01:32:37.680
civilization has previously existed on Earth. The reason I say this is like, it is jarring to think
link |
01:32:44.640
that, if they went extinct, we wouldn't be able to find evidence of them.
link |
01:32:49.040
After a sufficient amount of time. After a sufficient amount of time. Of course, there's like,
link |
01:32:54.000
like basically, if we humans, if human civilization destroyed itself now,
link |
01:32:59.920
after a sufficient amount of time, we would still find the evidence of the dinosaurs,
link |
01:33:04.480
but we would not find evidence of us humans. Yeah. That's kind of an odd thing to think about. Although
link |
01:33:09.840
I'm not sure if we have enough knowledge about species going back through billions of years,
link |
01:33:14.880
but we could, we could, we might be able to eliminate that possibility. But it's an interesting
link |
01:33:18.960
question. Of course, this is a similar question to, you know, were there lots of intelligent
link |
01:33:23.200
species throughout our galaxy that have all disappeared? Yeah. That's super sad, exactly that:
link |
01:33:31.280
there may have been much more intelligent alien civilizations in our galaxy that are no longer
link |
01:33:36.800
there. Yeah. You actually talked about this, that humans might destroy ourselves. Yeah. And how we
link |
01:33:44.720
might preserve our knowledge and advertise that knowledge to others. Advertise is a funny word
link |
01:33:54.480
to use. From a PR perspective. There's no financial gain in this.
link |
01:34:00.720
You know, like make it like from a tourism perspective, make it interesting. Can you
link |
01:34:04.560
describe how? Well, there's a couple things. I broke it down into two parts, actually three
link |
01:34:09.280
parts. One is, you know, there's a lot of things we know. What if we ended,
link |
01:34:17.040
what if our civilization collapsed? Yeah, I'm not talking about tomorrow. Yeah, it could be a thousand
link |
01:34:21.280
years from that. Like, you know, we don't really know. But historically, it would be likely at
link |
01:34:25.440
some point. Time flies when you're having fun. Yeah. You know, what if intelligent
link |
01:34:33.040
life evolved again on this planet? Wouldn't they want to know a lot about us and what we knew?
link |
01:34:37.440
But they wouldn't be able to ask us questions. So one very simple thing I said was,
link |
01:34:41.120
how would we archive what we know? That was a very simple idea. I said, you know what,
link |
01:34:44.960
it wouldn't be that hard to put a few satellites, you know, going around the sun and we upload
link |
01:34:48.960
Wikipedia every day and that kind of thing. So, you know, if we end up killing ourselves,
link |
01:34:53.920
well, it's up there and the next intelligent species will find it and learn something. They
link |
01:34:57.280
would like that. They would appreciate that. So that's one thing. The next thing I said, well,
link |
01:35:03.040
what about, you know, outside of our solar system? We have the SETI program. We're
link |
01:35:08.720
looking for these intelligent signals from everybody. And if you do a little bit of math,
link |
01:35:12.640
which I did in the book, and you say, well, what if intelligent species only live for 10,000 years
link |
01:35:18.240
as, you know, technologically intelligent species, like ones that are really able to do the
link |
01:35:21.840
tasks we're just starting to be able to do. Well, the chances are we wouldn't be able to see any of
link |
01:35:26.320
them because they would have all disappeared by now. They lived for 10,000 years and now
link |
01:35:31.040
they're gone. And so we're not going to find these signals being sent from these people. So
link |
01:35:36.080
I say, what kind of signal could you create that would last a million years or a billion years?
link |
01:35:41.120
That someone would say, damn it, someone smart lived there. We know that. That would be a life
link |
01:35:46.400
changing event for us to figure that out. Well, what we're looking for today in the SETI program
link |
01:35:50.080
isn't that. We're looking for very coded signals in some sense. And so I asked myself, what would
link |
01:35:54.800
be a different type of signal one could create? I've always thought about this throughout my life
link |
01:35:58.960
and in the book I gave one possible suggestion, which was we now detect planets going around
link |
01:36:06.400
other suns, other stars, excuse me. And we do that by seeing this slight dimming of the light as
link |
01:36:13.600
the planets move in front of them. That's how we detect planets elsewhere in our galaxy.
link |
01:36:20.000
What if we created something like that that just rotated around the sun and it blocked out a little
link |
01:36:26.480
bit of light in a particular pattern that someone said, hey, that's not a planet. That is a sign
link |
01:36:31.760
that someone was once there. You could say, what if it's beating out pi, 3.14159, whatever.
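A minimal sketch of the kind of pattern being described here, assuming an invented encoding in which each successive orbit blocks an amount of light proportional to the next digit of pi. The function names, depths, and encoding scheme below are illustrative assumptions only, not anything proposed in the book.

```python
# Hypothetical sketch: encode the leading digits of pi as a sequence of
# transit-like dimming depths, the kind of artificial "not a planet"
# pattern discussed above. Encoding scheme and numbers are invented.
import math

def pi_digits(n):
    """Return the first n decimal digits of pi as integers (3, 1, 4, ...)."""
    s = f"{math.pi:.15f}".replace(".", "")
    return [int(c) for c in s[:n]]

def dimming_schedule(digits, base_depth=0.001):
    """Map each digit d (0-9) to a fractional dimming depth.

    A natural planet produces one fixed depth every orbit; a sequence of
    depths tracking successive digits of pi would not look like any
    natural transit series.
    """
    return [base_depth * (d + 1) for d in digits]

if __name__ == "__main__":
    depths = dimming_schedule(pi_digits(8))
    for orbit, depth in enumerate(depths, start=1):
        print(f"orbit {orbit}: block {depth:.4%} of the star's light")
```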
link |
01:36:37.840
So it's detectable from a distance, it's broadly broadcast, and it takes no continued activation on our part. This
link |
01:36:45.360
is the key, right? No one has to be sitting there running a computer and supplying it with power.
link |
01:36:49.040
It just goes on. So once we're gone, it's still continuous. And I argued that part of the SETI program
link |
01:36:55.200
should be looking for signals like that. And to look for signals like that, you ought to figure
link |
01:36:58.960
out how would we create a signal? Like, what would we create that would be like that, that would
link |
01:37:03.840
persist for millions of years, that would be broadcast broadly that you could see from a
link |
01:37:07.760
distance, that was unequivocal, that came from an intelligent species. And so I gave that one
link |
01:37:13.760
example because I don't know of another one, actually. And then finally, right,
link |
01:37:18.560
if, if our, ultimately our solar system will die at some point in time, you know, how do we go
link |
01:37:26.640
beyond that? And I think, if it's at all possible, we'll have to create intelligent machines
link |
01:37:31.600
that travel throughout the solar system or throughout the galaxy. And I don't think that's
link |
01:37:36.880
going to be humans. I don't think it's going to be biological organisms. So these are just
link |
01:37:40.720
things to think about, you know, like, what's the, you know, I don't want to be like the dinosaur.
link |
01:37:44.560
I don't want to just live in, okay, that was it. We're done, you know. Well, there is a kind of
link |
01:37:48.480
presumption that we're going to live forever, which I think it is a bit sad to imagine that the
link |
01:37:55.600
message we send, as you talked about, is that we were once here instead of we are here. Well,
link |
01:38:04.000
it could be we are still here. But it's more of a, it's more of an insurance policy in case we're
link |
01:38:10.080
not here, you know? Well, I don't know, but there's something I think about, we humans don't often
link |
01:38:17.280
think about this, but it's like, like, whenever I record a video, I've done this a couple of times
link |
01:38:26.080
in my life, I've recorded a video for my future self, just for personal, just for fun. And it's
link |
01:38:30.640
always just fascinating to think about that, preserving yourself for future civilizations. For
link |
01:38:40.160
me, it was preserving myself for a future me, but that's a little, that's a little fun example
link |
01:38:46.320
of archival. Well, these podcasts are preserving you and I in a way, for future, hopefully well
link |
01:38:53.600
after we're gone. But you don't often, we're sitting here talking about this. You are not
link |
01:39:00.880
thinking about the fact that you and I are going to die, and there'll be like 10 years after somebody
link |
01:39:07.600
watching this, and we're still alive. You know, in some sense, I do. I'm here because I want to
link |
01:39:13.520
talk about ideas. And these ideas transcend me, and they transcend this time on our planet.
link |
01:39:21.520
We're talking here about ideas that could be around a thousand years from now or a million years
link |
01:39:27.440
from now. When I wrote my book, I had an audience in mind, and one of the clearest audiences was
link |
01:39:34.000
aliens. No, it was people reading this a hundred years from now. Yes. I said to myself, how do I
link |
01:39:39.760
make this book relevant to somebody reading this a hundred years from now? What would they want to
link |
01:39:44.000
know about what we were thinking back then? What would make it such that it's still
link |
01:39:48.880
an interesting book? I'm not sure I can achieve that, but that was how I thought about it, because
link |
01:39:54.080
these ideas, especially in the third part of the book, the ones we were just talking about,
link |
01:39:58.160
you know, these crazy, it sounds like crazy ideas about, you know, storing our knowledge and,
link |
01:40:01.920
and, you know, merging our brains with computers and sending, you know, our machines out into space,
link |
01:40:06.640
are not going to happen in my lifetime. And they may not happen in the next
link |
01:40:11.440
hundred years. They may not happen for a thousand years. Who knows? But we have the unique opportunity
link |
01:40:17.280
right now, we, you, me, and other people like this, to sort of at least propose the agenda
link |
01:40:25.200
that might impact the future like that. It's a fascinating way to think, both like writing or
link |
01:40:30.400
creating, try to make, try to create ideas, try to create things that hold up in time. Yeah. You
link |
01:40:39.680
know, understanding how the brain works, we're going to figure that out once. That's it. It's
link |
01:40:43.200
going to be figured out once. And after that, that's the answer. And people will, people will study
link |
01:40:48.000
that thousands of years from now. We still, you know, venerate Newton and Einstein,
link |
01:40:54.960
and, you know, because, because ideas are exciting even well into the future. Well, the interesting
link |
01:41:02.080
thing is like big ideas, even if they're wrong, are still useful. Like, yeah, especially if they're
link |
01:41:10.560
not completely wrong. Like you're right, right. Right. Newton's laws are not wrong. They're
link |
01:41:14.480
just that Einstein's are better. So, um, let's see. Yeah. I mean, but with Newton and
link |
01:41:20.960
Einstein, we're talking about physics. I wonder if we'll ever achieve that kind of clarity about
link |
01:41:25.760
understanding, um, like complex systems and the, this particular manifestation of complex systems,
link |
01:41:32.560
which is the human brain. I'm, I'm totally optimistic we can do that. I mean, we're making
link |
01:41:37.200
progress at it. I don't see any reason why we can't completely, I mean, completely understand in the
link |
01:41:42.400
sense, um, you know, we don't really completely understand what all the molecules in this water
link |
01:41:47.120
bottle are doing, but, you know, we have laws that sort of capture it pretty good. Um, and, uh,
link |
01:41:52.400
so we'll have that kind of understanding. I mean, it's not like you're going to have to know what
link |
01:41:55.680
every neuron in your brain is doing. Um, but enough to, um, first of all, to build it and second of
link |
01:42:02.880
all, to do, you know, do what physics does, which is like have concrete experiments where we can
link |
01:42:08.480
validate. This is happening right now. This is not some future
link |
01:42:13.520
thing. Um, you know, I'm very optimistic about it. I know about our work and what we're
link |
01:42:18.640
doing. We'll have to prove it to people. Um, but, um, I consider myself a rational person and, um,
link |
01:42:28.320
you know, until fairly recently, I wouldn't have said that, but right now I'm, where I'm sitting
link |
01:42:32.080
right now, I'm saying, you know, we, this is going to happen. There's, there's no big obstacles to
link |
01:42:36.320
it. Um, we finally have a framework for understanding what's going on in the cortex and, um, and
link |
01:42:42.560
that's liberating. It's, it's like, oh, it's happening. So I can't see why we wouldn't be able
link |
01:42:48.320
to understand it. I just can't. Okay. Oh, so, I mean, on that topic, let me ask you to play devil's
link |
01:42:53.360
advocate. Is it possible for you to imagine looking a hundred years from now at your
link |
01:43:02.400
book, uh, in which ways might your ideas be wrong? Oh, I worry about this all the time. Um,
link |
01:43:12.320
yeah, it's still useful. Yeah. Yeah.
link |
01:43:16.320
Um, I think there's, you know, um, well, I can, I can best relate it to like things I'm worried
link |
01:43:24.800
about right now. So we talk about this voting idea, right? It's happening. There's, there's no
link |
01:43:29.600
question it's happening, but there are enough things I
link |
01:43:35.680
don't know about it that it might be working differently than I'm thinking about it:
link |
01:43:40.480
what kind of voting, who's voting, you know, where the representations are. I talked about,
link |
01:43:44.560
like you have a thousand models of a coffee cup like that. That could turn out to be wrong, um,
link |
01:43:49.840
because it may be, maybe there are a thousand models that are sub models, but not really a
link |
01:43:54.800
single model of the coffee cup. Um, I mean, there's things that these are all sort of on the edges,
link |
01:44:00.480
things that I present as like, oh, it's so simple and clean. Well, that's not that. It's always going
link |
01:44:04.480
to be more complex. And, um, and there's parts of the theory, which I don't understand the
link |
01:44:11.600
complexity well. So I think, I think the idea that the brain is a distributed modeling system is
link |
01:44:17.680
not controversial at all, right? That's not, that's well understood by many people. The question then
link |
01:44:22.400
is, is each cortical column an independent modeling system? Right. Um, I could be wrong about
link |
01:44:28.800
that. Um, I don't think so, but I worry about it. My intuition, not even thinking why you could be
link |
01:44:35.840
wrong is the same intuition I have about any sort of physics, uh, like string theory: that we,
link |
01:44:43.280
as humans, desire for a clean explanation. And, uh, a hundred years from now, uh, intelligent systems
link |
01:44:51.200
might look back at us and laugh at how we try to get rid of the whole mess by having simple
link |
01:44:57.760
explanation. When the reality is, it's, it's way messier. And in fact, it's impossible to understand
link |
01:45:04.320
you can only build it. It's like this idea of complex systems and cellular automata,
link |
01:45:09.040
you can only launch the thing, you cannot understand it. Yeah. I think that, you know,
link |
01:45:13.840
the history of science suggests that's not likely to occur. Um, the history of science suggests that
link |
01:45:19.760
like as a theorist and we're theorists, you look for simple explanations, right? Fully knowing
link |
01:45:25.920
that whatever simple explanation you're going to come up with is not going to be completely correct.
link |
01:45:30.720
I mean, it can't be. I mean, it's just, it's just more complexity. But that's the role of theorists
link |
01:45:35.840
play. They, they sort of, they give you a framework on which you now can talk about a problem and
link |
01:45:41.600
figure out, okay, now we can start digging more details. The best frameworks stick around while
link |
01:45:46.560
the details change. You know, again, you know, the classic example is Newton and Einstein, right?
link |
01:45:53.360
You know, um, Newton's theories are still used. They're still valuable. They're still practical.
link |
01:45:59.920
They're not like wrong. Just they've been refined. Yeah. But that's in physics. It's not obvious,
link |
01:46:05.120
by the way, it's not obvious for physics either that the universe should be such that's amenable
link |
01:46:10.400
to these simple, but it's so far it appears to be as far as we can tell. Um, yeah. I mean, but
link |
01:46:18.640
as far as we could tell, and, but it's also an open question whether the brain is amenable to
link |
01:46:23.040
such clean theories. That's the brain, but intelligence. Well, I, I, I don't know. I would
link |
01:46:28.960
take intelligence out of it. Just say, you know, um, well, okay. Um, the evidence we have suggested
link |
01:46:37.280
that the human brain is, is, is a, at the one time extremely messy and complex, but there's
link |
01:46:42.800
some parts that are very regular and structured. That's why we started with the neocortex. It's
link |
01:46:47.200
extremely regular in its structure. Yeah. And unbelievably so. And then I mentioned earlier,
link |
01:46:52.640
the other thing is it's, it's universal abilities. It is so flexible to learn so many things. We
link |
01:46:59.760
don't, we haven't figured out what it can't learn yet. We don't know, but we haven't figured out
link |
01:47:03.120
yet, but it learns things that it never evolved to learn. So those give us hope. Um, that's why
link |
01:47:08.960
I went into this field because I said, you know, this regular structure, it's doing this amazing
link |
01:47:14.880
number of things. There's got to be some underlying principles that are, that are common
link |
01:47:18.240
and other, other scientists have come up with the same conclusions. Um, and so it's promising.
link |
01:47:24.400
It's promising. And, um, and that's, and whether the theories play out exactly this way or not,
link |
01:47:31.440
that is the role that theorists play. And so far it's worked out well, even though, you know,
link |
01:47:36.480
maybe, you know, we don't understand all the laws of physics, but so far it's been pretty damn
link |
01:47:41.040
useful. The ones we have are, our theories are pretty useful. You mentioned that, uh, we should
link |
01:47:48.160
not necessarily be at least to the degree that we are worried about the existential risks of
link |
01:47:53.360
artificial intelligence, relative to the risk of human nature itself being the existential risk.
link |
01:48:02.720
What aspect of human nature worries you the most in terms of the survival of the human species?
link |
01:48:07.600
I mean, I'm disappointed in humanity as humans. I mean, all of us, I'm, I'm one, so I'm disappointed
link |
01:48:15.440
myself too. Um, it's kind of a sad state. There's, there's two things that disappoint me. One is
link |
01:48:24.880
how it's difficult for us to separate our rational component of ourselves from our evolutionary
link |
01:48:30.640
heritage, which is, you know, not always pretty. You know, rape is a, is an evolutionary good
link |
01:48:39.040
strategy for reproduction. Murder can be at times too. You know, making other people miserable
link |
01:48:45.760
at times is a good strategy for reproduction. And so now that we know
link |
01:48:51.040
that, and yet we have this sort of, you know, we, you and I can have this very rational discussion
link |
01:48:55.040
talking about, you know, intelligence and brains and life and so on. Somehow, it seems like it's so hard.
link |
01:49:00.720
It's just a big transition to get humans, all humans to, to make the transition from like,
link |
01:49:06.480
let's pay no attention to all that ugly stuff over here. Let's just focus on the
link |
01:49:11.600
intellect. What's unique about humanity is our knowledge and our intellect.
link |
01:49:16.160
But isn't the fact that we're striving itself amazing? The fact that we're able to
link |
01:49:21.200
overcome that part and it seems like we are more and more becoming successful at overcoming that
link |
01:49:28.240
part. That is the optimistic view and I agree with you. Yeah. But I worry about it. I'm not saying,
link |
01:49:33.120
I'm worrying about it. I think that was your question. I still worry about it. Yes. You know,
link |
01:49:37.040
we could end tomorrow because some terrorists could get nuclear bombs and, you know,
link |
01:49:40.800
blow us all up. Who knows, right? The other thing I think I'm disappointed is, and it's just,
link |
01:49:45.920
I understand it. It's, I guess you can't really be disappointed. It's just a fact,
link |
01:49:49.120
is that we're so prone to false beliefs. We, you know, we have a model in our head,
link |
01:49:55.360
the things we can interact with directly, physical objects, people, that model is pretty good.
link |
01:50:01.440
And we can test it all the time, right? I touch something, I look at it, talk to you, see
link |
01:50:05.600
my model is correct. But so much of what we know is stuff I can't directly interact with. I can't,
link |
01:50:10.640
I only know about it because someone told me about it. And so we're inherently prone to
link |
01:50:16.800
having false beliefs because if I'm told something, how am I going to know it's right or wrong, right?
link |
01:50:21.920
And so then we have the scientific process, which says we are inherently flawed. So the only way we
link |
01:50:27.840
can get closer to the truth is by looking for contrary evidence. Yeah. Like this conspiracy
link |
01:50:37.920
theory, this, this theory that scientists keep telling me about that the earth is round.
link |
01:50:42.480
As far as I can tell, when I look out, it looks pretty flat. Yeah. So yeah, there's, there's
link |
01:50:49.280
a tension, but it's also, I tend to believe that we haven't figured out most of this thing, right?
link |
01:50:58.480
Most of nature around us is a mystery. Does that worry you?
link |
01:51:04.000
I mean, it's like, oh, that's more pleasure, more to figure out, right? Yeah,
link |
01:51:07.600
that's exciting. But I'm saying like, there's going to be a lot of quote unquote, wrong ideas.
link |
01:51:13.520
I mean, I've been thinking a lot about engineering systems like social networks and so on. And I've
link |
01:51:19.600
been worried about censorship and thinking through all that kind of stuff because there's a lot of
link |
01:51:24.320
wrong ideas. There's a lot of dangerous ideas, but then I also read history and see
link |
01:51:32.400
when you censor ideas that are wrong. Now, this could be a small scale censorship, like a young
link |
01:51:38.880
grad student who comes up, who like raises their hand and says some crazy idea. It's a form of
link |
01:51:45.040
censorship, it could be, I shouldn't use the word censorship, but like disincentivizing them:
link |
01:51:51.280
no, no, no, no, this is the way it's been done. Yeah, you're foolish kid, don't think so. Yeah,
link |
01:51:55.040
you're foolish. So in some sense, those wrong ideas most of the time end up being wrong,
link |
01:52:04.320
but sometimes end up being right. I agree with you. So I don't like the word censorship.
link |
01:52:09.360
At the very end of the book, I ended up with a sort of a plea, or a recommended course of action.
link |
01:52:17.520
And the best way I know how to deal with this issue that you bring up
link |
01:52:22.640
is if everybody understood, as part of your upbringing in life, something about how your brain
link |
01:52:28.640
works, that it builds a model of the world, how it works, how basically it builds that model of the
link |
01:52:34.000
world, and that the model is not the real world. It's just a model. And it's never going to reflect
link |
01:52:39.600
the entire world, and it can be wrong, and it's easy to be wrong. And here's all the ways you
link |
01:52:44.080
can get the wrong model in your head, right? It's not to prescribe what's right or wrong,
link |
01:52:49.520
it's just to understand that process. If we all understood the process, and then I got together
link |
01:52:54.480
and you said, I disagree with you, Jeff, and I said, Lex, I disagree with you, that at least we
link |
01:52:58.560
understand that we're both trying to model something. We both have different information
link |
01:53:03.440
which leads to our different models. And therefore, I shouldn't hold it against you,
link |
01:53:06.560
and you shouldn't hold it against me. And we can at least agree that, well, what can we look for
link |
01:53:11.040
that's common ground to test our beliefs. As opposed to, so much of how we raise our kids is on dogma,
link |
01:53:18.640
which is this is a fact, and this is a fact, and these people are bad. And if everyone knew just
link |
01:53:27.200
to be skeptical of every belief and why and how their brains do that, I think we might have a
link |
01:53:33.360
better world. Do you think the human mind is able to comprehend reality? So you talk about
link |
01:53:41.520
this creating models that are better and better. How close do you think we get to reality? So
link |
01:53:49.520
the wildest ideas is like Donald Hoffman saying, we're very far away from reality. Do you think
link |
01:53:55.040
we're getting close to reality? Well, I guess it depends on how you define reality. We have a
link |
01:54:01.280
model of the world that's very useful for basic goals of survival. Well, for our survival and
link |
01:54:07.440
our pleasure, right? So that's useful. I mean, it's really useful. Oh, we can build planes,
link |
01:54:15.040
we can build computers, we can do these things. I don't think, I don't know the answer to that
link |
01:54:20.960
question. I think that's part of the question we're trying to figure out. Obviously, if we
link |
01:54:27.120
end up with a theory of everything that really is a theory of everything, and all of a sudden,
link |
01:54:32.000
everything comes into play and there's no room for something else, then you might feel like we
link |
01:54:35.760
have a good model of the world. Yeah, but if we have a theory of everything and somehow, first of
link |
01:54:40.160
all, you'll never be able to really conclusively say it's a theory of everything, but say somehow
link |
01:54:44.480
we are very damn sure it's a theory of everything. We understand what happened at the Big Bang and how
link |
01:54:50.480
just the entirety of the physical process. I'm still not sure that gives us an understanding of
link |
01:54:56.560
the next many layers of the hierarchy of abstractions that form. Well, also, what if string
link |
01:55:03.280
theory turns out to be true, and then you say, well, we have no reality, no modeling of what's
link |
01:55:08.960
going on in those other dimensions that are wrapped in on each other, right? Or the multiverse,
link |
01:55:14.880
you know? I honestly don't know how for us, for human interaction, for ideas of intelligence,
link |
01:55:21.520
how it helps us to understand that we're made up of vibrating strings that are
link |
01:55:26.720
like 10 to the whatever times smaller than us. You could probably build better
link |
01:55:32.960
weapons or better rockets, but you're not going to be able to understand intelligence.
link |
01:55:36.640
I guess maybe better computers. No, you won't be able to. I think it's just more purely knowledge.
link |
01:55:41.680
You might lead to a better understanding of the beginning of the universe,
link |
01:55:46.160
right? It might lead to a better understanding of, I don't know. I think the acquisition of
link |
01:55:52.720
knowledge has always been one where you pursue it for its own pleasure and you don't always know
link |
01:56:01.840
what is going to make a difference. You're pleasantly surprised by the weird things you find.
link |
01:56:07.680
Do you think, for the neocortex in general, do you think there's a lot of innovation
link |
01:56:13.200
to be done on the machine side? You use the computer as a metaphor quite a bit. Are there
link |
01:56:19.760
different types of computers that would help us build intelligence? What are the physical
link |
01:56:23.600
manifestations of intelligent machines? Oh, no, it's going to be totally crazy. We have no idea
link |
01:56:30.880
how this is going to play out yet. You can already see this. Today, of course, we model these things
link |
01:56:36.640
on traditional computers, and now GPUs are really popular with neural networks and so on.
link |
01:56:44.960
But there are companies coming up with fundamentally new physical substrates
link |
01:56:50.400
that are just really cool. I don't know if they're going to work or not,
link |
01:56:54.240
but I think there'll be decades of innovation here, totally.
link |
01:56:58.560
Do you think the final thing will be messy, like our biology is messy? Or do you think
link |
01:57:05.120
it's the old bird versus airplane question? Or do you think we could just
link |
01:57:12.000
build airplanes that fly way better than birds in the same way we can build
link |
01:57:20.640
an electrical neocortex? Can I riff on the bird thing a bit? Because I think it's
link |
01:57:26.000
interesting. People really misunderstand this. The Wright brothers, the problem they were trying
link |
01:57:33.040
to solve was controlled flight, how to turn an airplane, not how to propel an airplane.
link |
01:57:38.320
They weren't worried about that. At that time, there were already wing shapes,
link |
01:57:42.880
which they had from studying birds. There were already gliders that carried people.
link |
01:57:46.640
The problem was, if you put a rudder on the back of a glider and you turn it, the plane falls out
link |
01:57:50.160
of the sky. The problem was, how do you control flight? They studied birds. They actually had
link |
01:57:57.360
birds in captivity. They watched birds in wind tunnels. They observed them in the wild. They
link |
01:58:01.040
discovered the secret was the birds twist their wings when they turn. That's what they did on
link |
01:58:06.240
the Wright brothers' flyer. They had these sticks that you would use to twist the wing. That was their
link |
01:58:10.240
innovation, not their propeller. Today, airplanes still twist their wings. We don't twist the entire
link |
01:58:16.000
wing. We just twist the tail end of it, the flaps, which is the same thing. Today's airplanes
link |
01:58:21.520
fly on the same principles as birds, which we observed. Everyone gets that analogy wrong.
link |
01:58:26.720
Let's step back from that. Once you understand the principles of flight,
link |
01:58:31.520
you can choose how to implement them. No one's going to use bones and feathers and muscles,
link |
01:58:37.680
but they do have wings. We don't flap them. We have propellers. When we have the principles
link |
01:58:43.040
of computation that go into modeling the world in a brain, and we understand those principles
link |
01:58:48.880
very clearly, we will have choices on how to implement them. Some of them will be biological-like and some
link |
01:58:53.760
won't. I do think there's going to be a huge amount of innovation here. Just think about
link |
01:59:00.000
the innovation in the computer world. They had to invent the transistor. They had to invent the silicon
link |
01:59:05.680
chip. They had to invent software, memory systems, all the things they had to do.
link |
01:59:11.280
It's going to be similar. It's interesting that the effectiveness of deep learning for
link |
01:59:19.760
specific tasks is driving a lot of innovation in the hardware, which may have effects
link |
01:59:25.520
for actually allowing us to discover intelligent systems that operate very differently or
link |
01:59:31.360
much bigger than deep learning. Ultimately, it's good to have an application that's making our
link |
01:59:37.680
life better now because the capitalist process, if you can make money, that works. The other way,
link |
01:59:46.240
Neil deGrasse Tyson writes about this, is the other way we fund science, of course, is through
link |
01:59:50.320
military conquest. It's an interesting thing that we're doing in this regard. We have
link |
01:59:57.920
a series of biological principles. We can see how to build these intelligent machines,
link |
02:00:01.360
but we've decided to apply some of these principles to today's machine learning techniques. One
link |
02:00:08.240
principle that we didn't talk about is sparsity in the brain. Most of the neurons
link |
02:00:12.880
are inactive at any point in time. It's sparse and the connectivity is sparse. That's different
link |
02:00:16.080
than deep learning networks. We've already shown that we can speed up existing deep learning
link |
02:00:21.920
networks anywhere from a factor of 10 to a factor of 100, literally 100, and make them more robust at the
link |
02:00:29.680
same time. This is commercially very, very valuable. If we can prove this actually in the
link |
02:00:38.160
largest systems that are commercially applied today, there's a big commercial desire to do this.
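A toy sketch of what imposing this kind of activation sparsity on an existing network can look like: a k-winners-take-all step that keeps only a small fraction of units active. This is an illustrative example in plain NumPy, assuming a 5% sparsity level; it is not Numenta's actual implementation.

```python
# Toy sketch of a k-winners-take-all (k-WTA) activation, one common way to
# impose the kind of activation sparsity described above. Illustrative only.
import numpy as np

def k_winners_take_all(activations, sparsity=0.05):
    """Keep only the top-k activations per example, zero out the rest.

    activations: array of shape (batch, units)
    sparsity: fraction of units allowed to stay active (e.g. 5%)
    """
    batch, units = activations.shape
    k = max(1, int(units * sparsity))
    out = np.zeros_like(activations)
    # indices of the k largest activations in each row
    top_idx = np.argpartition(activations, -k, axis=1)[:, -k:]
    rows = np.arange(batch)[:, None]
    out[rows, top_idx] = activations[rows, top_idx]
    return out

# Example: a dense layer output where only ~5% of units remain active.
x = np.random.randn(2, 200)
sparse_x = k_winners_take_all(x, sparsity=0.05)
print((sparse_x != 0).sum(axis=1))  # number of active units per example (k = 10)
```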
link |
02:00:43.760
Well, sparsity is something that doesn't run really well on existing hardware. It doesn't
link |
02:00:50.240
run really well on GPUs and on CPUs. That would be a way of bringing more brain
link |
02:01:00.000
principles into the existing system on a commercially valuable basis. Another thing we
link |
02:01:04.640
think we can do is use these dendrites. I talked earlier about the prediction
link |
02:01:11.760
occurring from inside and around. That basic property can be applied to existing neural networks
link |
02:01:17.120
and allow them to learn continuously, which is something they don't do today.
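A very rough sketch of the "dendrites added to point neurons" idea as it might look in code: each unit gets a few dendritic segments holding context vectors, and the best-matching segment gates the unit's feedforward output, so different contexts engage different weights. Everything below (the class name, sizes, and the sigmoid gate) is an assumption for illustration, not the model described in this conversation.

```python
# Toy sketch of a unit with dendritic segments gating its output.
# Each segment holds a context vector; the best-matching segment modulates
# the feedforward response. Illustrative only, not Numenta's implementation.
import numpy as np

class DendriticUnit:
    def __init__(self, n_inputs, n_context, n_segments, rng=None):
        if rng is None:
            rng = np.random.default_rng(0)
        self.w_ff = rng.normal(size=n_inputs) * 0.1        # feedforward weights
        self.segments = rng.normal(size=(n_segments, n_context)) * 0.1

    def forward(self, x, context):
        ff = self.w_ff @ x                                 # feedforward drive
        match = self.segments @ context                    # per-segment context match
        gate = 1.0 / (1.0 + np.exp(-match.max()))          # sigmoid of the best match
        return ff * gate                                   # dendrite-gated output

# Different contexts (e.g. different tasks) engage different segments,
# which is the property claimed to help with continual learning.
rng = np.random.default_rng(1)
unit = DendriticUnit(n_inputs=16, n_context=8, n_segments=4)
x = rng.normal(size=16)
print(unit.forward(x, context=rng.normal(size=8)))
print(unit.forward(x, context=rng.normal(size=8)))  # a different context gates differently
```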
link |
02:01:20.720
The dendritic spikes that you were talking about.
link |
02:01:23.520
Yeah. Well, we wouldn't model the spikes, but the idea is that today's neural networks
link |
02:01:28.960
use point neurons, which is a very simple model of a neuron. By adding dendrites to them,
link |
02:01:34.320
just one more level of complexity that's in biological systems, you can solve problems
link |
02:01:39.280
in continuous learning and rapid learning. We're trying to bring the existing
link |
02:01:46.720
field. We'll see if we can do it. We're trying to bring the existing field of machine learning
link |
02:01:52.240
commercially along with us. You brought up this idea of it paying for itself commercially along
link |
02:01:56.720
with us as we move towards the ultimate goal of a true AI system. Even small innovations on
link |
02:02:01.520
neural networks are really, really exciting. It seems like such a trivial model of the brain
link |
02:02:08.160
and applying different insights that just even, like you said, continuous learning or making it
link |
02:02:17.520
more asynchronous or maybe making more dynamic or incentivizing. Or more robust. Even just
link |
02:02:25.760
making it more robust. And making it somehow much better, incentivizing sparsity somehow.
link |
02:02:33.760
Yeah. Well, if you can make things 100 times faster, then there's plenty of incentive.
link |
02:02:40.080
People are spending millions of dollars just training some of these networks now,
link |
02:02:44.320
these transformer networks.
link |
02:02:46.800
Let me ask you a big question. For young people listening to this today in high school and college,
link |
02:02:53.520
what advice would you give them in terms of which career path to take and maybe just about life in
link |
02:03:01.520
general? Well, in my case, I didn't start life with any kind of goals. I was, when I was going to
link |
02:03:09.680
college, I was like, oh, what do I study? Well, maybe I'll do electrical engineering stuff.
link |
02:03:15.120
I wasn't like, today you see some of these young kids who are so motivated to change the world. I was like,
link |
02:03:18.880
hey, whatever. But then I did fall in love with something besides my wife. But I fell in love
link |
02:03:26.880
with this like, oh my God, it would be so cool to understand how the brain works.
link |
02:03:30.240
And then I said to myself, that's the most important thing I could work on. I can't
link |
02:03:34.480
imagine anything more important because if you understand how the brain's working,
link |
02:03:37.360
you can build intelligent machines and they could figure out all the other big questions in the world.
link |
02:03:41.120
So, and then I said, I want to understand how I work. So I fell in love with this idea and I
link |
02:03:45.440
became passionate about it. And this is a trope people say this, but it's true.
link |
02:03:52.880
Because I was passionate about it, I was able to put up with almost so much crap.
link |
02:03:57.680
You know, I was in that situation where, you know, every
link |
02:04:02.080
person said, you can't do this. I was a graduate student at Berkeley when they said,
link |
02:04:05.600
you can't study this problem. You know, no one can solve this, or you can't get funded for it.
link |
02:04:09.920
You know, then I went to do, you know, mobile computing, and there were people saying,
link |
02:04:13.040
you can't do that. You can't build a cell phone. You know, so, but all along I kept being motivated
link |
02:04:18.880
because I wanted to work on this problem. I said, I want to understand the brain works.
link |
02:04:22.320
I told myself, you know, I've got one lifetime, I'm going to figure it out, or do the best I can.
link |
02:04:26.240
So by having that, because you know, these things, it's really, as you point out, Lex,
link |
02:04:31.680
it's really hard to do these things. People, it's just, there's so many downers along the way.
link |
02:04:36.880
So many obstacles get in your way. Yeah, I'm sitting here happy all the time,
link |
02:04:40.080
but trust me, it's not always like that.
link |
02:04:42.080
That's, I guess, where the happiness, the passion, is a prerequisite for surviving the whole thing.
link |
02:04:47.520
Yeah, I think so. I think that's right. And so I don't want to sit here and tell someone, you know,
link |
02:04:53.120
you need to find a passion and do it. No, maybe you don't. But if you do find something you're
link |
02:04:57.920
passionate about, then, then you can follow it as far as your passion will let you put up with it.
link |
02:05:04.000
Do you remember how you found it, how the spark happened?
link |
02:05:08.960
Why, specifically for me?
link |
02:05:10.800
Yeah, like, because you said, it's such an interesting, so like almost like later in life,
link |
02:05:14.800
by later, I mean, like not when you were five, you, you didn't really know. And then all of a
link |
02:05:20.560
sudden you fell in love with it. Yeah, yeah. There were two separate events that
link |
02:05:24.160
compounded one another. One, when I was probably a teenager, it might have been 17 or 18,
link |
02:05:29.920
I made a list of the most interesting problems I could think of. First was, why does the universe
link |
02:05:35.360
exist? Seems like not existing is more likely. Yeah. The second one was, well, given that it exists,
link |
02:05:40.320
why does it behave the way it does? You know, the laws of physics, why is E equal to mc squared,
link |
02:05:44.000
not mc cubed? You know, that's an interesting question. I don't know. The third one was,
link |
02:05:47.760
what's the origin of life? And the fourth one was what's intelligence? And I stopped there.
link |
02:05:53.840
I said, well, that's probably the most interesting one. And I put that aside
link |
02:05:58.160
as a teenager. But then when I was 22, I was reading the, no, excuse me,
link |
02:06:05.520
it was 1979. I was at that time 22. I was reading
link |
02:06:12.960
the September issue of Scientific American, which is all about the brain. And then the
link |
02:06:16.880
final essay was by Francis Crick, of DNA fame, who had taken his interest
link |
02:06:24.000
to studying the brain now. And he said, you know, there's something wrong here. He says,
link |
02:06:29.280
we've got all this data, all these facts, this is 1979, all these facts about the brain,
link |
02:06:34.960
tons and tons of facts about the brain. Do we need more facts? Or do we just need to think
link |
02:06:40.000
about a way of rearranging the facts we have? Maybe we're just not thinking about the problem
link |
02:06:43.680
correctly. You know, he says, this shouldn't be, this shouldn't be like this, you know?
link |
02:06:50.160
So I read that and I said, wow. I said, I don't have to become like an experimental
link |
02:06:55.840
neuroscientist. I could just look at all those facts and try to become a theoretician and try
link |
02:07:02.000
to figure it out. And I said, that, I felt like it was something I would be good at. I said,
link |
02:07:07.360
I wouldn't be a good experimentalist. I don't have the patience for it. But I'm a good thinker
link |
02:07:12.320
and I love puzzles. And this is like the biggest puzzle in the world. This is the biggest puzzle
link |
02:07:16.560
of all time. And I got all the puzzle pieces in front of me. Damn, that was exciting.
link |
02:07:21.520
And there's something obviously you can't convert into words. They just kind of
link |
02:07:25.920
sparked this passion. And I've had that a few times in my life, just something
link |
02:07:32.000
just, just like you, it grabs you. Yeah. I thought it was something that was both important
link |
02:07:38.160
and that I could make a contribution to. And so all of a sudden it felt like, oh, it gave me purpose
link |
02:07:42.640
in life, you know? I honestly don't think it has to be as big as one of those four questions.
link |
02:07:48.800
You can find those things in the smallest of things. Oh, absolutely. David Foster Wallace said,
link |
02:07:54.400
like, the key to life is to be unboreable. I think it's very possible to find that
link |
02:08:00.480
intensity of joy in the smallest thing. Absolutely. I'm just, you asked me my story.
link |
02:08:04.640
Yeah. No, but I'm actually speaking to the audience. It doesn't have to be those four.
link |
02:08:08.960
You happen to get excited by one of the bigger questions in the universe. But
link |
02:08:16.160
even the smallest things. And watching the Olympics now, just giving yourself,
link |
02:08:22.480
giving your life over to the study and the mastery of a particular sport is fascinating.
link |
02:08:27.600
And if it sparks joy and passion, you're able to, in the case of the Olympics,
link |
02:08:33.680
basically suffer for like a couple of decades to achieve. I mean, you can find joy and passion
link |
02:08:37.920
just being a parent. I mean, yeah, the parenting one is funny. So I have not always, but for a long
link |
02:08:44.400
time, wanted kids and to get married and stuff. And especially it has to do with the fact that
link |
02:08:49.600
I've seen a lot of people that I respect get a whole other level of joy from kids. And at,
link |
02:08:57.920
you know, at first, you're thinking, well, like, I don't have enough time in the day,
link |
02:09:05.040
right? If I have this passion to solve intelligence. Which is true.
link |
02:09:09.280
But like, if I want to solve intelligence, how's this kid situation going to help me?
link |
02:09:14.080
But then you realize that, you know, like you said, the things that spark joy, and it's very
link |
02:09:22.640
possible that kids can provide even a greater or deeper, more meaningful joy than those bigger
link |
02:09:29.280
questions, when they enrich each other. And that, obviously, when I was younger,
link |
02:09:34.560
was probably a counterintuitive notion because there's only so many hours in the day. But then
link |
02:09:39.120
life is finite and you have to pick the things that give you joy.
link |
02:09:44.720
But you also understand you can be patient too. I mean, it's finite, but we do have, you know,
link |
02:09:50.800
whatever, 50 years or something. It's us alone. Yeah. So in my case, you know, in my case,
link |
02:09:55.920
I had to give up on my dream of neuroscience because I was a graduate student at Berkeley
link |
02:10:00.560
and they told me I couldn't do this and I couldn't get funded. And, you know, and,
link |
02:10:04.480
and so I went back into the computing industry for a number of years. I
link |
02:10:09.200
thought it would be four, but it turned out to be more. But I said, but I said, I'll come back.
link |
02:10:13.440
You know, I definitely, I'm definitely going to come back. I know I'm going to do this computer
link |
02:10:16.400
stuff for a while, but I'm definitely coming back. Everyone knows that. And it's the same
link |
02:10:19.920
as raising kids. Well, yeah, you still, you have to spend a lot of time with your kids. It's fun,
link |
02:10:23.600
enjoyable. But that doesn't mean you have to give up on other dreams. It just means that you have
link |
02:10:29.120
to wait a week or two to work on that next idea. You talked about the darker side, the
link |
02:10:37.680
disappointing sides of human nature that we're hoping to overcome so that we don't destroy
link |
02:10:42.880
ourselves. I tend to put a lot of value in the broad general concept of love, of the human capacity
link |
02:10:52.480
for compassion towards each other, of just kindness, whatever that longing of, like, just
link |
02:10:59.520
human-to-human connection is. It connects back to our initial discussion. I tend to see a lot
link |
02:11:05.600
of value in this collective intelligence aspect. I think some of the magic of human civilization
link |
02:11:10.080
happens when we're together; a party is not as fun when you're alone. I totally agree with you on these
link |
02:11:16.720
issues. Do you think, from a neocortex perspective, what role does love play in the
link |
02:11:23.840
human condition? Well, those are two separate things. From the neocortex perspective,
link |
02:11:29.040
it doesn't impact our thinking about the neocortex. From a human condition point of view,
link |
02:11:33.680
I think it's core. I mean, we get so much pleasure out of loving people and helping people.
link |
02:11:45.600
I'll rack it up to old brain stuff, and maybe we can throw it under the bus of evolution,
link |
02:11:50.320
if you want. That's fine. It doesn't impact how we think about how we model the world,
link |
02:11:58.320
but from a humanity point of view, I think it's essential.
link |
02:12:00.800
Well, I tend to give it to the new brain, and also I tend to think that some aspects of that
link |
02:12:06.480
need to be engineered into AI systems, both in their ability to have compassion for other humans
link |
02:12:16.080
and their ability to maximize love in the world between humans. I'm more thinking about the social
link |
02:12:24.560
networks. Whenever there's a deep integration between AI systems and humans, there are specific
link |
02:12:29.680
applications where it's AI and humans. I think that's something that's often not talked about in
link |
02:12:36.640
terms of the metrics you try to maximize, which metric to maximize in a system. It seems
link |
02:12:47.120
like one of the most powerful things in societies is the capacity to love.
link |
02:12:55.120
It's a great way of thinking about it. I have been thinking more about these fundamental
link |
02:13:01.840
mechanisms in the brain as opposed to the social interaction between humans and AI systems in
link |
02:13:07.600
the future. If you think about that, you're absolutely right, but that's a complex system.
link |
02:13:14.080
I can have intelligent systems that don't have that component, but they're not interacting
link |
02:13:17.360
with people. They're just running something or building a building someplace or something,
link |
02:13:21.200
I don't know. If you think about interacting with humans, it has to be engineered in there.
link |
02:13:28.080
I don't think it's going to appear on its own. That's a good question.
link |
02:13:35.040
In terms of, from a reinforcement learning perspective, whether the darker
link |
02:13:43.840
sides of human nature or the better angels of our nature win out, statistically speaking,
link |
02:13:49.520
I don't know. I tend to be optimistic and hope that love wins out in the end.
link |
02:13:54.480
You've done a lot of incredible stuff. Your book is driving towards this fourth question
link |
02:14:02.880
that you started with on the nature of intelligence. What do you hope your legacy is
link |
02:14:08.880
for people reading 100 years from now? How do you hope they remember your work?
link |
02:14:14.880
How do you hope they remember this book? Well, I think as an entrepreneur or a scientist or
link |
02:14:21.520
any human who's trying to accomplish some things, I have a view that really all you can do is
link |
02:14:28.320
accelerate the inevitable. It's like, if we didn't figure out, if we didn't study the brain,
link |
02:14:34.640
someone else would study the brain. If Elon just didn't make electric cars, someone else would do
link |
02:14:38.560
it eventually. If Thomas Edison didn't invent a light bulb, we wouldn't be using candles today.
link |
02:14:44.800
What you can do as an individual is you can accelerate something that's beneficial and make
link |
02:14:51.200
it happen sooner than whatever. That's really it. That's all you can do. You can't create a new
link |
02:14:56.240
reality that wasn't going to happen. From that perspective, I would hope that our work,
link |
02:15:04.720
not just me, but our work in general, people would look back and say, hey, they really helped make
link |
02:15:10.480
this better future happen sooner. They helped us understand the nature of false beliefs sooner
link |
02:15:17.680
than we otherwise would have. Now, we're so happy that we have these intelligent machines doing these things,
link |
02:15:21.920
helping us, that maybe they solved the climate change problem, and they made it happen sooner.
link |
02:15:28.240
I think that's the best I would hope for. Some would say, those guys just moved the needle forward
link |
02:15:33.760
a little bit in time. Well, it feels like the progress of human civilization is not,
link |
02:15:42.000
there are a lot of trajectories. If you have individuals that accelerate towards one direction,
link |
02:15:50.000
that helps steer human civilization. I think, in the long stretch of time, all trajectories will
link |
02:15:56.960
be traveled, but I think it's nice for this particular civilization on earth to travel down
link |
02:16:01.920
one that's not. Yeah. Well, I think you're right. We have to take the whole period of World War II
link |
02:16:06.320
and Nazism or something like that. Well, that was a bad side step. We've been on with that for a
link |
02:16:10.400
while, but there is the optimistic view about life that ultimately it does converge in a positive
link |
02:16:17.360
way. It progresses ultimately, even if we have years of darkness. So I think, perhaps,
link |
02:16:26.080
accelerating the positive could also mean eliminating some bad missteps along the way,
link |
02:16:31.040
too. But I'm an optimist in that way. Despite the fact that we talked about the end of civilization,
link |
02:16:39.200
I think we're going to live for a long time. I hope we are. I think our society in the future
link |
02:16:43.680
is going to be better. We're going to have less discord. We're going to have less people killing
link |
02:16:46.400
each other. We'll manage to live in some way that's compatible with the carrying capacity of the
link |
02:16:52.080
earth. I'm optimistic these things will happen. All we can do is try to get there sooner.
link |
02:16:57.760
At the very least, if we do destroy ourselves, we'll have a few satellites that will tell alien
link |
02:17:06.400
civilizations that we were once here. Or maybe the future inhabitants of earth. Imagine the
link |
02:17:14.160
Planet of the Apes scenario. We kill ourselves a million years from now or a billion years from
link |
02:17:18.000
now. There's another species on the planet. They're curious about the creatures who were once here.
link |
02:17:22.560
Jeff, thank you so much for your work and thank you so much for talking to me once again.
link |
02:17:27.440
Well, it's great. I love what you do. I love your podcast. You have those interesting people
link |
02:17:31.440
me aside. It's a real service you do, I think, in a very broad sense, for humanity.
link |
02:17:40.320
Thanks, Jeff. All right. It's a pleasure. Thanks for listening to this conversation with Jeff
link |
02:17:44.880
Hawkins. And thank you to Code Academy, Bio Optimizers, ExpressVPN, A Sleep, and Blinkist.
link |
02:17:52.880
Check them out in the description to support this podcast. And now let me leave you with some words
link |
02:17:58.720
from Albert Camus. An intellectual is someone whose mind watches itself. I like this because I'm
link |
02:18:07.120
happy to be both halves, the watcher and the watched. Can they be brought together? This
link |
02:18:13.600
is a practical question we must try to answer. Thank you for listening. I hope to see you next time.