
Jeff Hawkins: The Thousand Brains Theory of Intelligence | Lex Fridman Podcast #208



link |
00:00:00.000
The following is a conversation with Jeff Hawkins, a neuroscientist seeking to understand
link |
00:00:05.440
the structure, function, and origin of intelligence in the human brain.
link |
00:00:10.080
He previously wrote a seminal book on the subject titled On Intelligence, and recently a new book
link |
00:00:16.720
called A Thousand Brains, which presents a new theory of intelligence that Richard Dawkins,
link |
00:00:22.560
for example, has been raving about, calling the book quote brilliant and exhilarating.
link |
00:00:28.400
I can't read those two words and not think of him saying it in his British accent.
link |
00:00:34.160
Quick mention of our sponsors, Codecademy, BiOptimizers, ExpressVPN, Eight Sleep, and Blinkist.
link |
00:00:41.280
Check them out in the description to support this podcast.
link |
00:00:44.560
As a side note, let me say that one small but powerful idea that Jeff Hawkins mentions
link |
00:00:49.360
in his new book is that if human civilization were to destroy itself, all of knowledge,
link |
00:00:54.960
all our creations will go with us. He proposes that we should think about how to save that
link |
00:01:00.880
knowledge in a way that long outlives us, whether that's on Earth, in orbit around Earth,
link |
00:01:07.040
or in deep space, and then to send messages that advertise this backup of human knowledge
link |
00:01:13.040
to other intelligent alien civilizations. The main message of this advertisement is not that
link |
00:01:19.600
we are here, but that we were once here. This little difference somehow was deeply humbling
link |
00:01:28.240
to me, that we may, with some nonzero likelihood, destroy ourselves, and that an alien civilization
link |
00:01:34.960
thousands or millions of years from now may come across this knowledge store, and they
link |
00:01:40.240
would only with some low probability even notice it, not to mention be able to interpret it.
link |
00:01:45.360
And the deeper question here for me is what information in all of human knowledge is even
link |
00:01:49.840
essential? Does Wikipedia capture it or not at all? This thought experiment forces me
link |
00:01:55.600
to wonder what are the things we've accomplished and are hoping to still accomplish that will
link |
00:02:00.400
outlive us? Is it things like complex buildings, bridges, cars, rockets? Is it ideas like science,
link |
00:02:08.560
physics, and mathematics? Is it music and art? Is it computers, computational systems,
link |
00:02:15.440
or even artificial intelligence systems? I personally can't imagine that aliens wouldn't
link |
00:02:20.800
already have all of these things, in fact much more and much better. To me, the only
link |
00:02:27.120
unique thing we may have is consciousness itself, and the actual subjective experience
link |
00:02:32.560
and the actual subjective experience of suffering, of happiness, of hatred, of love. If we can
link |
00:02:39.200
record these experiences in the highest resolution directly from the human brain, such that aliens
link |
00:02:44.000
will be able to replay them, that is what we should store and send as a message. Not
link |
00:02:49.760
Wikipedia, but the extremes of conscious experiences, the most important of which, of course, is
link |
00:02:56.640
love. This is the Lex Fridman podcast, and here is my conversation with Jeff Hawkins.
link |
00:03:04.080
We previously talked over two years ago. Do you think there's still neurons in your brain
link |
00:03:09.760
that remember that conversation, that remember me and got excited? Like there's a Lex neuron
link |
00:03:15.600
in your brain that just like finally has a purpose? I do remember our conversation. I
link |
00:03:19.920
have some memories of it, and I formed additional memories of you in the meantime. I wouldn't
link |
00:03:26.480
say there's a neuron or neurons in my brain that know you. There are synapses in my brain
link |
00:03:31.360
that have formed that reflect my knowledge of you and the model I have of you in the
link |
00:03:36.800
world. Whether the exact same synapses were formed two years ago, it's hard to say because
link |
00:03:41.520
these things come and go all the time. One of the things to know about brains is that
link |
00:03:46.480
when you think of things, you often erase the memory and rewrite it again. Yes, but I have
link |
00:03:50.400
a memory of you, and that's instantiated in synapses. There's a simpler way to think about
link |
00:03:55.360
it. You have a model of the world in your head, and that model is continually being updated.
link |
00:04:02.400
I updated this morning. You offered me this water. You said it was from the refrigerator.
link |
00:04:07.200
I remember these things. The model includes where we live, the places we know, the words,
link |
00:04:12.960
the objects in the world. It's a monstrous model, and it's constantly being updated.
link |
00:04:17.600
People are just part of that model, as are animals, other physical objects, and events we've
link |
00:04:23.360
done. In my mind, there's no special place for the memories of humans. Obviously, I know a lot about
link |
00:04:33.440
my wife and friends and so on, but it's not like there's a special place for humans over here.
link |
00:04:41.920
We model everything, and we model other people's behaviors too. If I said there's a copy of your
link |
00:04:46.640
mind in my mind, it's just because I've learned how humans behave, and I've learned some things
link |
00:04:53.280
about you, and that's part of my world model. Well, I just also mean the collective intelligence
link |
00:05:00.560
of the human species. I wonder if there's something fundamental to the brain that enables that,
link |
00:05:08.480
so modeling other humans with their ideas. You're actually jumping into a lot of big
link |
00:05:13.600
topics. Collective intelligence is a separate topic that a lot of people like to talk about.
link |
00:05:17.440
We could talk about that. That's interesting. We're not just individuals. We live in society
link |
00:05:24.640
and so on. From our research point of view, again, let's just talk. We studied the neocortex.
link |
00:05:30.960
It's a sheet of neural tissue. It's about 75% of your brain. It runs on this very repetitive
link |
00:05:37.040
algorithm. It's a very repetitive circuit. You can apply that algorithm to lots of different
link |
00:05:44.000
problems, but underneath, it's the same thing. We're just building this model. From our point
link |
00:05:48.640
of view, we wouldn't look for these special circuits someplace buried in your brain that
link |
00:05:52.720
might be related to understanding other humans. It's more like, how do we build a model of
link |
00:05:58.640
anything? How do we understand anything in the world? Humans are just another part of
link |
00:06:02.080
the things we understand. There's nothing in the brain that knows about the
link |
00:06:08.720
emergent phenomenon of collective intelligence? Well, I certainly know about that. I've heard
link |
00:06:13.120
the terms, I've read. No, but that's as an idea.
link |
00:06:16.800
Well, I think we have language, which is built into our brains. That's a key part of collective
link |
00:06:21.920
intelligence. There are some prior assumptions about the world we're going to live in. When
link |
00:06:27.680
we're born, we're not just a blank slate. Did we evolve to take advantage of those situations?
link |
00:06:35.520
Yes. Again, we study only part of the brain, the neocortex. There's other parts of the
link |
00:06:39.040
brain that are very much involved in societal interactions and human emotions and how we
link |
00:06:45.600
interact and even societal issues about how we interact with other people, when we support
link |
00:06:53.280
them, when we're greedy and things like that. Certainly, the brain is a great place
link |
00:07:00.160
to study intelligence. I wonder if it's the fundamental atom of intelligence.
link |
00:07:06.720
Well, I would say it's absolutely a central component, even if you believe in collective
link |
00:07:12.000
intelligence as, hey, that's where it's all happening. That's what we need to study,
link |
00:07:16.320
which I don't believe that, by the way. I think it's really important, but I don't think that
link |
00:07:19.120
is the thing. Even if you do believe that, then you have to understand how the brain works in
link |
00:07:26.080
doing that. It's more like we are intelligent individuals, and together our intelligence is much more
link |
00:07:32.880
magnified. We can do things that we couldn't do individually, but even as
link |
00:07:37.200
individuals, we're pretty damn smart and we can model things and understand the world and interact
link |
00:07:42.000
with it. To me, if you're going to start someplace, you need to start with the brain. Then you could
link |
00:07:48.160
say, well, how do brains interact with each other? What is the nature of language? How do we share
link |
00:07:53.760
models? If I've learned something about the world, how do I share it with you? Which is really
link |
00:07:56.960
what sort of communal intelligence is. I know something, you know something. We've had different
link |
00:08:02.800
experiences in the world. I've learned something about brains. Maybe I can impart that to you. You've
link |
00:08:06.800
learned something about physics and you can impart that to me. Even just the epistemological
link |
00:08:15.200
question of, well, what is knowledge and how do you represent it in the brain? That's where it's
link |
00:08:20.880
going to reside, before it's in our writings. It's obvious that human collaboration, human interaction
link |
00:08:27.280
is how we build societies. But some of the things you talk about and work on,
link |
00:08:34.560
some of those elements of what makes up an intelligent entity are there within a single person.
link |
00:08:40.560
Absolutely. I mean, we can't deny that the brain is the core element here. At least I think it's
link |
00:08:47.040
obvious. The brain is the core element in all theories of intelligence. It's where knowledge
link |
00:08:51.920
is represented. It's where knowledge is created. We interact, we share, we build upon each other's
link |
00:08:58.080
work. But without a brain, you'd have nothing. There would be no intelligence without brains.
link |
00:09:03.920
And so that's where we start. I got into this field because I just was curious as to who I am.
link |
00:09:11.520
How do I think? What's going on in my head when I'm thinking? What does it mean to know something?
link |
00:09:16.560
I can ask what it means for me to know something independent of how I learned it from you or from
link |
00:09:21.200
someone else or from society. What does it mean for me to know that I have a model of you in my
link |
00:09:25.600
head? What does it mean to know I know what this microphone does and how it works physically,
link |
00:09:28.880
even when I can't see it right now? How do I know that? What does it mean? How do the neurons do that
link |
00:09:34.480
at the fundamental level of neurons and synapses and so on? Those are really fascinating questions.
link |
00:09:40.240
And I'd be just happy to understand those if I could.
link |
00:09:44.400
So in your new book, you talk about our brain, our mind as being made up of many brains.
link |
00:09:55.920
So the book is called A Thousand Brains: A New Theory of Intelligence. What is the key idea of this book?
link |
00:10:02.720
The book has three sections and it has sort of maybe three big ideas. So the first section is
link |
00:10:09.360
all about what we've learned about the neocortex and that's the thousand brains theory. Just to
link |
00:10:13.760
complete the picture, the second section is all about AI and the third section is about the future
link |
00:10:16.960
of humanity. So the thousand brains theory, the big idea there, if I had to summarize into one
link |
00:10:27.440
big idea, is that we think of the brain, the neocortex as learning this model of the world.
link |
00:10:33.440
But what we learned is actually there's tens of thousands of independent modeling systems going
link |
00:10:38.560
on. And so each of what we call the columns in the cortex, and there are about 150,000 of them, is a complete modeling
link |
00:10:44.560
system. So it's a collective intelligence in your head in some sense. So the thousand brains theory
link |
00:10:50.320
says, well, where do I have knowledge about this coffee cup or where's the model of this cell phone?
link |
00:10:55.760
It's not in one place. It's in thousands of separate models that are complementary and
link |
00:10:59.360
they communicate with each other through voting. So this idea that we feel like we're one person,
link |
00:11:04.240
that's our experience. We can explain that. But in reality, there's lots of these, it's almost like
link |
00:11:09.920
little brains, but they're sophisticated modeling systems, about 150,000 of them in each human
link |
00:11:16.320
brain. And that's a totally different way of thinking about how the neocortex is structured
link |
00:11:21.280
than we or anyone else thought of even just five years ago. So you mentioned you started
link |
00:11:27.280
this journey just looking in the mirror and trying to understand who you are.
link |
00:11:31.840
So if you have many brains, who are you then? So it's interesting. We have a singular perception,
link |
00:11:38.080
right? We think, oh, I'm just here. I'm looking at you. But it's composed of all these things,
link |
00:11:42.560
like there's sounds and there's vision and there's touch and all kinds of inputs. Yeah,
link |
00:11:48.080
we have the singular perception. And what the thousand brain theory says, we have these models
link |
00:11:51.920
that are visual models. We have a lot of models that are auditory models, touch
link |
00:11:55.040
models and so on, but they vote. And so these things in the cortex, you can think about these
link |
00:12:01.200
columns as like little grains of rice, 150,000 stacked next to each other. And each one is its
link |
00:12:07.920
own little modeling system, but they have these long range connections that go between them.
link |
00:12:12.640
And we call those voting connections or voting neurons. And so the different columns try to
link |
00:12:20.080
reach a consensus. Like, what am I looking at? Okay. Each one has some ambiguity, but they come
link |
00:12:24.640
to a consensus. Oh, there's a water bottle I'm looking at. We are only consciously able to
link |
00:12:30.640
perceive the voting. We're not able to perceive anything that goes on under the hood. So the
link |
00:12:35.680
voting is what we're aware of. The results of the vote.
link |
00:12:39.920
Yeah. Well, you can imagine it this way. We were just talking about eye movements a moment ago. So
link |
00:12:44.560
as I'm looking at something, my eyes are moving about three times a second. And with each movement,
link |
00:12:49.120
a completely new input is coming into the brain. It's not repetitive. It's not shifting it around.
link |
00:12:54.480
I'm totally unaware of it. I can't perceive it. But yet if I looked at the neurons in your brain,
link |
00:12:58.960
they're going on and off, on and off, on and off, on and off. But the voting neurons are not.
link |
00:13:03.040
The voting neurons are saying, we all agree, even though I'm looking at different parts of this,
link |
00:13:06.240
this is a water bottle right now. And that's not changing. And it's in some position and
link |
00:13:11.360
pose relative to me. So I have this perception of the water bottle about two feet away from me
link |
00:13:15.520
at a certain pose to me. That is not changing. That's the only part I'm aware of. I can't be
link |
00:13:20.480
aware of the fact that the inputs from the eyes are moving and changing and all this other stuff is happening.
link |
00:13:25.040
So these long range connections are the part we can be conscious of. The individual activity in
link |
00:13:31.200
each column doesn't go anywhere else. It doesn't get shared anywhere else. There's no way to extract
link |
00:13:37.840
it and talk about it or extract it and even remember it to say, oh, yes, I can recall that.
link |
00:13:45.200
But these long range connections are the things that are accessible to language and to our,
link |
00:13:50.160
like the hippocampus, our memories, our short term memory systems and so on. So we're not aware of
link |
00:13:56.640
95% or maybe it's even 98% of what's going on in your brain. We're only aware of this sort of
link |
00:14:02.960
stable, somewhat stable voting outcome of all these things that are going on underneath the hood.
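A rough sketch of this column-voting idea in Python (purely illustrative; the data structures and object names here are invented for the example, this is not Numenta's code):

from collections import Counter

def column_vote(column_hypotheses):
    # Each column contributes the set of objects it currently thinks it
    # might be sensing; the consensus is whatever most columns agree on.
    tally = Counter()
    for hypotheses in column_hypotheses:
        tally.update(hypotheses)
    consensus, votes = tally.most_common(1)[0]
    return consensus, votes

# Three columns, each ambiguous on its own, only agree on "water bottle".
columns = [
    {"water bottle", "coffee cup"},
    {"water bottle", "soda can"},
    {"water bottle", "coffee cup", "vase"},
]
print(column_vote(columns))  # ('water bottle', 3)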
link |
00:14:09.920
So what would you say is the basic element in the thousand brains theory
link |
00:14:15.520
of intelligence? Like what's the atom of intelligence when you think about it? Is it
link |
00:14:21.040
the individual brains? And then what is a brain? Well, can we just talk about what
link |
00:14:25.920
intelligence is first, and then we can talk about what the elements are? So in my book,
link |
00:14:31.440
intelligence is the ability to learn a model of the world, to build internal to your head,
link |
00:14:38.560
a model that represents the structure of everything you know, to know that this is a
link |
00:14:42.720
table and that's a coffee cup and this is a gooseneck lamp and all this to know these things.
link |
00:14:47.200
I have to have a model of it in my head. I just don't look at them and go, what is that?
link |
00:14:50.720
I already have internal representations of these things in my head and I had to learn them. I wasn't
link |
00:14:55.680
born with any of that knowledge, and neither were you. We have some lights in the room here. You know,
link |
00:15:00.320
that's not part of my evolutionary heritage, right? It's not in my genes. So, um, we have this
link |
00:15:05.360
incredible model and the model includes not only what things look like and feel like, but where
link |
00:15:09.040
they are relative to each other and how they behave. I've never picked up this water bottle
link |
00:15:12.800
before, but I know that if I put my hand on that blue thing and I turn it, it'll probably make a
link |
00:15:16.240
funny little sound as the little plastic things detach and then it'll rotate and it'll rotate a
link |
00:15:20.720
certain way and it'll come off. How do I know that? Because I have this model in my head.
link |
00:15:24.480
So the essence of intelligence is our ability to learn a model and the more sophisticated our
link |
00:15:29.360
model is, the smarter we are. Uh, not that there is a single intelligence, because you can know
link |
00:15:34.880
about, you know, a lot about things that I don't know. And I know about things you don't know.
link |
00:15:37.680
And we can both be very smart, but we both learned a model of the world through interacting with it.
link |
00:15:42.080
So that is the essence of intelligence. Then we can ask ourselves, what are the mechanisms in the
link |
00:15:46.320
brain that allow us to do that? And what are the mechanisms of learning, not just the neural
link |
00:15:50.560
mechanisms, what is the general process by which we learn a model? So that was a big insight for us.
link |
00:15:54.800
It's like, what are the actual mechanisms? How do you learn this stuff? It turns out
link |
00:15:59.840
you have to learn it through movement. You can't learn it just by sitting there; that's how we
link |
00:16:04.000
learn. We learn through movement. So you build up this model by observing things and
link |
00:16:07.840
touching them and moving them and walking around the world and so on. So either you move or the
link |
00:16:11.680
thing moves somehow. Yeah. You obviously can learn things just by reading a book, something like that.
link |
00:16:16.960
But think about if I were to say, oh, here's a new house. I want you to learn, you know,
link |
00:16:21.120
what do you do? You have to walk from room to room. You have to open the doors,
link |
00:16:25.440
look around, see what's on the left, what's on the right. As you do this, you're building a model in
link |
00:16:29.680
your head. It's just, that's what you're doing. You can't just sit there and say, I'm going to grok
link |
00:16:34.000
the house. No. Or you could, but you don't even want to just sit down and read some
link |
00:16:37.360
description of it, right? Yeah. You literally physically interact. The same with like a smartphone.
link |
00:16:41.600
If I'm going to learn a new app, I touch it and I move things around. I see what happens
link |
00:16:45.760
when I do things with it. So that's the basic way we learn in the world. And by the way,
link |
00:16:49.600
when you say model, you mean something that can be used for prediction in the future.
link |
00:16:54.720
It's used for prediction and for behavior and planning. Right. And does a pretty good job
link |
00:17:02.000
doing so. Yeah. Here's the way to think about the model. A lot of people get hung up on this. So
link |
00:17:08.320
you can imagine an architect making a model of a house, right? So there's a physical model that's
link |
00:17:13.360
small. And why do they do that? Well, we do that because you can imagine what it would look like
link |
00:17:17.520
from different angles. Okay. Look from here, look from there. And you can also say, well,
link |
00:17:21.200
how far is it to get from the garage to the swimming pool or something like that. Right. You
link |
00:17:25.760
can imagine looking at this and you can say, what would be the view from this location? So we build
link |
00:17:29.120
these physical models to let you imagine the future and imagine behaviors. Now we can take
link |
00:17:34.720
that same model and put it in a computer. So we now, today they'll build models of houses in a
link |
00:17:39.840
computer and they do that using a set of, we'll come back to this term in a moment,
link |
00:17:45.840
reference frames. Basically you assign a reference frame for the house and you assign
link |
00:17:49.680
the different parts of the house to different locations. And then the computer can generate
link |
00:17:53.280
an image and say, okay, this is what it looks like in this direction. The brain is doing something
link |
00:17:56.960
remarkably similar to this, surprisingly. It's using reference frames. It's building these,
link |
00:18:02.160
it's similar to a model on a computer, which has the same benefits of building a physical model.
link |
00:18:06.160
It allows me to say, what would this thing look like if it was in this orientation? What would
link |
00:18:10.480
likely happen if I push this button? I've never pushed this button before, or how would I accomplish
link |
00:18:15.120
something? I want to, I want to convey a new idea I've learned. How would I do that? I can imagine
link |
00:18:21.520
in my head, well, I could talk about it. I could write a book. I could do some podcasts. I could,
link |
00:18:28.400
you know, maybe tell my neighbor, you know, and I can imagine the outcomes of all these things
link |
00:18:32.720
before I do any of them. That's what the model lets you do. It lets us plan the future and
link |
00:18:36.880
imagine the consequences of our actions. Prediction, you asked about prediction. Prediction
link |
00:18:42.720
is not the goal of the model. Prediction is an inherent property of it, and it's how the model
link |
00:18:48.800
corrects itself. So prediction is fundamental to intelligence. It's fundamental to building a model,
link |
00:18:55.600
and the model's intelligent. And let me go back and be very precise about this. Prediction,
link |
00:19:00.000
you can think of prediction two ways. One is like, hey, what would happen if I did this? That's a
link |
00:19:03.520
type of prediction. That's a key part of intelligence. But another use of prediction is like, oh,
link |
00:19:07.920
what's this water bottle going to feel like when I pick it up, you know? And that doesn't seem very
link |
00:19:13.120
intelligent. But one way to think about prediction is it's a way for us to learn where our model is
link |
00:19:20.400
wrong. So if I picked up this water bottle and it felt hot, I'd be very surprised. Or if I picked
link |
00:19:26.080
it up and it was very light, I'd be surprised. Or if I turned this top and I had to turn it the other
link |
00:19:32.720
way, I'd be surprised. And so for all those I might have a prediction like, okay, I'm going to do it. I'll
link |
00:19:38.480
drink some water. Okay, I do this. There it is. I feel it opening, right? What if I had to turn
link |
00:19:42.720
it the other way? Or what if it's split in two? Then I say, oh my gosh, I misunderstood this. I
link |
00:19:47.360
didn't have the right model of this thing. My attention would be drawn to it. I'd be looking at
link |
00:19:50.400
it going, well, how the hell did that happen? Why did it open up that way? And I would update my
link |
00:19:55.360
model by doing it. Just by looking at it and playing around with it, I'd update it and say, this is
link |
00:19:58.400
a new type of water bottle. So you're talking about sort of complicated things like a water bottle,
link |
00:20:05.040
but this also applies for just basic vision, just like seeing things. It's almost like a
link |
00:20:10.640
precondition of just perceiving the world is predicting it. So just everything that you see
link |
00:20:18.000
is first passed through your prediction. Everything you see and feel. In fact,
link |
00:20:23.600
this was the insight I had back in the early 80s. And I know that people have reached the same idea
link |
00:20:31.680
is that every sensory input you get, not just vision, but touch and hearing, you have an
link |
00:20:37.600
expectation about it and a prediction. Sometimes you can predict very accurately. Sometimes you
link |
00:20:43.440
can't. I can't predict the next word that's going to come out of your mouth. But as you start talking,
link |
00:20:47.520
I'll get better and better predictions. And if you talk about some topics, I'd be very surprised.
link |
00:20:51.440
So I have this sort of background prediction that's going on all the time for all of my senses.
link |
00:20:58.160
Again, the way I think about that is this is how we learn. It's more about how we learn.
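This predict-compare-update loop can be sketched very roughly in Python (a toy illustration of the idea as described here, not a model of actual neural tissue; the dictionary-based "model" is invented for the example):

# The model predicts the sensation expected for a feature of an object;
# a mismatch ("surprise") is the signal that the model needs updating.
def check_and_update(model, obj, feature, observed):
    predicted = model.get((obj, feature))
    if predicted != observed:
        print(f"surprise: expected {predicted!r}, got {observed!r}")
        model[(obj, feature)] = observed  # revise the model from experience
    return model

model = {("water bottle", "cap"): "twists off with a little click"}
check_and_update(model, "water bottle", "cap", "twists off with a little click")  # matches, no surprise
check_and_update(model, "water bottle", "cap", "splits in two")                   # surprise, model updated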
link |
00:21:04.960
It's a test of our understanding. Our predictions are a test. Is this really a water bottle? If it
link |
00:21:10.000
is, I shouldn't see a little finger sticking out the side. And if I saw a little finger sticking
link |
00:21:14.960
out, I'd be like, oh, what the hell's going on? That's not normal. I mean, that's fascinating
link |
00:21:20.480
that... Let me linger on this for a second. It really honestly feels that prediction is
link |
00:21:27.760
fundamental to everything, to the way our mind operates, to intelligence. So it's just a different
link |
00:21:35.760
way to see intelligence, which is like everything starts with a prediction. And prediction requires a
link |
00:21:41.040
model. You can't predict something unless you have a model of it. Right. But the action is
link |
00:21:46.880
prediction. So the thing the model does is prediction. But it also... Yeah. But you can
link |
00:21:53.280
then extend it to things like, oh, what would happen if I took this today? I went and did this.
link |
00:21:59.600
What would be likely? Or how... You can extend prediction to like, oh, I want to get a promotion
link |
00:22:04.320
at work. What action should I take? And you can say, if I did this, I predict what might happen.
link |
00:22:09.280
If I spoke to someone, I predict what might happen. So it's not just low level predictions.
link |
00:22:13.360
Yeah. It's all predictions. It's all predictions. It's like this black box so you can ask basically
link |
00:22:17.440
any question, low level or high level. So we started off with that observation. It's
link |
00:22:21.920
this nonstop prediction. And I write about this in the book. And then we asked, how do neurons
link |
00:22:27.120
actually make predictions physically? Like what does the neuron do when it makes a prediction?
link |
00:22:32.400
Or the neural tissue does when it makes a prediction. And then we asked, what are the
link |
00:22:35.760
mechanisms by how we build a model that allows you to make predictions? So we started with prediction
link |
00:22:40.400
as sort of the fundamental research agenda, in some sense. And said, well, if we understand how
link |
00:22:47.520
the brain makes predictions, we'll understand how it builds these models and how it learns.
link |
00:22:51.360
And that's the core of intelligence. So it was the key that got us in the door
link |
00:22:55.680
to say, that is our research agenda. Understand predictions.
link |
00:22:59.360
So in this whole process, where does intelligence originate, would you say?
link |
00:23:05.200
So if we look at things that are much less intelligent than humans and you start to build
link |
00:23:12.560
up a human through the process of evolution, where's this magic thing that has a prediction
link |
00:23:19.920
model or a model that's able to predict that starts to look a lot more like intelligence?
link |
00:23:24.720
Is there a place where... Richard Dawkins wrote an introduction to your book, an excellent
link |
00:23:30.960
introduction. I mean, it puts a lot of things into context and it's funny just looking
link |
00:23:36.320
at parallels between your book and Darwin's Origin of Species. So Darwin wrote about the origin
link |
00:23:42.640
of species. So what is the origin of intelligence?
link |
00:23:47.760
Well, we have a theory about it and it's just that, it's a theory. The theory goes as follows.
link |
00:23:53.200
As soon as living things started to move, they're not just floating in the sea, they're not just a
link |
00:23:58.720
plant, you know, grounded someplace. As soon as they started to move, there was an advantage to
link |
00:24:03.920
moving intelligently, to moving in certain ways. And there's some very simple things you can do,
link |
00:24:08.960
you know, bacteria or single cell organisms can move toward a gradient of
link |
00:24:14.480
food or something like that. But an animal that might know where it is and know where it's been
link |
00:24:19.280
and how to get back to that place, or an animal that might say, oh, there was a source of food
link |
00:24:23.520
someplace, how do I get to it? Or there was a danger, how do I get to it? There was a mate, how
link |
00:24:29.040
do I get to them? There was a big evolutionary advantage to that. So early on, there was a
link |
00:24:34.480
pressure to start understanding your environment, like where am I and where have I been? And what
link |
00:24:40.640
happened in those different places? So we still have this neural mechanism in our brains. In the
link |
00:24:49.600
mammals, it's in the hippocampus and entorhinal cortex, these are older parts of the brain.
link |
00:24:55.520
And these are very well studied. We build a map of our environment. So these neurons in
link |
00:25:02.240
these parts of the brain know where I am in this room, and where the door was and things like that.
link |
00:25:07.360
So a lot of other mammals have this?
link |
00:25:09.360
All mammals have this, right? And almost any animal that knows where it is and can get around
link |
00:25:15.600
must have some mapping system, must have some way of saying, I've learned a map of my environment,
link |
00:25:21.360
I have hummingbirds in my backyard. And they go to the same places all the time. They must know
link |
00:25:26.640
where they are. They're not just randomly flying around. They
link |
00:25:30.000
know. They know particular flowers they come back to. So we all have this. And it turns out it's
link |
00:25:36.160
very tricky to get neurons to do this, to build a map of an environment. And so we now know,
link |
00:25:42.320
there's these famous studies that are still very active about place cells and grid cells and these
link |
00:25:47.440
other types of cells in the older parts of the brain, and how they build these maps of the world.
link |
00:25:51.920
It's really clever. It's obviously been under a lot of evolutionary pressure over a long period
link |
00:25:55.920
of time to get good at this. So animals now know where they are. What we think has happened,
link |
00:26:01.920
and there's a lot of evidence to suggest this, is that that mechanism for learning to map
link |
00:26:06.080
a space was repackaged. The same types of neurons were repackaged into a more compact form.
link |
00:26:17.840
And that became the cortical column. And it was in some sense, genericized, if that's a word. It
link |
00:26:23.760
was turned from a very specific thing about learning maps of environments into learning maps
link |
00:26:28.800
of anything, learning a model of anything, not just your space, but coffee cups and so on. And
link |
00:26:34.960
it got sort of repackaged into a more compact version, a more universal version,
link |
00:26:41.280
and then replicated. So the reason we're so flexible is we have a very generic version of
link |
00:26:46.800
this mapping algorithm, and we have 150,000 copies of it. Sounds a lot like the progress
link |
00:26:52.800
of deep learning. How so? So take neural networks that seem to work well for a specific task,
link |
00:27:00.480
compress them, and multiply them by a lot. And then you just stack them on top of each other. It's like the
link |
00:27:07.680
story of transformers in natural language processing. Yeah. But in deep learning networks,
link |
00:27:12.640
you end up replicating an element, but you still need the entire network to do anything.
link |
00:27:18.160
Right. Here, what's going on, each individual element is a complete learning system. This is
link |
00:27:24.240
why I can take a human brain, cut it in half, and it still works. It's the same thing.
link |
00:27:29.680
It's pretty amazing. It's fundamentally distributed. It's fundamentally distributed,
link |
00:27:34.000
complete modeling systems. But that's our story we like to tell. I would guess it's likely largely
link |
00:27:42.560
right. And there's a lot of evidence supporting that story, this evolutionary story. The thing
link |
00:27:50.080
which brought me to this idea is that the human brain got big very quickly. So that led to the
link |
00:27:58.720
proposal a long time ago that, well, there's this common element; instead of creating
link |
00:28:02.640
new things, it just replicated something. We also are extremely flexible. We can learn things that
link |
00:28:07.680
we had no history with. And that tells us that the learning algorithm is very generic. It's very
link |
00:28:15.360
kind of universal because it doesn't assume any prior knowledge about what it's learning.
link |
00:28:20.960
And so you combine those things together and you say, okay, well, how did that come about? Where
link |
00:28:26.160
did that universal algorithm come from? It had to come from something that wasn't universal. It
link |
00:28:29.760
came from something that was more specific. So anyway, this led to our hypothesis that
link |
00:28:34.000
you would find grid cell and place cell equivalents in the neocortex. And when we
link |
00:28:38.960
first published our first papers on this theory, we didn't know of evidence for that. It turns out
link |
00:28:43.760
there was some, but we didn't know about it. So then we became aware of evidence for grid
link |
00:28:48.960
cells in parts of the neocortex. And then now there's been new evidence coming out. There's some
link |
00:28:53.200
interesting papers that came out just January of this year. So one of our predictions was if this
link |
00:28:59.360
evolutionary hypothesis is correct, we would see grid cell place cell equivalents, cells that work
link |
00:29:04.000
like them through every column in the neocortex. And that's starting to be seen. What does it mean
link |
00:29:08.640
that, why is it important that they're present? Because it tells us, well, we're asking about the
link |
00:29:13.920
evolutionary origin of intelligence, right? So our theory is that these columns in the cortex
link |
00:29:19.120
are working on the same principles, they're modeling systems. And it's hard to imagine how
link |
00:29:25.120
neurons do this. And so we said, hey, it's really hard to imagine how neurons could learn these
link |
00:29:30.240
models of things. We can talk about the details of that if you want. But there's this other part
link |
00:29:36.480
of the brain, we know that learns models of environments. So could that mechanism to learn
link |
00:29:41.840
to model this room be used to learn to model the water bottle? Is it the same mechanism? So we said
link |
00:29:47.280
it's much more likely the brain's using the same mechanism, in which case it would have these equivalent
link |
00:29:52.400
cell types. So it's basically the whole theory is built on the idea that these columns have
link |
00:29:57.920
reference frames and they're learning these models and these grid cells create these reference frames.
link |
00:30:02.640
So it's basically the major, in some sense, the major predictive part of this theory is that we
link |
00:30:09.200
will find these equivalent mechanisms in each column in the neocortex, which tells us that
link |
00:30:14.560
that's what they're doing. They're learning these sensory motor models of the world. So we're pretty
link |
00:30:21.600
confident that would happen, but now we're seeing the evidence. So the evolutionary process, nature
link |
00:30:26.000
does a lot of copy and paste and sees what happens. Yeah. Yeah. There's no direction to it. But it
link |
00:30:31.920
just found out like, hey, if I took these elements and made more of them, what happens? And let's hook
link |
00:30:37.600
them up to the eyes and let's hook them to ears. And that seems to work pretty well for us. Again,
link |
00:30:43.600
just to take a quick step back to our conversation of collective intelligence.
link |
00:30:48.960
Do you sometimes see that as just another copy and paste aspect is copying and pasting
link |
00:30:56.160
these brains and humans and making a lot of them and then creating social structures that then
link |
00:31:04.080
almost operate as a single brain? I wouldn't have said that, but as you said it, it sounded pretty good.
link |
00:31:08.320
So to you, the brain is its own thing.
link |
00:31:15.440
I mean, our goal is to understand how the neocortex works. We can argue how essential
link |
00:31:20.560
that is to understand the human brain because it's not the entire human brain. You can argue
link |
00:31:25.200
how essential that is to understanding human intelligence. You can argue how essential this
link |
00:31:29.680
is to sort of communal intelligence. Our goal was to understand the neocortex.
link |
00:31:38.640
Yeah. So what is the neocortex and where does it fit
link |
00:31:41.680
in the various aspects of what the brain does? Like how important is it to you?
link |
00:31:46.480
Well, obviously, as I mentioned in the beginning, it's about 70 to 75% of the volume of
link |
00:31:53.680
the human brain. So it dominates our brain in terms of size. Not in terms of number of neurons,
link |
00:31:58.640
but in terms of size.
link |
00:32:00.640
Size isn't everything, Jeff.
link |
00:32:02.400
I know, but it's not just that. We know that all high level vision,
link |
00:32:09.040
hearing, and touch happens in the neocortex. We know that all language occurs and is understood
link |
00:32:13.920
in the neocortex, whether that's spoken language, written language, sign language,
link |
00:32:17.280
whether it's language of mathematics, language of physics, music. We know that all high level
link |
00:32:23.360
planning and thinking occurs in the neocortex. If I were to say, what part of your brain designed
link |
00:32:27.840
a computer and understands programming and creates music? It's all the neocortex.
link |
00:32:33.040
So that's an undeniable fact. But there are other parts of our brain that are important too,
link |
00:32:39.920
right? Our emotional states, regulating our body. So the way I like to look at it is,
link |
00:32:48.400
can you understand the neocortex without the rest of the brain? And some people say you can't,
link |
00:32:53.200
and I think absolutely you can. It's not that they're not interacting, but you can understand.
link |
00:32:58.480
Can you understand the neocortex without understanding the emotions of fear? Yes,
link |
00:33:01.920
you can. You can understand how the system works. It's just a modeling system. I make the analogy
link |
00:33:06.480
in the book that it's like a map of the world, and how that map is used depends on who's using it.
link |
00:33:12.720
So we have our map of the world in our neocortex, but how we manifest as humans depends on the rest of our
link |
00:33:19.680
brain. What are our motivations? What are my desires? Am I a nice guy or not a nice guy?
link |
00:33:23.760
Am I a cheater or not a cheater? How important are different things in my life?
link |
00:33:33.840
But the neocortex can be understood on its own. And I say that as a neuroscientist,
link |
00:33:39.760
I know there's all these interactions, and I don't want to say I don't know them and we
link |
00:33:43.840
don't think about them. But from a layperson's point of view, you can say it's a modeling system.
link |
00:33:47.840
I don't generally think too much about the communal aspect of intelligence, which you brought up a
link |
00:33:51.680
number of times already. So that's not really been my concern.
link |
00:33:55.040
I just wonder if there's a continuum from the origin of the universe, like
link |
00:34:00.320
these pockets of complexity that form living organisms. I wonder if we're just,
link |
00:34:08.480
if you look at humans, we feel like we're at the top. And I wonder whether
link |
00:34:13.120
every living pocket of complexity
link |
00:34:20.800
probably thinks they're, pardon the French, the shit. That they're at the top of the
link |
00:34:26.240
pyramid. Well, if they're thinking. Well, then what is thinking? In this sense,
link |
00:34:32.240
the whole point is in their sense of the world, their sense is that they're at the top of it.
link |
00:34:40.560
Who knows what a turtle thinks. But you're bringing up, you know,
link |
00:34:44.320
the problems of complexity and complexity theory, which, you know, is a huge,
link |
00:34:48.880
interesting problem in science. And you know, I think we've made surprisingly little progress
link |
00:34:55.280
in understanding complex systems in general. And so, you know, the Santa Fe Institute was
link |
00:35:01.120
founded to study this and even the scientists there will say, it's really hard. We haven't
link |
00:35:05.200
really been able to figure out exactly, you know, that science hasn't really congealed yet. We're
link |
00:35:10.560
still trying to figure out the basic elements of that science. You know, where does
link |
00:35:15.360
complexity come from and what is it and how do you define it, whether it's DNA creating bodies or
link |
00:35:20.000
phenotypes, or individuals creating societies, or ants and, you know, markets and so on. It's
link |
00:35:26.480
a very complex thing. I'm not a complexity theory person, right? And I think you
link |
00:35:32.800
should ask, well, the brain itself is a complex system. So can we understand that? I think
link |
00:35:38.000
we've made a lot of progress understanding how the brain works. But I haven't
link |
00:35:42.640
brought it out to like, oh, well, where are we on the complexity spectrum? You know, it's like,
link |
00:35:47.520
It's a great question. I'd prefer for that answer to be that we're not special. It seems like
link |
00:35:55.680
if we're honest, most likely we're not special. So if there is a spectrum, we're probably not in some
link |
00:36:01.680
kind of significant place on it. There's one thing we could say makes us special. And again,
link |
00:36:06.480
only here on Earth, I'm not saying beyond that. If we think about knowledge, what we know,
link |
00:36:14.080
human brains are clearly the only brains that have certain types of knowledge.
link |
00:36:21.040
We're the only brains on this Earth to understand what the Earth is, how old it is,
link |
00:36:25.920
and the universe as a picture as a whole. We're the only organisms that understand DNA and
link |
00:36:30.400
the origins of species. No other species on this planet has that knowledge.
link |
00:36:37.200
So I like to think about, you know, one of the endeavors of humanity being to
link |
00:36:43.440
understand the universe as much as we can. I think our species is further along in that
link |
00:36:49.920
undeniably. Whether our theories are right or wrong, we can debate, but at least we have
link |
00:36:54.480
theories. You know, we know what the sun is and how fusion works and what black holes
link |
00:36:59.760
are and, you know, we know the general theory of relativity, and no other animal has any of this
link |
00:37:04.400
knowledge. So in that sense, we're special. Are we special in terms of the hierarchy of
link |
00:37:10.800
complexity in the universe? Probably not. Can we look at a neuron? Yeah. You say that prediction
link |
00:37:20.960
happens in the neuron. What does that mean? So the neuron traditionally is seen as the
link |
00:37:24.960
basic element of the brain. As I mentioned earlier, prediction was our research agenda.
link |
00:37:31.760
Yeah. We said, okay, how does the brain make a prediction? Like, I'm about to grab this water
link |
00:37:37.840
bottle and my brain is predicting what I'm going to feel on all the parts of my fingers. If I
link |
00:37:42.720
felt something really odd on any part here, I'd notice it. So my brain is predicting what it's
link |
00:37:46.560
going to feel as I grab this thing. So how does that manifest itself in neural
link |
00:37:51.360
tissue? Right. We've got brains made of neurons, and there's chemicals and there's
link |
00:37:57.360
spikes and connections, you know, so where is the prediction going on? And one argument could be
link |
00:38:03.600
that, well, when I'm predicting something, um, a neuron must be firing in advance. It's like, okay,
link |
00:38:09.360
this neuron represents what you're going to feel and it's firing. It's sending a spike.
link |
00:38:13.600
And certainly that happens to some extent, but our predictions are so ubiquitous
link |
00:38:17.760
that we're making so many of them, and we're totally unaware of the vast majority of them. You
link |
00:38:21.360
have no idea that you're doing this. So we were trying to figure out,
link |
00:38:27.120
how could this be? Where are these predictions happening? Right. And I won't walk you
link |
00:38:31.920
through the whole story unless you insist upon it. But we came to the realization that most of your
link |
00:38:38.880
predictions are occurring inside individual neurons, especially the most common
link |
00:38:43.440
type, the pyramidal cells. There's a property of neurons. Everyone knows,
link |
00:38:49.120
or most people know that a neuron is a cell and it has this spike called an action potential,
link |
00:38:53.280
and it sends information. But we now know that there's these spikes internal to the neuron,
link |
00:38:58.160
they're called dendritic spikes. They travel along the branches of the neuron and they don't leave
link |
00:39:03.200
the neuron. They're just internal only. There's far more dendritic spikes than there are action
link |
00:39:08.240
potentials, far more. They're happening all the time. And what we came to understand is that those
link |
00:39:14.240
dendritic spikes, the ones that are occurring are actually a form of prediction. They're telling the
link |
00:39:18.880
neuron, the neuron is saying, I expect that I might become active shortly.
link |
00:39:25.360
So the internal spike is a way of saying, you might be generating external spikes
link |
00:39:30.240
soon. I predict you're going to become active. We wrote a paper in 2016
link |
00:39:36.640
which explained how this manifests itself in neural tissue and how it is that this all works
link |
00:39:42.480
together. And there's a lot of evidence supporting it. So
link |
00:39:48.320
that's why we think that most of these predictions are internal. And that's why you can't
link |
00:39:51.360
perceive them; they're internal to the neuron.
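A toy sketch of this idea in Python (not the model from the 2016 paper, just an illustration; the class, inputs, and threshold are invented): a neuron whose dendrites recognize a learned context pattern enters an internal "predictive state", which never leaves the cell but changes how it responds when real input arrives.

class ToyNeuron:
    def __init__(self, context_synapses):
        self.context_synapses = set(context_synapses)  # learned dendritic connections
        self.predictive = False                        # internal state, invisible outside the cell

    def integrate_context(self, active_inputs, threshold=2):
        # A "dendritic spike": enough context synapses active -> predict becoming active.
        self.predictive = len(self.context_synapses & set(active_inputs)) >= threshold

    def feedforward(self, driven):
        # Only real feedforward input produces an external spike; being
        # predictive just means the neuron is primed to respond sooner.
        if not driven:
            return None
        return "fires early (predicted)" if self.predictive else "fires late (unpredicted)"

n = ToyNeuron(context_synapses={"A", "B", "C"})
n.integrate_context({"A", "B"})      # dendritic spike puts the cell in a predictive state
print(n.predictive)                  # True
print(n.feedforward(driven=True))    # 'fires early (predicted)'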
link |
00:39:54.160
Well, from understanding the prediction mechanism of a single neuron, do you think there's deep
link |
00:40:00.080
insights to be gained about the prediction capabilities of the mini brains,
link |
00:40:05.520
and then the bigger brain, the brain as a whole?
link |
00:40:08.160
Oh yeah. So having a prediction inside an individual neuron is not that useful on its own.
link |
00:40:12.720
So what? The way it manifests itself in neural tissue is that when a neuron emits a spike, it's
link |
00:40:22.320
a very singular type of event. If a neuron is predicting that it's going to be active, it
link |
00:40:27.440
emits its spike a little bit sooner, just a few milliseconds sooner, than it would have
link |
00:40:31.840
otherwise. I give the analogy in the book of a sprinter on the starting blocks
link |
00:40:36.240
in a race. If someone says, ready, set, you get up and you're ready to go. And then when
link |
00:40:42.480
the race starts, you get a little bit earlier start. That ready-set is like
link |
00:40:46.320
the prediction, and the neuron is ready to go quicker. And what happens is when you have a whole
link |
00:40:50.800
bunch of neurons together and they're all getting these inputs, the ones that are in the predictive
link |
00:40:55.520
state, the ones that are anticipating becoming active, if they do become active, they fire
link |
00:40:59.920
sooner and they disable everything else. And it leads to different representations in the brain. So
link |
00:41:04.240
it's not isolated just to the neuron; the prediction occurs within the neuron,
link |
00:41:09.600
but the network behavior changes. So what happens under different predictions, different inputs
link |
00:41:14.880
have different representations. So what I predict is going to be different under different
link |
00:41:20.800
contexts; what my input will be is different under different contexts.
link |
00:41:24.960
This is a key to how the whole theory works. So the theory of the thousand brains,
link |
00:41:30.560
if you were to count the number of brains, how would you do it? The thousand brain theory says
link |
00:41:35.920
that basically every cortical column in your cortex is a complete modeling system.
link |
00:41:42.320
And that when I ask, where do I have a model of something like a coffee cup? It's not in one of
link |
00:41:46.800
those models. It's in thousands of those models. There's thousands of models of coffee cups. That's
link |
00:41:51.040
what the thousand brains theory says. Then there's a voting mechanism, which is
link |
00:41:56.160
the thing you're conscious of, and which leads to your singular perception. That's why
link |
00:42:01.520
you perceive something. So that's the thousand brains theory. The details of how we got to that
link |
00:42:07.200
theory are complicated. It wasn't that we just thought of it one day. And one of those details was that we
link |
00:42:13.440
had to ask, how does a model make predictions? And we've talked about just these predictive neurons.
link |
00:42:18.160
That's part of this theory. It's like saying, Oh, it's a detail, but it was like a crack in the
link |
00:42:22.320
door. It's like, how are we going to figure out how these neurons do this? You know,
link |
00:42:24.960
what is going on here? So we just looked at prediction as like, well, we know that's ubiquitous.
link |
00:42:30.080
We know that every part of the cortex is making predictions. Therefore, whatever the predictive
link |
00:42:34.400
system is, it's going to be everywhere. We know there's a gazillion predictions happening at once.
link |
00:42:39.040
So this is where we can start teasing apart, you know, ask questions about, you know, how could
link |
00:42:44.000
neurons be making these predictions? And that sort of built up to what we now have, this thousand
link |
00:42:48.640
brains theory, which is complex. I can state it simply, but we just didn't
link |
00:42:53.200
think of it all at once. We had to get there step by step; it took years to get there.
link |
00:42:59.200
And where do reference frames fit in? So, yeah.
link |
00:43:04.560
Okay. So again, a reference frame, I mentioned earlier about the model of a house. And I said,
link |
00:43:11.200
if you're going to build a model of a house in a computer, they have a reference frame. And you
link |
00:43:14.560
can think of reference frame like Cartesian coordinates, like X, Y, and Z axes. So I could
link |
00:43:19.600
say, oh, I'm going to design a house. I can say, well, the front door is at this location, X, Y,
link |
00:43:24.400
Z, and the roof is at this location, X, Y, Z, and so on. That's a type of reference frame.
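A tiny illustration of that kind of reference frame in Python (the coordinates are made up, purely for the example): features of a house are stored at locations relative to the house's own origin, so relationships between them can be computed.

house = {
    "front door": (0.0, 0.0, 0.0),
    "roof peak": (5.0, 4.0, 9.0),
    "garage": (-8.0, 2.0, 0.0),
}

def offset(frame, a, b):
    # Vector from feature a to feature b within the same reference frame.
    ax, ay, az = frame[a]
    bx, by, bz = frame[b]
    return (bx - ax, by - ay, bz - az)

print(offset(house, "front door", "garage"))  # (-8.0, 2.0, 0.0)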
link |
00:43:29.440
So it turns out for you to make a prediction, and I walk you through the thought experiment in the
link |
00:43:33.600
book where I was predicting what my finger was going to feel when I touched a coffee cup.
link |
00:43:37.360
It was a ceramic coffee cup, but this one will do. And what I realized is that to make a prediction
link |
00:43:45.200
of what my finger's going to feel, like it's going to feel different than this, or what does it feel
link |
00:43:48.240
like if I touch the hole or the thing on the bottom. To make that prediction, the cortex needs to
link |
00:43:53.280
know where the finger is, the tip of the finger, relative to the coffee cup. And exactly relative
link |
00:43:59.360
to the coffee cup. And to do that, I have to have a reference frame for the coffee cup. It has to
link |
00:44:03.440
have a way of representing the location of my finger relative to the coffee cup. And then we realized,
link |
00:44:08.160
of course, every part of your skin has to have a reference frame relative to the things it touches.
link |
00:44:11.360
And then we did the same thing with vision. So the idea that a reference frame is necessary
link |
00:44:16.240
to make a prediction when you're touching something or when you're seeing something
link |
00:44:20.080
and you're moving your eyes or you're moving your fingers, it's just a requirement
link |
00:44:24.000
to predict. If I have a structure and I'm going to make a prediction, I have to know where on it I'm
link |
00:44:29.200
looking or touching. So then we said, well, how do neurons make reference frames? It's not obvious.
link |
00:44:36.160
X, Y, Z coordinates don't exist in the brain. It's just not the way it works. So that's when we
link |
00:44:40.480
looked at the older part of the brain, the hippocampus and the entorhinal cortex, where we knew
link |
00:44:45.120
that in that part of the brain, there's a reference frame for a room or a reference frame for an
link |
00:44:49.920
environment. Remember, I talked earlier about how you could make a map of this room. So we said,
link |
00:44:55.200
oh, they are implementing reference frames there. So we knew that reference frames needed to exist
link |
00:45:01.440
in every cortical column. And so that was a deductive thing. We just deduced it. It has to
link |
00:45:07.680
exist. So you take the old mammalian ability to know where you are in a particular space
link |
00:45:15.920
and you start applying that to higher and higher levels.
link |
00:45:18.320
Yeah. First you apply it to like where your finger is. So here's how I think about it.
link |
00:45:22.560
The old part of the brain says, where's my body in this room? The new part of the brain says,
link |
00:45:26.720
where's my finger relative to this object? Where is a section of my retina relative to
link |
00:45:34.720
this object? I'm looking at one little corner. Where is that relative to this patch of my retina?
link |
00:45:40.800
And then we take the same thing and apply it to concepts, mathematics, physics, humanity,
link |
00:45:47.280
whatever you want to think about. And eventually you're pondering your own mortality.
link |
00:45:50.240
Well, whatever. But the point is when we think about the world, when we have knowledge about
link |
00:45:55.520
the world, how is that knowledge organized, Lex? Where is it in your head? The answer is it's in
link |
00:46:00.560
reference frames. So the way I learned the structure of this water bottle where the
link |
00:46:05.920
features are relative to each other, when I think about history or democracy or mathematics,
link |
00:46:11.200
the same basic underlying structure is happening. There are reference frames that the knowledge
link |
00:46:15.680
you're acquiring gets assigned to. So in the book, I go through examples like mathematics
link |
00:46:19.200
and language and politics. But the evidence is very clear in the neuroscience. The same mechanism
link |
00:46:25.920
that we use to model this coffee cup, we're going to use to model high level thoughts.
link |
00:46:30.160
Your demise of humanity, whatever you want to think about.
link |
00:46:34.160
It's interesting to think about how different are the representations of those higher dimensional
link |
00:46:38.960
concepts, higher level concepts, how different the representation there is in terms of reference
link |
00:46:45.680
frames versus spatial. But the interesting thing, it's a different application, but it's the exact
link |
00:46:52.080
same mechanism. But isn't there some aspect to higher level concepts that they seem to be
link |
00:46:59.680
hierarchical? Like they just seem to integrate a lot of information into them. So is our physical
link |
00:47:05.600
objects. So take this water bottle. I'm not particular to this brand, but this is a Fiji
link |
00:47:12.160
water bottle and it has a logo on it. I use this example in my book, our company's coffee cup has
link |
00:47:18.880
a logo on it. But this object is hierarchical. It's got like a cylinder and a cap, but then it
link |
00:47:25.520
has this logo on it and the logo has a word, the word has letters, the letters have different
link |
00:47:29.360
features. And so I don't have to remember, I don't have to think about this. So I say,
link |
00:47:33.840
oh, there's a Fiji logo on this water bottle. I don't have to go through and say, oh, what is the
link |
00:47:37.920
Fiji logo? It's the F and I and the J and I, and there's a hibiscus flower. And, oh, it has the
link |
00:47:43.920
statement on it. I don't have to do that. I just incorporate all of that in some sort of hierarchical
link |
00:47:47.760
representation. I say, put this logo on this water bottle. And then the logo has a word
link |
00:47:55.040
and the word has letters, all hierarchical. All that stuff is big. It's amazing that the
link |
00:47:59.520
brain instantly just does all that. The idea that there's water, it's liquid and the idea that you
link |
00:48:04.960
can drink it when you're thirsty, the idea that there's brands and then there's like all of that
link |
00:48:11.920
information is instantly like built into the whole thing once you perceive it. So I wanted to
link |
00:48:17.120
get back to your point about hierarchical representation. The world itself is hierarchical,
link |
00:48:21.680
right? And I can take this microphone in front of me. I know inside there's going to be some
link |
00:48:25.200
electronics. I know there's going to be some wires and I know there's going to be a little
link |
00:48:28.080
diaphragm that moves back and forth. I don't see that, but I know it. So everything in the world
link |
00:48:33.920
is hierarchical. You just go into a room. It's composed of other components. The kitchen has a
link |
00:48:37.840
refrigerator. The refrigerator has a door. The door has a hinge. The hinge has screws and pin.
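One loose way to picture that hierarchy in code, assuming nothing about how the cortex actually stores it: each object holds sub-objects at locations in its own reference frame, and each sub-object carries its own frame in turn. The object names and locations below are invented for illustration.

```python
# Sketch of hierarchical composition: each object stores sub-objects at
# locations in its own reference frame, and each sub-object has its own frame.
# All names and locations are invented for illustration.

letters = {"F": (0, 0), "I": (1, 0), "J": (2, 0), "I2": (3, 0)}
logo    = {"word_fiji":       {"at": (0, 1), "parts": letters},
           "hibiscus_flower": {"at": (0, 0), "parts": {}}}
bottle  = {"cylinder": {"at": (0, 0), "parts": {}},
           "cap":      {"at": (0, 9), "parts": {}},
           "logo":     {"at": (1, 4), "parts": logo}}

def describe(obj, depth=0):
    """Walk the hierarchy, printing each component and where it sits in its parent."""
    for name, part in obj.items():
        if isinstance(part, dict):
            print("  " * depth + f"{name} at {part['at']}")
            describe(part["parts"], depth + 1)
        else:
            print("  " * depth + f"{name} at {part}")

describe(bottle)   # prints bottle -> logo -> word -> letters, each with its location
```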
link |
00:48:43.200
So anyway, the modeling system that exists in every cortical column learns the hierarchical
link |
00:48:49.360
structure of objects. So it's a very sophisticated modeling system in this grain of rice. It's hard
link |
00:48:54.720
to imagine, but this grain of rice can do really sophisticated things. It's got 100,000 neurons in
link |
00:48:58.800
it. It's very sophisticated. So that same mechanism that can model a water bottle or a coffee cup
link |
00:49:07.440
can model conceptual objects as well. That's the beauty of this discovery that this guy,
link |
00:49:13.600
Vernon Mountcastle, made many, many years ago, which is that there's a single cortical algorithm
link |
00:49:18.720
underlying everything we're doing. So common sense concepts and higher
link |
00:49:23.840
level concepts are all represented in the same way?
link |
00:49:26.720
They're set in the same mechanisms, yeah. It's a little bit like computers. All computers are
link |
00:49:31.520
universal Turing machines. Even the little teeny one that's in my toaster and the big one that's
link |
00:49:37.520
running some cloud server someplace. They're all running on the same principle. They can
link |
00:49:41.680
be applied to different things. So the brain is all built on the same principle. It's all about
link |
00:49:46.080
learning these structured models using movement and reference frames. And it can be applied to
link |
00:49:53.120
something as simple as a water bottle and a coffee cup. And it can be applied to thinking
link |
00:49:56.400
what's the future of humanity and why do you have a hedgehog on your desk? I don't know.
link |
00:50:02.800
Nobody knows. Well, I think it's a hedgehog. That's right. It's a hedgehog in the fog.
link |
00:50:09.280
It's a Russian reference. Does it give you any inclination or hope about how difficult
link |
00:50:16.240
it is to engineer common sense reasoning? So how complicated is this whole process?
link |
00:50:21.840
So looking at the brain, is this a marvel of engineering or is it pretty dumb stuff
link |
00:50:28.640
stacked on top of each other over and over? Can it be both? Can it be both, right?
link |
00:50:35.600
I don't know if it can be both because if it's an incredible engineering job, that means it's
link |
00:50:43.040
so evolution did a lot of work. Yeah, but then it just copied that.
link |
00:50:48.320
Yeah. Right. So as I said earlier, figuring out how to model something like a space is really hard
link |
00:50:55.760
and evolution had to go through a lot of tricks. And these cells I was talking about,
link |
00:50:59.760
these grid cells and place cells, they're really complicated. This is not simple stuff.
link |
00:51:03.040
This neural tissue works on these really unexpected, weird mechanisms.
link |
00:51:08.720
But it did it. It figured it out. But now you could just make lots of copies of it.
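A toy sketch of the "lots of copies of the same unit" idea: in the thousand brains picture, many identical column models each look at a different sensed patch and they reach a consensus, something like voting, about what object is out there. The sketch below is only a loose rendering of that; the patch-to-object table and the simple tally are invented for illustration and are not Numenta's actual software.

```python
# Sketch of "lots of copies of the same unit": every column runs the same code
# on a different sensed patch, and the columns combine their guesses.
# The patch-to-object table is invented purely for illustration.

from collections import Counter

COLUMN_KNOWLEDGE = {                 # what any single column infers from one patch
    "curved handle": {"coffee cup"},
    "rounded rim":   {"coffee cup", "water bottle"},
    "plastic cap":   {"water bottle"},
}

def column_guess(patch):
    """One column's guess logic; every column is an identical copy of this."""
    return COLUMN_KNOWLEDGE.get(patch, set())

def vote(patches):
    """Combine the guesses of many identical columns by simple voting."""
    tally = Counter()
    for patch in patches:
        for candidate in column_guess(patch):
            tally[candidate] += 1
    return tally.most_common(1)[0][0] if tally else None

print(vote(["curved handle", "rounded rim"]))   # -> coffee cup
```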
link |
00:51:13.120
But then, yeah, so it's a very interesting idea that there are a lot of copies
link |
00:51:18.320
of a basic mini brain. But the question is how difficult it is to find that mini brain
link |
00:51:25.360
that you can copy and paste effectively. Today, we know enough to build this.
link |
00:51:33.920
I'm sitting here and I know the steps we have to go through. There's still some engineering problems
link |
00:51:37.920
to solve, but we know enough. And this is not like, oh, this is an interesting idea. We have
link |
00:51:43.760
to go think about it for another few decades. No, we actually understand it pretty well in details.
link |
00:51:48.160
So not all the details, but most of them. So it's complicated, but it is an engineering problem.
link |
00:51:55.360
So in my company, we are working on that. We basically have a roadmap of how to do this.
link |
00:52:01.360
It's not going to take decades. It's a matter of a few years optimistically,
link |
00:52:06.880
but I think that's possible. It's, you know, complex things. If you understand them,
link |
00:52:11.840
you can build them. So in which domain do you think it's best to build them?
link |
00:52:17.200
Are we talking about robotics, like entities that operate in the physical world that are
link |
00:52:23.440
able to interact with that world? Are we talking about entities that operate in the digital world?
link |
00:52:27.920
Are we talking about something more like more specific, like it's done in the machine learning
link |
00:52:33.840
community where you look at natural language or computer vision? Where do you think is easiest?
link |
00:52:41.120
It's the first, it's the first two more than the third one, I would say.
link |
00:52:46.560
Again, let's just use computers as an analogy. The pioneers in computing, people like John
link |
00:52:52.320
von Neumann and Alan Turing, they created this thing, you know, we now call the universal
link |
00:52:56.800
Turing machine, which is a computer, right? Did they know how it was going to be applied?
link |
00:53:00.800
Where it was going to be used? Could they envision any of the future? No. They just said,
link |
00:53:04.960
this is like a really interesting computational idea about algorithms and how you can implement
link |
00:53:11.120
them in a machine. And we're doing something similar to that today. Like we are building this
link |
00:53:18.400
sort of universal learning principle that can be applied to many, many different things.
link |
00:53:24.480
But the robotics piece of that, the interactive...
link |
00:53:27.600
Okay. All right. Let's be just specific. You can think of this cortical column as
link |
00:53:31.360
what we call a sensory motor learning system. It has the idea that there's a sensor
link |
00:53:35.120
and then it's moving. That sensor can be physical. It could be like my finger
link |
00:53:39.520
and it's moving in the world. It could be like my eye and it's physically moving.
link |
00:53:43.440
It can also be virtual. So, it could be, an example would be, I could have a system that
link |
00:53:50.160
lives in the internet that actually samples information on the internet and moves by
link |
00:53:55.360
following links. That's a sensory motor system. Something that echoes the process of a finger
link |
00:54:02.240
moving along a cortical... But in a very, very loose sense. It's like,
link |
00:54:06.640
again, learning is inherently about discovering the structure of the world, and to discover the
link |
00:54:10.720
structure of the world, you have to move through the world. Even if it's a virtual world, even if
link |
00:54:14.720
it's a conceptual world, you have to move through it. It doesn't exist in one... It has some structure
link |
00:54:20.480
to it. So, here are a couple of predictions getting at what you're talking about.
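(A toy sketch of the virtual sensory-motor loop just described, where "sensing" is reading a page and "moving" is following a link; the pages and links below are a made-up stand-in for the web, not a real crawler.)

```python
# Toy sketch of a virtual sensory-motor loop: "sensing" is reading a page,
# "moving" is following a link. The pages below are a made-up stand-in for
# the web; a real system would fetch and parse actual documents.

pages = {
    "home":    {"text": "welcome",          "links": ["physics", "history"]},
    "physics": {"text": "atoms and fields", "links": ["history"]},
    "history": {"text": "past events",      "links": ["home"]},
}

def explore(start, steps):
    """Sense the current page, then 'move' by following its first link."""
    learned = {}                                # what the agent has learned so far
    location = start
    for _ in range(steps):
        page = pages[location]
        learned[location] = page["text"]        # the sensory sample at this location
        if not page["links"]:
            break
        location = page["links"][0]             # the movement: follow a link
    return learned

print(explore("home", 3))   # visits home -> physics -> history
```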
link |
00:54:27.040
In humans, the same algorithm does robotics. It moves my arms, my eyes, my body.
link |
00:54:34.560
And so, in the future, to me, robotics and AI will merge. They're not going to be separate fields
link |
00:54:40.000
because the algorithms for really controlling robots are going to be the same algorithms we
link |
00:54:45.200
have in our brain, these sensory motor algorithms. Today, we're not there, but I think that's going
link |
00:54:50.000
to happen. But not all AI systems will have to be robotics. You can have systems that have very
link |
00:54:58.880
different types of embodiments. Some will have physical movements, some will not have physical
link |
00:55:02.480
movements. It's a very generic learning system. Again, it's like computers. The Turing machine,
link |
00:55:08.560
it doesn't say how it's supposed to be implemented, it doesn't tell you how big it is,
link |
00:55:11.680
it doesn't tell you what you can apply it to, but it's a computational principle.
link |
00:55:15.440
The cortical column equivalent is a computational principle about learning. It's about how you
link |
00:55:20.640
learn and it can be applied to a gazillion things. I think this impact of AI is going to be as large,
link |
00:55:27.440
if not larger, than computing has been in the last century, by far, because it's getting at
link |
00:55:33.200
a fundamental thing. It's not a vision system or a learning system. It's not a vision system or
link |
00:55:37.600
a hearing system. It is a learning system. It's a fundamental principle, how you learn the structure
link |
00:55:41.600
in the world, how you can gain knowledge and be intelligent. That's what the thousand brains theory says
link |
00:55:46.400
is going on. We have a particular implementation in our head, but it doesn't have to be like that
link |
00:55:49.680
at all. Do you think there's going to be some kind of impact? Okay, let me ask it another way.
link |
00:55:56.800
What do increasingly intelligent AI systems do with us humans in the following way? How hard is
link |
00:56:05.360
the human in the loop problem? How hard is it to interact? The finger on the coffee cup equivalent
link |
00:56:13.040
of having a conversation with a human being. How hard is it to fit into our little human world?
link |
00:56:20.880
I think it's a lot of engineering problems. I don't think it's a fundamental problem.
link |
00:56:25.200
I could ask you the same question. How hard is it for computers to fit into a human world?
link |
00:56:28.880
Right. That's essentially what I'm asking. How elitist are we as humans? We try to keep out
link |
00:56:40.720
systems. I don't know. I'm not sure that's the right question. Let's look at computers as an
link |
00:56:48.240
analogy. Computers are a million times faster than us. They do things we can't understand.
link |
00:56:52.480
Most people have no idea what's going on when they use computers. How do we integrate them
link |
00:56:57.120
in our society? Well, we don't think of them as their own entity. They're not living things.
link |
00:57:04.160
We don't afford them rights. We rely on them. Our survival as seven billion people or something
link |
00:57:12.800
like that is relying on computers now. Don't you think that's a fundamental problem
link |
00:57:18.320
that we see them as something we don't give rights to?
link |
00:57:22.480
Computers? Yeah, computers. Robots,
link |
00:57:25.600
computers, intelligence systems. It feels like for them to operate successfully,
link |
00:57:29.920
they would need to have a lot of the elements that we would start having to think about.
link |
00:57:37.760
Should this entity have rights? I don't think so. I think
link |
00:57:42.560
it's tempting to think that way. First of all, hardly anyone thinks that for computers today.
link |
00:57:47.680
No one says, oh, this thing needs a right. I shouldn't be able to turn it off. If I throw it
link |
00:57:52.320
in the trash can and hit it with a sledgehammer, that might be a criminal act. No one thinks that.
link |
00:57:59.360
Now we think about intelligent machines, which is where you're going.
link |
00:58:05.600
All of a sudden, you're like, well, now we can't do that. I think the basic problem we have here
link |
00:58:10.080
is that people think intelligent machines will be like us. They're going to have the same emotions
link |
00:58:14.000
as we do, the same feelings as we do. What if I can build an intelligent machine that absolutely
link |
00:58:19.120
could care less about whether it was on or off or destroyed or not? It just doesn't care. It's
link |
00:58:23.040
just like a map. It's just a modeling system. There's no desires to live. Nothing.
link |
00:58:28.400
Is it possible to create a system that can model the world deeply and not care
link |
00:58:35.280
about whether it lives or dies? Absolutely. No question about it.
link |
00:58:38.640
To me, that's not 100% obvious. It's obvious to me. We can debate it if we want.
link |
00:58:43.920
Where does your desire to live come from? It's an old evolutionary design. We could argue,
link |
00:58:52.560
does it really matter if we live or not? Objectively, no. We're all going to die eventually.
link |
00:59:00.720
Evolution makes us want to live. Evolution makes us want to fight to live. Evolution makes us want
link |
00:59:05.840
to care and love one another and to care for our children and our relatives and our family and so
link |
00:59:11.840
on. Those are all good things. They come about not because we're smart, because we're animals
link |
00:59:18.880
that grew up. The hummingbird in my backyard cares about its offspring. Every living thing
link |
00:59:25.280
in some sense cares about surviving. When we talk about creating intelligent machines,
link |
00:59:30.720
we're not creating life. We're not creating evolving creatures. We're not creating living
link |
00:59:35.360
things. We're just creating a machine that can learn really sophisticated stuff. That machine,
link |
00:59:40.400
it may even be able to talk to us. It's not going to have a desire to live unless somehow we put it
link |
00:59:47.120
into that system. Well, there's learning, right? The thing is... But you don't learn to want to
link |
00:59:52.720
live. It's built into you. It's part of your DNA. People like Ernest Becker argue,
link |
00:59:59.600
there's the fact of finiteness of life. The way we think about it is something we learned,
link |
01:00:06.000
perhaps. Okay. Yeah. Some people decide they don't want to live. Some people decide the desire to
link |
01:00:13.120
live is built in DNA, right? But I think what I'm trying to get to is in order to accomplish goals,
link |
01:00:18.880
it's useful to have the urgency of mortality. It's what the Stoics talked about,
link |
01:00:23.200
is meditating on your mortality. It might be very useful to know that you will die, to have the urgency
link |
01:00:31.600
of death and to realize that to conceive yourself as an entity that operates in this world that
link |
01:00:38.400
eventually will no longer be a part of this world and actually conceive of yourself as a conscious
link |
01:00:43.280
entity might be very useful for you to be a system that makes sense of the world. Otherwise,
link |
01:00:49.760
you might get lazy. Well, okay. We're going to build these machines, right? So we're talking
link |
01:00:55.360
about building AIs. But we're building the equivalent of the cortical columns.
link |
01:01:03.360
The neocortex. The neocortex. And the question is, where do they arrive at? Because we're not
link |
01:01:11.120
hard coding everything in. Well, in terms of if you build the neocortex equivalent,
link |
01:01:17.360
it will not have any of these desires or emotional states. Now, you can argue that
link |
01:01:22.640
that neocortex won't be useful unless I give it some agency, unless I give it some desire,
link |
01:01:28.240
unless I give it some motivation. Otherwise, you'll be just lazy and do nothing, right?
link |
01:01:31.600
You could argue that. But on its own, it's not going to do those things. It's just not going
link |
01:01:37.040
to sit there and say, I understand the world. Therefore, I care to live. No, it's not going
link |
01:01:41.120
to do that. It's just going to say, I understand the world. Why is that obvious to you? Do you think
link |
01:01:46.240
it's possible? Okay, let me ask it this way. Do you think it's possible it will at least assign to
link |
01:01:52.960
itself agency and perceive itself in this world as being a conscious entity as a useful way to
link |
01:02:04.240
operate in the world and to make sense of the world? I think an intelligent machine can be
link |
01:02:08.640
conscious, but that does not, again, imply any of these desires and goals that you're worried about.
link |
01:02:18.160
We can talk about what it means for a machine to be conscious.
link |
01:02:20.560
By the way, not worry about, but get excited about. It's not necessary that we should worry
link |
01:02:24.640
about it. I think there's a legitimate problem or not problem, a question asked,
link |
01:02:29.200
if you build this modeling system, what's it going to model? What's its desire? What's its
link |
01:02:35.600
goal? What are we applying it to? That's an interesting question. One thing, and it depends
link |
01:02:42.720
on the application, it's not something that's inherent to the modeling system. It's something
link |
01:02:46.800
we apply to the modeling system in a particular way. If I wanted to make a really smart car,
link |
01:02:52.320
it would have to know about driving and cars and what's important in driving and cars.
link |
01:02:58.320
It's not going to figure that on its own. It's not going to sit there and say, I've understood
link |
01:03:01.760
the world and I've decided, no, no, no, no, we're going to have to tell it. We're going to have to
link |
01:03:06.000
say, so I imagine I make this car really smart. It learns about your driving habits. It learns
link |
01:03:10.880
about the world. Is it one day going to wake up and say, you know what? I'm tired of driving
link |
01:03:17.760
and doing what you want. I think I have better ideas about how to spend my time.
link |
01:03:22.080
Okay. No, it's not going to do that. Well, part of me is playing a little bit of devil's advocate,
link |
01:03:26.160
but part of me is also trying to think through this because I've studied cars quite a bit and
link |
01:03:32.560
I studied pedestrians and cyclists quite a bit. And there's part of me that thinks
link |
01:03:38.560
that there needs to be more intelligence than we realize in order to drive successfully.
link |
01:03:46.160
That game theory of human interaction seems to require some deep understanding of human nature
link |
01:03:54.720
that, okay. When a pedestrian crosses the street, there's some sense. They look at a car usually,
link |
01:04:04.880
and then they look away. There's some sense in which they say, I believe that you're not going
link |
01:04:10.960
to murder me. You don't have the guts to murder me. This is the little dance of pedestrian car
link |
01:04:16.320
interaction is saying, I'm going to look away and I'm going to put my life in your hands because
link |
01:04:22.960
I think you're human. You're not going to kill me. And then the car in order to successfully
link |
01:04:28.240
operate in like Manhattan streets has to say, no, no, no, no. I am going to kill you like a little
link |
01:04:34.400
bit. There's a little bit of this weird inkling of mutual murder. And that's a dance and somehow
link |
01:04:40.480
successfully operate through that. Do you think you were born of that? Did you learn that social
link |
01:04:44.160
interaction? I think it might have a lot of the same elements that you're talking about,
link |
01:04:50.800
which is we're leveraging things we were born with and applying them in that context.
link |
01:04:57.600
All right. I would have said that that kind of interaction is learned because people in different
link |
01:05:03.440
cultures have different interactions like that. If you cross the street in different cities and
link |
01:05:06.880
different parts of the world, they have different ways of interacting. I would say that's learned.
link |
01:05:10.400
And I would say an intelligent system can learn that too, but that does not lead. And the intelligent
link |
01:05:15.360
system can understand humans. It could understand that just like I can study an animal and learn
link |
01:05:24.320
something about that animal. I could study apes and learn something about their culture and so on.
link |
01:05:28.640
I don't have to be an ape to know that. I may not be completely, but I can understand something.
link |
01:05:34.160
So intelligent machine can model that. That's just part of the world. It's just part of the
link |
01:05:37.360
interactions. The question we're trying to get at, will the intelligent machine have its own personal
link |
01:05:42.640
agency that's beyond what we assign to it or its own personal goals or will it evolve and create
link |
01:05:49.440
these things? My confidence comes from understanding the mechanisms I'm talking about creating.
link |
01:05:55.920
This is not hand wavy stuff. It's down in the details. I'm going to build it. And I know what
link |
01:06:00.880
it's going to look like. And I know how it's going to behave. I know the kind of things
link |
01:06:03.760
it could do and the kind of things it can't do. Just like when I build a computer, I know it's
link |
01:06:08.000
not going to, on its own, decide to put another register inside of it. It can't do that. No way.
link |
01:06:13.440
No matter what your software does, it can't add a register to the computer.
link |
01:06:17.440
So in this way, when we build AI systems, we have to make choices about how we embed them.
link |
01:06:26.560
So I talk about this in the book. I said an intelligent system is not just the neocortex
link |
01:06:30.880
equivalent. You have to have that. But it has to have some kind of embodiment, physical or virtual.
link |
01:06:36.800
It has to have some sort of goals. It has to have some sort of ideas about dangers,
link |
01:06:41.040
about things it shouldn't do. We build in safeguards into systems. We have them in our
link |
01:06:47.360
bodies. We put them into cars. My car follows my directions until it sees I'm about to hit
link |
01:06:53.440
something and it ignores my directions and puts the brakes on. So we can build those things in.
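A minimal sketch of that kind of built-in safeguard, assuming a toy control interface: the learned part proposes an action, and a hand-written safety rule can veto it, the way the car ignores the driver and brakes. The function names and thresholds are illustrative, not any real vehicle stack.

```python
# Sketch of a built-in safeguard: a learned controller proposes an action and
# a hand-written safety layer can override it, like a car that ignores the
# driver and brakes. Function names and thresholds are illustrative only.

def learned_controller(driver_command):
    """Stand-in for the intelligent part: normally it just follows the driver."""
    return dict(driver_command)

def safety_override(action, distance_to_obstacle_m):
    """Hand-built rule that ignores the command when a collision looks imminent."""
    if distance_to_obstacle_m < 5.0 and action.get("accelerate", 0.0) > 0.0:
        return {"accelerate": 0.0, "brake": 1.0}   # ignore directions, hit the brakes
    return action

command = {"accelerate": 0.4, "brake": 0.0}
action = safety_override(learned_controller(command), distance_to_obstacle_m=2.0)
print(action)   # -> {'accelerate': 0.0, 'brake': 1.0}
```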
link |
01:06:58.240
So that's a very interesting problem, how to build those in. I think my differing opinion about the
link |
01:07:06.480
risks of AI for most people is that people assume that somehow those things will appear
link |
01:07:11.440
automatically and evolve. And intelligence itself begets that stuff or requires it.
link |
01:07:17.600
But it doesn't. Intelligence of the neocortex equivalent doesn't require this. The neocortex
link |
01:07:21.120
equivalent just says, I'm a learning system. Tell me what you want me to learn and ask me questions
link |
01:07:26.880
and I'll tell you the answers. And that, again, it's again like a map. A map has no intent about
link |
01:07:33.920
things, but you can use it to solve problems. Okay. So the building, engineering the neocortex
link |
01:07:41.920
in itself is just creating an intelligent prediction system.
link |
01:07:45.840
Modeling system. Sorry, modeling system. You can use it to then make predictions.
link |
01:07:52.480
But you can also put it inside a thing that's actually acting in this world.
link |
01:07:56.800
You have to put it inside something. Again, think of the map analogy, right? A map on its own doesn't
link |
01:08:02.160
do anything. It's just inert. It can learn, but it's just inert. So we have to embed it somehow
link |
01:08:07.920
in something to do something. So what's your intuition here? You had a conversation with
link |
01:08:13.360
Sam Harris recently that was sort of, you've had a bit of a disagreement and you're sticking on
link |
01:08:20.320
this point. Elon Musk and Stuart Russell kind of have us worried about existential threats of AI.
link |
01:08:29.520
What's your intuition? Why, if we engineer increasingly intelligent neocortex type of system
link |
01:08:36.720
in the computer, why that shouldn't be a thing that we...
link |
01:08:40.240
It was interesting to use the word intuition and Sam Harris used the word intuition too.
link |
01:08:44.240
And when he used that word, intuition, I immediately stopped and said,
link |
01:08:47.840
oh, that's the crux of the problem. He's using intuition. I'm not speaking about my intuition.
link |
01:08:52.960
I'm speaking about something I understand, something I'm going to build, something I am
link |
01:08:56.080
building, something I understand completely, or at least well enough to know. I'm not guessing;
link |
01:09:01.840
I know what this thing's going to do. And I think most people who are worried, they have trouble
link |
01:09:08.160
separating out... They don't have the knowledge or the understanding about what is intelligence,
link |
01:09:13.280
how's it manifest in the brain, how's it separate from these other functions in the brain.
link |
01:09:17.280
And so they imagine it's going to be human like or animal like. It's going to have the same sort of
link |
01:09:21.680
drives and emotions we have, but there's no reason for that. That's just because there's an unknown.
link |
01:09:27.680
If the unknown is like, oh my God, I don't know what this is going to do. We have to be careful.
link |
01:09:31.520
It could be like us, but really smarter. I'm saying, no, it won't be like us. It'll be really
link |
01:09:35.680
smarter, but it won't be like us at all. But I'm coming from that, not because I'm just guessing,
link |
01:09:42.080
I'm not using intuition. I'm basing it on like, okay, I understand this thing works. This is what
link |
01:09:46.640
it does. Does that make sense to you? Okay. But to push back, so I also disagree with the intuitions that
link |
01:09:54.400
Sam has, but I also disagree with what you just said, which, you know, what's a good analogy. So
link |
01:10:02.080
if you look at the Twitter algorithm in the early days, just recommender systems, you can understand
link |
01:10:08.720
how recommender systems work. What you can't understand in the early days is when you apply
link |
01:10:14.640
that recommender system at scale to thousands and millions of people, how that can change societies.
link |
01:10:20.400
Yeah. So the question is, yes, you're just saying this is how an engineered neocortex works,
link |
01:10:27.840
but the, like when you have a very useful, uh, TikTok type of service that goes viral when your
link |
01:10:35.040
neocortex goes viral and then millions of people start using it, can that destroy the world?
link |
01:10:40.160
No. Uh, well, first of all, let me step back. One thing I want to say is that, um, AI is a dangerous
link |
01:10:44.880
technology. I don't, I'm not denying that. All technology is dangerous. Well, and AI,
link |
01:10:48.880
maybe particularly so. Okay. So, um, am I worried about it? Yeah, I'm totally worried about it.
link |
01:10:54.400
The thing where the narrow component we're talking about now is the existential risk of AI, right?
link |
01:11:00.320
Yeah. So I want to make that distinction because I think AI can be applied poorly. It can be applied
link |
01:11:05.360
in ways that, you know, people are going to understand the consequences of it. Um, these are
link |
01:11:11.200
all potentially very bad things, but they're not the AI system creating this existential risk on
link |
01:11:18.400
its own. And that's the only place that I disagree with other people. Right. So I, I think the
link |
01:11:23.440
existential risk thing is, um, humans are really damn good at surviving. So to kill off the human
link |
01:11:29.360
race, it'd be very, very difficult. Yes, but you can even, I'll go further. I don't think AI systems
link |
01:11:36.000
are ever going to try to, I don't think AI systems are ever going to like say, I'm going to ignore
link |
01:11:40.720
you. I'm going to do what I think is best. Um, I don't think that's going to happen, at least not
link |
01:11:46.480
in the way I'm talking about it. So you, the Twitter recommendation algorithm is an interesting
link |
01:11:52.720
example. Let's, let's use computers as an analogy again, right? I build a computer. It's a universal
link |
01:11:59.600
computing machine. I can't predict what people are going to use it for. They can build all kinds of
link |
01:12:03.440
things. They can, they can even create computer viruses. It's, you know, all kinds of stuff. So
link |
01:12:09.040
there's some unknown about its utility and about where it's going to go. But on the other hand,
link |
01:12:13.360
I pointed out that once I build a computer, it's not going to fundamentally change how it computes.
link |
01:12:18.960
It's like, I used the example of a register, which is an internal part of a computer. Um, you
link |
01:12:23.520
know, I said it can't add a register on its own, because computers don't evolve. They don't replicate,
link |
01:12:27.600
they don't evolve. They don't, you know, the physical manifestation of the computer itself
link |
01:12:31.120
is not going to change. There are certain things it can't do, right? So we can break this into things
link |
01:12:36.320
that are possible to happen that we can't predict, and things that are just impossible to happen.
link |
01:12:40.400
Unless we go out of our way to make them happen, they're not going to happen unless somebody makes
link |
01:12:44.320
them happen. Yeah. So there's, there's a bunch of things to say. One is the physical aspect,
link |
01:12:49.120
which you're absolutely right. We have to build a thing for it to operate in the physical world
link |
01:12:54.640
and you can just stop building them. Uh, you know, the moment they're not doing the thing you want
link |
01:13:01.280
them to do, or just change the design. The question is, I mean,
link |
01:13:05.760
uh, it's possible in the physical world, probably longer term, that you automate the building.
link |
01:13:10.640
It makes, it makes a lot of sense to automate the building. There's a lot of factories that
link |
01:13:14.000
are doing more and more and more automation to go from raw resources to the final product.
link |
01:13:19.360
It's possible to imagine. It's obviously much more efficient to create a factory that's
link |
01:13:25.040
creating robots that do something, uh, you know, that do something extremely useful for society.
link |
01:13:30.880
It could be a personal assistant. It could be, uh, it could be your toaster, but a
link |
01:13:35.840
toaster with a much deeper knowledge of your culinary preferences. Yeah. And that could,
link |
01:13:41.680
uh, I think now you've hit on the right thing. The real thing we need to be worried about is
link |
01:13:46.000
self replication. Right. That is the thing, whether in the physical world or even the virtual
link |
01:13:51.440
world, because self replication is dangerous. You're probably more likely to be
link |
01:13:56.560
killed by a virus, you know, or a human-engineered virus. The technology is getting to the point
link |
01:14:01.760
where, you know, not anybody, but a lot of people
link |
01:14:05.680
could create a human engineered virus that could wipe out humanity. That is really dangerous. No
link |
01:14:11.360
intelligence required, just self replication. So, um, so we need to be careful about that.
link |
01:14:18.480
So when I think about, you know, AI, I'm not thinking about robots, building robots. Don't
link |
01:14:24.240
do that. Don't build a, you know... Well, that's because you're interested in creating
link |
01:14:28.320
intelligence. It seems like self replication is a good way to make a lot of money. Well,
link |
01:14:35.360
fine. But so is, you know, maybe editing viruses is a good way too. I don't know. The point is,
link |
01:14:41.120
if as a society, when we want to look at existential risks, the existential risks we face
link |
01:14:46.880
that we can control almost all revolve around self replication. Yes. The question is, I don't see a
link |
01:14:54.880
good, uh, way to make a lot of money by engineering viruses and deploying them on the world. There
link |
01:15:00.240
could be, there could be applications that are useful, but let's separate out, let's separate out.
link |
01:15:04.880
I mean, you don't need to, you only need some, you know, terrorists who wants to do it. Cause
link |
01:15:08.000
it doesn't take a lot of money to make viruses. Um, let's just separate out what's risky and what's
link |
01:15:13.520
not risky. I'm arguing that the intelligence side of this equation is not risky. It's not risky at
link |
01:15:18.560
all. It's the self replication side of the equation that's risky. And I'm arguing that
link |
01:15:23.520
it is risky. I'm not dismissing that. I'm scared as hell. It's like the paperclip
link |
01:15:28.880
maximizer thing. Yeah. Those are often like talked about in the same conversation.
link |
01:15:35.200
Um, I think you're right. Like creating ultra intelligent, super intelligent systems
link |
01:15:42.000
is not necessarily coupled with arbitrarily self replicating systems. Yeah. And
link |
01:15:47.600
you don't get evolution unless you're self replicating. Yeah. And so I think that's the gist
link |
01:15:52.560
of this argument that people have trouble separating those two out. They just think,
link |
01:15:56.720
Oh yeah, intelligence looks like us. And look how, look at the damage we've done to this planet,
link |
01:16:00.960
like how we've, you know, destroyed all these other species. Yeah. Well we replicate,
link |
01:16:04.640
which is why there are 8 billion of us, or 7 billion of us, now. So, um, I think the idea is that the,
link |
01:16:10.400
the more intelligent the systems we're able to build, the more tempting it becomes from a capitalist
link |
01:16:17.120
perspective of creating products, the more tempting it becomes to create self, uh, reproducing
link |
01:16:21.920
systems. All right. So let's say that's true. So does that mean we don't build intelligent systems?
link |
01:16:26.720
No, that means we regulate, we, we understand the risks. Uh, we regulate them. Uh, you know,
link |
01:16:33.760
look, there's a lot of things we could do as society, which have some sort of financial
link |
01:16:37.200
benefit to someone, which could do a lot of harm. And we have to learn how to regulate those things.
link |
01:16:42.560
We have to learn how to deal with those things. I will argue this. I would say the opposite. Like I
link |
01:16:46.400
would say having intelligent machines at our disposal will actually help us in the end more,
link |
01:16:52.000
because it'll help us understand these risks better. It'll help us mitigate these risks
link |
01:16:55.040
better. It might be ways of saying, oh, well, how do we solve climate change problems? You know,
link |
01:16:59.040
how do we do this? Or how do we do that? Um, that just like computers are dangerous in the hands of
link |
01:17:05.600
the wrong people, but they've been so great for so many other things. We live with those dangers.
link |
01:17:09.840
And I think we have to do the same with intelligent machines. We just, but we have to be
link |
01:17:13.520
constantly vigilant about this idea of bad actors doing bad things with them. And,
link |
01:17:19.360
um, don't ever, ever create a self replicating system. Um, uh, and, and by the way, I don't even
link |
01:17:25.440
know if you could create a self replicating system that uses a factory. That's really dangerous.
link |
01:17:30.320
You know, nature's way of self replicating is so amazing. Um, you know, it doesn't require
link |
01:17:36.000
anything. It just, you know, needs the thing and resources and it goes, right? Um, if I said to
link |
01:17:41.680
you, you know what we have to build, uh, our goal is to build a factory that builds
link |
01:17:46.880
new factories, and it has to have an end-to-end supply chain. It has to find the resources, get the
link |
01:17:54.000
energy. I mean, that's really hard. It's, you know, no one's doing that in the next, you know,
link |
01:18:00.000
a hundred years. I've been extremely impressed by the efforts of Elon Musk and Tesla to try to do
link |
01:18:06.400
exactly that. Not, not from raw resource. Well, he actually, I think states the goal is to go from
link |
01:18:12.720
raw resource to the, uh, the final car in one factory. Yeah. That's the main goal. Of course,
link |
01:18:19.440
it's not currently possible, but they're taking huge leaps. Well, he's not the only one to do
link |
01:18:23.600
that. This has been a goal for many industries for a long, long time. Um, it's difficult to do.
link |
01:18:28.720
Well, a lot of people, what they do instead is they have like a million suppliers and then
link |
01:18:34.480
they all co-locate them and they tie the systems together.
link |
01:18:40.480
It's a fundamental, I think that's, that also is not getting at the issue I was just talking about,
link |
01:18:45.840
um, which is self replication. It's, um, I mean, self replication means there's no
link |
01:18:53.840
entity involved other than the entity that's replicating. Um, right. And so if there are
link |
01:18:58.800
humans in this, in the loop, that's not really self replicating, right? It's unless somehow we're
link |
01:19:04.400
duped into doing it. But it's also, I don't necessarily
link |
01:19:11.920
agree with you because you've kind of mentioned that AI will not say no to us.
link |
01:19:16.480
I just think they will. Yeah. Yeah. So like, uh, I think it's a useful feature to build in. I'm
link |
01:19:23.520
just trying to, uh, put myself in the mind of engineers building systems that sometimes say no. You know, if you,
link |
01:19:32.480
I gave the example earlier, right? I gave the example of my car, right? My car turns the wheel
link |
01:19:38.000
and, and applies the accelerator and the brake as I say, until it decides there's something dangerous.
link |
01:19:43.760
Yes. And then it doesn't do that. Now that was something it didn't decide to do. It's something
link |
01:19:50.240
we programmed into the car. And so good. It was a good idea, right? The question again, isn't like
link |
01:19:57.600
if we create an intelligent system, will it ever ignore our commands? Of course it will. And
link |
01:20:02.640
sometimes is it going to do it because it came up, came up with its own goals that serve its purposes
link |
01:20:08.560
and it doesn't care about our purposes? No, I don't think that's going to happen.
link |
01:20:12.480
Okay. So let me ask you about these, uh, super intelligent cortical systems that we engineer
link |
01:20:16.960
and us humans, do you think, uh, with these entities operating out there in the world,
link |
01:20:24.320
what is the future most promising future look like? Is it us merging with them or is it us?
link |
01:20:33.040
Like, how do we keep us humans around when you have increasingly intelligent beings? Is it, uh,
link |
01:20:38.880
one of the dreams is to upload our minds in the digital space. So can we just
link |
01:20:42.960
give our minds to these, uh, systems so they can operate on them? Is there some kind of more
link |
01:20:48.400
interesting merger or is there more, more communication? I talked about all these
link |
01:20:52.240
scenarios and let me just walk through them. Sure. Um, the uploading the mind one. Yes. Extremely,
link |
01:21:00.560
really difficult to do. Like, like, we have no idea how to do this even remotely right now. Um,
link |
01:21:06.480
so it would be a very long way away, but I make the argument you wouldn't like the result.
link |
01:21:11.280
Um, and you wouldn't be pleased with the result. It's really not what you think it's going to be.
link |
01:21:16.080
Um, imagine I could upload your brain into a, into a computer right now. And now the computer
link |
01:21:20.000
sitting there going, Hey, I'm over here. Great. Get rid of that old bio person. I don't need them.
link |
01:21:24.160
You're still sitting here. Yeah. What are you going to do? No, no, that's not me. I'm here.
link |
01:21:28.560
Right. Are you going to feel satisfied then? Then you, but people imagine, look, I'm on my deathbed
link |
01:21:33.600
and I'm about to, you know, expire and I pushed the button and now I'm uploaded. But think about
link |
01:21:38.240
it a little differently. And, and so I don't think it's going to be a thing because people,
link |
01:21:42.640
by the time we're able to do this, if ever, because you have to replicate the entire body,
link |
01:21:47.760
not just the brain. I walk through the issues in the book. It's really substantial.
link |
01:21:52.240
Um, do you have a sense of what makes us us? Is there, is there a shortcut where you can save only
link |
01:21:59.520
a certain part, the part that makes us truly us? No, but I think that machine would feel like it's you too.
link |
01:22:04.720
Right. Right. You have two people, just like I have a child, I have a child, right? I have two
link |
01:22:08.400
daughters. They're independent people. I created them. Well, partly. Yeah. And, um, uh, I don't,
link |
01:22:16.160
just because they're somewhat like me, I don't feel like I'm them and they don't feel like they're me. So
link |
01:22:20.400
if you split apart, you have two people. So we can, if you want, come back to what makes us us,
link |
01:22:24.080
what consciousness is. We can talk about that, but we don't have like a remote consciousness.
link |
01:22:28.400
I'm not sitting there going, Oh, I'm conscious of that. You know, I mean, that system of,
link |
01:22:32.000
so let's say, let's, let's stay on our topic. One was uploading a brain. Yep. It ain't gonna happen
link |
01:22:38.480
in a hundred years, maybe a thousand, but I don't think people are going to want to do it. The
link |
01:22:44.080
merging your mind with, uh, you know, the Neuralink thing, right? Like again, really, really
link |
01:22:50.240
difficult. It's, it's one thing to make progress, to control a prosthetic arm. It's another to have
link |
01:22:54.720
like a billion or several billion, you know, things and understanding what those signals
link |
01:22:58.960
mean. Like it's the one thing that like, okay, I can learn to think some patterns to make something
link |
01:23:03.680
happen. It's quite another thing to have a system, a computer, which actually knows exactly what
link |
01:23:08.800
cells it's talking to and how it's talking to them and interacting in a way like that. Very,
link |
01:23:12.960
very difficult. We're not getting anywhere closer to that. Um, interesting. Can I, can I, uh, can
link |
01:23:18.160
I ask a question here? What, so for me, what makes that merger very difficult practically in the next
link |
01:23:24.880
10, 20, 50 years is like literally the biology side of it, which is like, it's just hard to do
link |
01:23:32.000
that kind of surgery in a safe way. But your intuition is even the machine learning part of it,
link |
01:23:38.640
where the machine has to learn what the heck it's talking to. That's even hard. I think it's even
link |
01:23:43.280
harder. And it's not, it's, it's easy to do when you're talking about hundreds of signals. It's,
link |
01:23:49.200
it's a totally different thing when you're
link |
01:23:53.840
talking about billions of signals. So you don't think it's just a raw
link |
01:23:57.440
machine learning problem? You don't think it could be learned? Well, I'm just saying,
link |
01:24:01.360
no, I think you'd have to have detailed knowledge. You'd have to know exactly what the types of
link |
01:24:05.440
neurons you're connecting to. I mean, in the brain, there's these, there are all different
link |
01:24:09.440
types of things. It's not like a neural network. It's a very complex organism system up here. We
link |
01:24:13.520
talked about the grid cells or the place cells, you know, you have to know what kind of cells
link |
01:24:16.640
you're talking to and what they're doing and how their timing works and all, all this stuff,
link |
01:24:20.640
which you can't today. There's no way of doing that. Right. But I think it's, I think it's a,
link |
01:24:24.960
I think you're right that the biological aspect, like who wants to have
link |
01:24:28.400
a surgery and have this stuff inserted in your brain, that's a problem. But even when we
link |
01:24:32.640
solve that problem, I think the information coding aspect is much worse. I think that's much
link |
01:24:38.080
worse. It's not like what they're doing today. Today. It's simple machine learning stuff
link |
01:24:42.240
because you're doing simple things. But if you want to merge your brain, like I'm thinking on
link |
01:24:46.720
the internet, I've merged my brain with the machine and we're working together, that's a totally different
link |
01:24:51.440
issue. That's interesting. I tend to think, okay, if you have a super clean signal
link |
01:24:57.760
from a bunch of neurons, even if at the start you don't know what those neurons are, I think that's much
link |
01:25:04.400
easier than getting the clean signal in the first place. I think if you think about today's machine learning,
link |
01:25:10.880
that's what you would conclude. Right. I'm thinking about what's going on in the brain
link |
01:25:14.960
and I don't reach that conclusion. So we'll have to see. Sure. But I don't think even, even then,
link |
01:25:20.080
I think this kind of a sad future. Like, you know, do I, do I have to like plug my brain
link |
01:25:26.240
into a computer? I'm still a biological organism. I assume I'm still going to die.
link |
01:25:30.000
So what have I achieved? Right. You know, what have I achieved? Oh, I disagree. We don't
link |
01:25:36.640
know what those are, but it seems like there could be a lot of different applications. It's
link |
01:25:40.320
like virtual reality is to expand your brain's capability to, to like, to read Wikipedia.
link |
01:25:47.280
Yeah. But, but fine. But, but you're still a biological organism.
link |
01:25:50.080
Yes. Yes. You know, you're still, you're still mortal. All right. So,
link |
01:25:53.280
so what are you accomplishing? You're making your life in this short period of time better. Right.
link |
01:25:58.000
Just like having the internet made our life better. Yeah. Yeah. Okay. So I think that's of,
link |
01:26:03.760
of, if I think about all the possible gains we can have here, that's a marginal one.
link |
01:26:08.080
It's an individual, Hey, I'm better, you know, I'm smarter. But you know, fine. I'm not against it.
link |
01:26:15.280
I just don't think it's earth changing. But, but this is true of the internet too.
link |
01:26:20.240
When each of us individuals are smarter, we get a chance to then share our smartness.
link |
01:26:24.800
We get smarter and smarter together as like, as a collective, this is kind of like this
link |
01:26:28.560
ant colony. Why don't I just create an intelligent machine that doesn't have any of this biological
link |
01:26:32.480
nonsense, but has all the same capabilities, everything, except don't burden it with my brain. Yeah.
link |
01:26:39.360
Right. It has a brain. It is smart. It's like my child, but it's much, much smarter than me.
link |
01:26:43.680
So I have a choice between doing some implant, doing some hybrid, weird, you know, biological
link |
01:26:48.320
thing that bleeding and all these problems and limited by my brain or creating a system,
link |
01:26:53.760
which is super smart that I can talk to. Um, that helps me understand the world that can
link |
01:26:58.240
read the internet, you know, read Wikipedia and talk to me. I guess my, the open questions there
link |
01:27:03.600
are, what does the manifestation of superintelligence look like? So like, what are we
link |
01:27:10.000
going to, you, you talked about why do I want to merge with AI? Like what, what's the actual
link |
01:27:14.880
marginal benefit here? If I, if we have a super intelligent system, how will it make our life
link |
01:27:23.680
better? So let's, let's, that's a great question, but let's break it down to little pieces. All
link |
01:27:28.240
right. On the one hand, it can make our life better in lots of simple ways. You mentioned
link |
01:27:32.400
like a care robot or something that helps me do things. It cooks. I don't know what it does. Right.
link |
01:27:36.960
Little things like that. We have super better, smarter cars. We can have, you know, better agents
link |
01:27:42.640
aids helping us in our work environment and things like that. To me, that's like the easy stuff, the
link |
01:27:47.360
simple stuff in the beginning. Um, um, and so in the same way that computers made our lives better
link |
01:27:53.200
in many, many ways, we will have those kinds of things. To me, the really exciting thing about AI
link |
01:28:00.560
is its sort of transcendent quality in terms of humanity. We're still
link |
01:28:05.760
biological organisms. We're still stuck here on earth. It's going to be hard for us to live
link |
01:28:09.760
anywhere else. Uh, I don't think you and I are going to want to live on Mars anytime soon. Um,
link |
01:28:14.960
um, and, um, and we're flawed, you know, we may end up destroying ourselves. It's totally possible.
link |
01:28:23.440
Uh, if not completely, we could destroy our civilizations. You know, let's face the fact
link |
01:28:28.320
we have issues here, but we can create intelligent machines that can help us in various ways. For
link |
01:28:33.680
example, one example I gave, and that sounds a little sci fi, but I believe this. If we really
link |
01:28:38.160
wanted to live on Mars, we'd have to have intelligent systems that go there and build
link |
01:28:42.560
the habitat for us, not humans. Humans are never going to do this. It's just too hard. Um, but could
link |
01:28:48.240
we have a thousand or 10,000, you know, engineer workers up there doing this stuff, building things,
link |
01:28:53.120
terraforming Mars? Sure. Maybe we can move Mars. But then if we want to, if we want to go around
link |
01:28:57.840
the universe, should I send my children around the universe or should I send some intelligent machine,
link |
01:29:02.400
which is like a child that represents me and understands our needs here on earth that could
link |
01:29:07.520
travel through space. Um, so it's sort of, it, in some sense, intelligence allows us to transcend
link |
01:29:13.280
the limitations of our biology. Uh, and, and don't think of it as a negative thing.
link |
01:29:19.920
In some sense, my children transcend my biology too, cause they, they live beyond me.
link |
01:29:26.000
Yeah. Um, and in part, they represent me and they also have their own knowledge and I can
link |
01:29:30.480
impart knowledge to them. So intelligent machines will be like that too, but not limited like us.
link |
01:29:34.400
I mean, but the question is, um, there's so many ways that transcendence can happen
link |
01:29:40.320
and the merger with AI and humans is one of those ways. So you said intelligent,
link |
01:29:46.960
basically beings or systems propagating throughout the universe, representing us humans.
link |
01:29:53.280
They represent us humans in the sense they represent our knowledge and our history,
link |
01:29:56.560
not us individually. Right. Right. But I mean, the question is, is it just a database
link |
01:30:04.960
with, uh, with the really damn good, uh, model of the world?
link |
01:30:09.600
It's conscious, it's conscious just like us. Okay. But just different?
link |
01:30:12.800
They're different. Uh, just like my children are different. They're like me, but they're
link |
01:30:16.560
different. Um, these are more different. I guess maybe I've already, I kind of,
link |
01:30:22.560
I take a very broad view of our life here on earth. I say, you know, why are we living here?
link |
01:30:28.320
Are we just living because we live? Is it, are we surviving because we can survive? Are we fighting
link |
01:30:32.960
just because we want to just keep going? What's the point of it? Right. So to me, the point,
link |
01:30:38.880
if I ask myself, what's the point of life, it's what transcends that ephemeral sort of biological
link |
01:30:46.000
experience. To me, this is my answer: it's the acquisition of knowledge, to understand more about
link |
01:30:53.520
the universe, uh, and to explore. And that's partly to learn more. Right. Um, I don't view it as
link |
01:31:01.920
a terrible thing. If the ultimate outcome of humanity is we create systems that are intelligent
link |
01:31:09.040
that are offspring, but they're not like us at all. And we stay, we stay here and live on earth
link |
01:31:13.680
as long as we can, which won't be forever, but as long as we can. And that would be a great
link |
01:31:20.960
thing to do. It's not a, it's not like a negative thing. Well, would, uh, you be okay then if, uh,
link |
01:31:29.760
the human species vanishes, but our knowledge is preserved and keeps being expanded by intelligence
link |
01:31:37.440
systems. I want our knowledge to be preserved and expanded. Yeah. Am I okay with humans dying? No,
link |
01:31:44.960
I don't want that to happen. But if it does happen... What if we were sitting here and we were
link |
01:31:50.400
really the last two people on earth, and we're saying, Lex, we blew it, it's all over.
link |
01:31:53.920
Right. Wouldn't I feel better if I knew that our knowledge was preserved and that we had agents
link |
01:32:00.080
that knew about that, you know, that left earth? I would want that.
link |
01:32:04.800
Mm. It's better than not having that, you know, I make the analogy of like, you know,
link |
01:32:08.240
the dinosaurs, the poor dinosaurs, they live for, you know, tens of millions of years.
link |
01:32:11.520
They raised their kids. They, you know, they, they fought to survive. They were hungry. They,
link |
01:32:15.840
they did everything we do. And then they're all gone. Yeah. Like, you know, and, and if we didn't
link |
01:32:20.960
discover their bones, nobody would ever know that they ever existed. Right. Do we want to be like
link |
01:32:27.600
that? I don't want to be like that. There's a sad aspect to it. And it's kind of, it's jarring to
link |
01:32:32.720
think about that. It's possible that a human like intelligence civilization has previously existed
link |
01:32:39.600
on earth. The reason I say this is like, it is jarring to think that we would not, if they went
link |
01:32:46.640
extinct, we wouldn't be able to find evidence of them after a sufficient amount of time. Of course,
link |
01:32:53.040
it's like, basically, if we as the human civilization
link |
01:32:58.800
destroyed ourselves now, then after a sufficient amount of time we'd find evidence of
link |
01:33:03.280
the dinosaurs but would not find evidence of humans. Yeah. That's kind of an odd thing to think about.
link |
01:33:08.640
Although I'm not sure if we have enough knowledge about species going back for billions of years,
link |
01:33:14.880
but we might be able to eliminate that possibility. But it's an interesting
link |
01:33:18.960
question. Of course, this is a similar question to, you know, there were lots of intelligent
link |
01:33:23.200
species throughout our galaxy that have all disappeared. That's super sad that they're,
link |
01:33:30.320
exactly that there may have been much more intelligent alien civilizations in our galaxy
link |
01:33:36.000
that are no longer there. Yeah. You actually talked about this, that humans might destroy
link |
01:33:42.480
ourselves and how we might preserve our knowledge and advertise that knowledge to others. Advertise
link |
01:33:53.920
is a funny word to use. From a PR perspective. There's no financial gain in this.
link |
01:34:00.720
You know, like make it like from a tourism perspective, make it interesting. Can you
link |
01:34:04.480
describe how you think about this problem? Well, there's a couple things. I broke it down
link |
01:34:07.600
into two parts, actually three parts. One is, you know, there's a lot of things we know. So
link |
01:34:14.960
what if our civilization collapsed? Yeah. I'm not
link |
01:34:19.280
talking tomorrow. Yeah. It could be a thousand years from now. So, you know, we don't
link |
01:34:22.400
really know, but, but historically it would be likely at some point. Time flies when you're
link |
01:34:26.720
having fun. Yeah. That's a good way to put it. You know, what if intelligent life
link |
01:34:33.200
evolved again on this planet? Wouldn't they want to know a lot about us and what we knew? But
link |
01:34:37.680
they wouldn't be able to ask us questions. So one very simple thing I asked was, how would we archive
link |
01:34:42.000
what we know? That was a very simple idea. I said, you know what, that wouldn't be that hard to put
link |
01:34:46.080
a few satellites, you know, going around the sun and we'd upload Wikipedia every day and that kind
link |
01:34:51.200
of thing. So, you know, if we end up killing ourselves, well, it's up there and the next intelligent
link |
01:34:55.600
species will find it and learn something. They would like that. They would appreciate that.
link |
01:34:58.720
Um, so that's one thing. The next thing I asked was, well, what about
link |
01:35:05.360
outside of our solar system? We have the SETI program. We're looking for these intelligent
link |
01:35:09.680
signals from everybody. And if you do a little bit of math, which I did in the book, uh, and
link |
01:35:14.320
you say, well, what if intelligent species only live for 10,000 years? You know,
link |
01:35:18.800
technologically intelligent species, ones that are really able to do the stuff we're just starting
link |
01:35:22.560
to be able to do. Well, the chances are we wouldn't be able to see any of them, because they
link |
01:35:26.800
would have all disappeared by now. They would have lived for their 10,000 years and now
link |
01:35:31.040
they're gone. And so we're not going to find these signals being sent from these civilizations.
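A minimal sketch of this back-of-the-envelope argument, with assumed numbers rather than the ones from the book: if each technological civilization broadcasts for only L years of the galaxy's T-year history, the odds that its "on air" window overlaps ours are roughly L divided by T.

```python
# Back-of-the-envelope overlap estimate; all numbers are illustrative assumptions.
L_years = 10_000              # assumed broadcast lifetime of a technological species
T_years = 10_000_000_000      # rough age of the galaxy
N_civilizations = 1_000_000   # assumed number of such species that ever arose

overlap_fraction = L_years / T_years               # chance a given one is "on air" now
expected_audible_now = N_civilizations * overlap_fraction

print(f"fraction on air at any moment: {overlap_fraction:.0e}")                   # ~1e-06
print(f"expected civilizations we could hear today: {expected_audible_now:.1f}")  # ~1.0
```

Even with a million civilizations having existed, almost none would be transmitting during the window in which we happen to be listening, which is what motivates a signal that outlasts its makers.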
link |
01:35:36.080
But I asked, what kind of signal could you create that would last a million years or a billion years,
link |
01:35:41.120
that someone would say, dammit, someone smart lived there, that we know would be a
link |
01:35:46.080
life-changing event for us to figure out. Well, what we're looking for today in the SETI
link |
01:35:49.760
program isn't that; we're looking for very coded signals, in some sense. Um, and so I asked myself,
link |
01:35:54.560
what would be a different type of signal one could create? Um, I've always thought about
link |
01:35:58.160
this throughout my life. And in the book, I gave one, one possible suggestion, which was, um, uh,
link |
01:36:04.480
we now detect planets going around other, other suns, uh, other stars, uh, excuse me. And we do
link |
01:36:11.040
that by seeing this, the, the slight dimming of the light as the planets move in front of them.
link |
01:36:14.800
That's how, uh, we detect, uh, planets elsewhere in our galaxy. Um, what if we created something
link |
01:36:21.040
like that, that just rotated around the sun and blocked out a little
link |
01:36:26.480
bit of light in a particular pattern that someone said, Hey, that's not a planet. That is a sign
link |
01:36:31.760
that someone was once there. You could have it beat out pi, you know, 3.14,
link |
01:36:36.000
whatever. So it can be seen from a distance, it's broadly broadcast, and it takes no continued activation on our
link |
01:36:44.960
part. This is the key, right? No one has to be there running a computer and supplying it with
link |
01:36:48.320
power. It just goes on, it's continuous. And I argued that part of the SETI program
link |
01:36:55.200
should be looking for signals like that. And to look for signals like that, you ought to figure
link |
01:36:58.880
out how we would create such a signal. Like, what would we create that would be like that,
link |
01:37:03.440
that would persist for millions of years, that would be broadcast broadly, that you could see from
link |
01:37:07.680
a distance, and that unequivocally came from an intelligent species. And so I gave that one
link |
01:37:13.760
example, because that's the only one I know of, actually.
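As a toy illustration of the kind of persistent signal described here (an occulter orbiting the sun that dims its light in a deliberately artificial pattern), the sketch below contrasts a natural transit with a pi-encoded one; the dip depths, counts, and encoding are made-up assumptions, not a proposed design.

```python
# Toy light-curve comparison: a planet produces identical, strictly periodic dips,
# while an artificial occulter could vary the dip depth in an obviously non-natural
# pattern, e.g. following the digits of pi. All values here are illustrative.
import math

def natural_transits(n_epochs: int, depth: float = 0.01) -> list[float]:
    """Relative stellar brightness at each transit: every dip is the same."""
    return [1.0 - depth for _ in range(n_epochs)]

def artificial_transits(n_epochs: int) -> list[float]:
    """Dip depth tracks successive digits of pi (3, 1, 4, 1, 5, ...)."""
    digits = [int(c) for c in str(math.pi).replace(".", "")][:n_epochs]
    return [round(1.0 - 0.002 * d, 4) for d in digits]

print("planetary: ", natural_transits(6))     # [0.99, 0.99, 0.99, 0.99, 0.99, 0.99]
print("artificial:", artificial_transits(6))  # [0.994, 0.998, 0.992, 0.998, 0.99, 0.982]
```

An observer who measured dips whose depths spell out pi would know the occulter was built rather than born, and it keeps "transmitting" with no power source or maintenance.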
link |
01:37:19.760
And then finally: ultimately our solar system will die at some point in time, you know, so how do we go
link |
01:37:26.640
beyond that? And I think, if it's at all possible, we'll have to create intelligent machines
link |
01:37:31.600
that travel throughout the, throughout the solar system or the galaxy. And I don't think that's
link |
01:37:36.880
going to be humans. I don't think it's going to be biological organisms. So these are just things to
link |
01:37:41.040
think about, you know. I don't want to be like the dinosaurs. I
link |
01:37:44.560
don't want it to just end with, okay, that was it, we're done. You know, well, there is a kind of
link |
01:37:48.400
presumption that we're going to live forever, which, uh, I think it is a bit sad to imagine
link |
01:37:55.280
that the message we send as, as you talk about is that we were once here instead of we are here.
link |
01:38:03.680
Well, it could be, we are still here. Uh, but it's more of a, it's more of an insurance policy
link |
01:38:09.520
in case we're not here, you know? Well, I don't know, but there is something I think about,
link |
01:38:16.080
we as humans don't often think about this, but it's like, like whenever I, um,
link |
01:38:23.680
record a video, I've done this a couple of times in my life. I've recorded a video for my future
link |
01:38:28.160
self, just for personal, just for fun. And it's always just fascinating to think about
link |
01:38:34.400
that preserving yourself for future civilizations. For me, it was preserving myself for a future me,
link |
01:38:41.600
but that's a little, that's a little fun example of archival.
link |
01:38:46.160
Well, these podcasts are preserving you and me in a way. Yeah. For the future,
link |
01:38:51.280
hopefully well after we're gone. But you don't often, we're sitting here talking about this.
link |
01:38:56.640
You are not thinking about the fact that you and I are going to die and there'll be like 10 years
link |
01:39:02.800
after somebody watching this and we're still alive. You know, in some sense I do. I'm here
link |
01:39:09.440
because I want to talk about ideas, and these ideas transcend me and they transcend this time
link |
01:39:16.720
on our planet. Um, we're talking here about ideas that could be around a thousand years from now.
link |
01:39:23.520
Or a million years from now. I, when I wrote my book, I had an audience in mind and one of the
link |
01:39:29.360
clearest audiences was... aliens? No. People reading this a hundred years from now? Yes.
link |
01:39:35.200
I said to myself, how do I make this book relevant to someone reading this a hundred years from now?
link |
01:39:39.360
What would they want to know about what we were thinking back then? What would make it
link |
01:39:44.160
still an interesting book? I'm not sure I can achieve that, but that was
link |
01:39:49.360
how I thought about it because these ideas, like especially in the third part of the book, the ones
link |
01:39:53.440
we were just talking about, you know, these crazy, sounds like crazy ideas about, you know,
link |
01:39:56.960
storing our knowledge and, and, you know, merging our brains with computers and, and sending, you
link |
01:40:01.680
know, our machines out into space. It's not going to happen in my lifetime. Um, and they may not
link |
01:40:07.360
happen in the next hundred years. They may not happen for a thousand years. Who knows?
link |
01:40:10.640
Uh, but we have the unique opportunity right now. We, you, me, and other people in the world,
link |
01:40:17.440
right now, um, have the chance to sort of at least propose the agenda
link |
01:40:24.640
um, that might impact the future like that. That's a fascinating way to think, uh, both like
link |
01:40:29.840
writing or creating: trying to create ideas, trying to create things that hold up
link |
01:40:38.400
in time. Yeah. You know, when it comes to understanding how the brain works, we're going to figure that out
link |
01:40:42.240
once. That's it. It's going to be figured out once. And after that, that's the answer. And
link |
01:40:46.720
people will study that thousands of years from now. We still, you know,
link |
01:40:51.600
venerate Newton and, and Einstein and, um, and, you know, because, because ideas are exciting,
link |
01:40:59.040
even well into the future. Well, the interesting thing is like big ideas, even if they're wrong,
link |
01:41:05.520
are still useful. Like, yeah, especially if they're not completely wrong, right? Right.
link |
01:41:12.800
Newton's laws are not wrong; Einstein's are just better. Um, so yeah, I mean,
link |
01:41:19.840
but with Newton and Einstein, we're talking about physics. I wonder if we'll ever
link |
01:41:23.440
achieve that kind of clarity in understanding complex systems, and this particular
link |
01:41:30.880
manifestation of complex systems, which is the human brain. I'm totally optimistic. We can do
link |
01:41:36.160
that. I mean, we're making progress at it. I don't see any reasons why we can't completely. I mean,
link |
01:41:41.440
completely understand in the sense, um, you know, we don't really completely understand what all
link |
01:41:46.080
the molecules in this water bottle are doing, but, you know, we have laws that sort of capture it
link |
01:41:50.080
pretty good. Um, and, uh, so we'll have that kind of understanding. I mean, it's not like you're
link |
01:41:54.960
gonna have to know what every neuron in your brain is doing. Um, but enough to, um, first of all,
link |
01:42:00.880
to build it. And second of all, to do, you know, do what physics does, which is like have, uh,
link |
01:42:06.400
concrete experiments where we can validate this is happening right now. Like it's not,
link |
01:42:12.400
this is not some future thing. Um, you know, I'm very optimistic about it because I know about our,
link |
01:42:17.760
our work and what we're doing. We'll have to prove it to people. Um, but, um,
link |
01:42:24.480
I, I consider myself a rational person and, um, you know, until fairly recently,
link |
01:42:30.640
I wouldn't have said that, but right now I'm, where I'm sitting right now, I'm saying, you know,
link |
01:42:33.840
we, we could, this is going to happen. There's no big obstacles to it. Um, we finally have a
link |
01:42:39.200
framework for understanding what's going on in the cortex and, um, and that's liberating. It's,
link |
01:42:44.960
it's like, Oh, it's happening. So I can't see why we wouldn't be able to understand it. I just can't.
link |
01:42:50.880
Okay. So, I mean, on that topic, let me ask you to play devil's advocate.
link |
01:42:54.560
Is it possible for you to imagine looking a hundred years from now at your book,
link |
01:43:02.320
uh, in which ways might your ideas be wrong? Oh, I worry about this all the time. Um,
link |
01:43:11.840
yeah, it's still useful. Yeah. Yeah.
link |
01:43:15.200
Yeah. I think there's, you know, um, well I can, I can best relate it to like things I'm worried
link |
01:43:24.800
about right now. So we talked about this voting idea, right? It's happening. There's no question.
link |
01:43:29.920
It's happening, but there are enough things I don't know about
link |
01:43:36.480
it that it might be working in ways different from how I'm thinking about it: what's voting,
link |
01:43:41.520
who's voting, you know, where are the representations? I talked about how you have a thousand models
link |
01:43:45.680
of a coffee cup, things like that. That could turn out to be wrong, because maybe there are a
link |
01:43:52.320
thousand models that are sub-models, but not really a single model of the coffee cup.
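As a rough reading of what "a thousand models voting" could mean — a toy sketch with made-up numbers and object names, not the actual algorithm from the book — each column holds its own belief about which object it is sensing, and the population converges by combining those beliefs:

```python
# Toy "voting" among independent column models: each column holds a probability
# distribution over candidate objects based on its own local input, and the
# population converges by summing log-probabilities (i.e. multiplying beliefs).
import math

OBJECTS = ["coffee_cup", "stapler", "phone"]

def normalize(p: dict[str, float]) -> dict[str, float]:
    total = sum(p.values())
    return {k: v / total for k, v in p.items()}

def vote(column_beliefs: list[dict[str, float]]) -> dict[str, float]:
    """Combine per-column beliefs into a consensus distribution."""
    log_scores = {obj: 0.0 for obj in OBJECTS}
    for belief in column_beliefs:
        for obj in OBJECTS:
            log_scores[obj] += math.log(belief[obj] + 1e-9)
    return normalize({obj: math.exp(s) for obj, s in log_scores.items()})

# Three columns, each somewhat uncertain on its own...
columns = [
    {"coffee_cup": 0.5, "stapler": 0.3, "phone": 0.2},
    {"coffee_cup": 0.6, "stapler": 0.2, "phone": 0.2},
    {"coffee_cup": 0.4, "stapler": 0.4, "phone": 0.2},
]
print(vote(columns))  # ...but the consensus strongly favors the coffee cup
```

Whether real cortical columns combine beliefs anything like this is exactly the kind of detail Jeff says could turn out differently.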
link |
01:43:57.120
I mean, these are all sort of on-the-edges things, things that I present as,
link |
01:44:02.000
oh, it's so simple and clean. Well, it's not; it's always going to be more complex.
link |
01:44:05.440
And there are parts of the theory where I don't understand the complexity well. So I think,
link |
01:44:14.640
I think the idea that this brain is a distributed modeling system is not controversial at all. Right.
link |
01:44:19.440
It's not, that's well understood by many people. The question then is,
link |
01:44:22.720
is each cortical column an independent modeling system? Um, I could be wrong about that.
link |
01:44:29.040
Um, I don't think so, but I worry about it. My intuition, without even thinking about why you could
link |
01:44:35.600
be wrong, is the same intuition I have about physics, uh, like string theory:
link |
01:44:42.480
that we as humans have a desire for a clean explanation. And, uh, a hundred years from now,
link |
01:44:50.160
intelligent systems might look back at us and laugh at how we try to get rid of the whole mess
link |
01:44:56.560
by having a simple explanation, when the reality is way messier. And in fact, it's impossible
link |
01:45:03.680
to understand; you can only build it. It's like the idea from complex systems and cellular automata:
link |
01:45:08.960
you can only launch the thing, you cannot understand it. Yeah. I think that, you know,
link |
01:45:13.840
the history of science suggests that's not likely to occur. Um, the history of science suggests that
link |
01:45:20.240
as a theorist and we're theorists, you look for simple explanations, right? Fully knowing
link |
01:45:25.920
that whatever simple explanation you're going to come up with is not going to be completely correct.
link |
01:45:30.640
I mean, it can't be; there's just more complexity. But that's the role theorists
link |
01:45:35.840
play. They sort of give you a framework on which you now can talk about a problem and
link |
01:45:41.600
figure out, okay, now we can start digging into more details. The best frameworks stick around while
link |
01:45:46.480
the details change. You know, again, you know, the classic example is Newton and Einstein, right? You
link |
01:45:53.440
know, um, Newton's theories are still used. They're still valuable. They're still practical. They're
link |
01:46:00.000
not like wrong. It's just, they've been refined. Yeah. But that's in physics. It's not obvious,
link |
01:46:05.120
by the way, it's not obvious for physics either that the universe should be such that it's amenable
link |
01:46:10.400
to these simple theories. But so far, it appears to be, as far as we can tell. Um, yeah. I mean,
link |
01:46:17.920
as far as we can tell. But it's also an open question whether the brain is amenable to
link |
01:46:23.040
such clean theories. That's the, uh, not the brain, but intelligence. Well, I, I, I don't know. I would
link |
01:46:28.960
take intelligence out of it. Just say, you know, um, well, okay. Um, the evidence we have suggests
link |
01:46:37.120
that the human brain is at once extremely messy and complex, but there are some
link |
01:46:42.960
parts that are very regular and structured. That's why we started with the neocortex. It's extremely
link |
01:46:48.240
regular in its structure. Yeah. And unbelievably so. And then I mentioned earlier, the other thing is
link |
01:46:53.440
its universal abilities. It is so flexible, it can learn so many things. We haven't
link |
01:47:00.560
figured out what it can't learn yet. We don't know, but we haven't figured it out yet, but it
link |
01:47:03.440
can learn things that it never evolved to learn. So those give us hope. Um, that's why I
link |
01:47:09.040
went into this field because I said, you know, this regular structure, it's doing this amazing
link |
01:47:14.880
number of things. There's gotta be some underlying principles that are, that are common and other,
link |
01:47:19.680
other scientists have come up with the same conclusions. Um, and so it's promising and,
link |
01:47:25.600
um, and that's, and whether the theories play out exactly this way or not, that is the role that
link |
01:47:32.400
theorists play. And so far it's worked out well, even though, you know, maybe, you know, we don't
link |
01:47:38.080
understand all the laws of physics, but so far it's been pretty damn useful. The ones we have
link |
01:47:42.000
are pretty useful; our theories are pretty useful. You mentioned that, uh, we should not necessarily be,
link |
01:47:49.840
at least to the degree that we are worried about the existential risks of artificial intelligence
link |
01:47:55.200
relative to, uh, the existential risks that come from human nature itself.
link |
01:48:02.720
What aspect of human nature worries you the most in terms of the survival of the human species?
link |
01:48:07.600
I mean, I'm disappointed in humanity, humans. I mean, all of us, I'm one. So I'm disappointed
link |
01:48:15.440
in myself too. Um, it's kind of a sad state. There are two things that disappoint me. One is
link |
01:48:24.880
how it's difficult for us to separate our rational component of ourselves from our evolutionary
link |
01:48:30.640
heritage, which is, you know, not always pretty, you know, um, uh, rape is a, is an evolutionary
link |
01:48:38.800
good strategy for reproduction. Murder can be at times too, you know, making other people miserable
link |
01:48:45.760
at times is a good strategy for reproduction. And so now that
link |
01:48:50.640
we know that, and yet we have this sort of, you know, we, you and I can have this very rational
link |
01:48:54.640
discussion talking about, you know, intelligence and brains and life and so on. But it seems
link |
01:48:59.680
like it's so hard. It's just a big, big transition to get all humans to make the
link |
01:49:05.520
transition to being like, let's pay no attention to all that ugly stuff over here. Let's just focus
link |
01:49:11.360
on the interesting part. What's unique about humanity is our knowledge and our intellect. But the fact
link |
01:49:16.720
that we're striving is in itself amazing, right? The fact that we're able to overcome that part.
link |
01:49:22.480
And it seems like we are more and more becoming successful at overcoming that part. That is the
link |
01:49:28.720
optimistic view. And I agree with you, but I worry about it. I'm not saying I'm worrying about it. I
link |
01:49:33.760
think that was your question. I still worry about it. Yes. You know, we could be done tomorrow because
link |
01:49:38.320
some terrorists could get nuclear bombs and, you know, blow us all up. Who knows? Right. The other
link |
01:49:43.200
thing I'm disappointed about is, and it's just, I understand it, so I guess you can't really
link |
01:49:47.760
be disappointed, it's just a fact: we're so prone to false beliefs. We, you know, we have
link |
01:49:53.120
a model in our head, the things we can interact with directly, physical objects, people, that
link |
01:50:00.080
model is pretty good. And we can test it all the time, right? I touch something, I look at it,
link |
01:50:04.800
talk to you, see if my model is correct. But so much of what we know is stuff I can't directly
link |
01:50:09.760
interact with. I only know because someone told me about it. And so we're prone, inherently prone
link |
01:50:16.560
to having false beliefs because if I'm told something, how am I going to know it's right
link |
01:50:20.560
or wrong? Right. And so then we have the scientific process, which says we are inherently flawed.
link |
01:50:26.800
So the only way we can get closer to the truth is by looking for contrary evidence.
link |
01:50:34.800
Yeah. Like this conspiracy theory, this theory that scientists keep telling me about that the
link |
01:50:41.600
earth is round. As far as I can tell, when I look out, it looks pretty flat.
link |
01:50:46.960
Yeah. So, yeah, there is a tension, but it's also, I tend to believe that we haven't figured
link |
01:50:55.440
out most of this thing, right? Most of nature around us is a mystery. And so it...
link |
01:51:02.240
But that doesn't, does that worry you? I mean, it's like, oh, that's like a pleasure,
link |
01:51:06.080
more to figure out, right? Yeah. That's exciting. But I'm saying like
link |
01:51:09.760
there's going to be a lot of quote unquote, wrong ideas. I mean, I've been thinking a lot about
link |
01:51:16.320
engineering systems like social networks and so on. And I've been worried about censorship
link |
01:51:21.760
and thinking through all that kind of stuff, because there's a lot of wrong ideas. There's a
link |
01:51:25.520
lot of dangerous ideas, but then I also read a history, read history and see when you censor
link |
01:51:33.360
ideas that are wrong. Now this could be a small scale censorship, like a young grad student who
link |
01:51:39.760
comes up, who like raises their hand and says some crazy idea. A form of censorship could be,
link |
01:51:46.320
I shouldn't use the word censorship, but, like, disincentivizing them: no, no, no,
link |
01:51:52.000
this is the way it's been done. Yeah. Yeah. You're a foolish kid. Don't
link |
01:51:54.800
think that's it. Yeah. You're foolish. So in some sense,
link |
01:51:59.760
those wrong ideas most of the time end up being wrong, but sometimes end up being right.
link |
01:52:05.520
I agree with you. So I don't like the word censorship. Um, at the very end of the book, I,
link |
01:52:11.280
I ended up with a sort of a, um, a plea or a recommended course of action. Um, the best way I
link |
01:52:20.000
know how to deal with this issue that you bring up is if everybody understood, as part of
link |
01:52:26.240
your upbringing in life, something about how your brain works, that it builds a model of the world,
link |
01:52:31.120
uh, how it works, you know, how basically it builds that model of the world and that the model
link |
01:52:34.960
is not the real world. It's just a model and it's never going to reflect the entire world. And it
link |
01:52:39.760
can be wrong and it's easy to be wrong. And here's all the ways you can get a wrong model in your
link |
01:52:44.320
head. Right? It's not prescribed what's right or wrong. Just understand that process. If we all
link |
01:52:50.960
understood the processes and I got together and you say, I disagree with you, Jeff. And I said,
link |
01:52:54.720
Lex, I disagree with you that at least we understand that we're both trying to model
link |
01:52:59.680
something. We both have different information, which leads to our different models. And therefore
link |
01:53:03.760
I shouldn't hold it against you and you shouldn't hold it against me. And we can at least agree that,
link |
01:53:07.760
well, what can we look for that's common ground to test our beliefs? As opposed to how much,
link |
01:53:13.600
uh, we raise our kids on dogma, which is: this is a fact, this is a fact, and these people are
link |
01:53:20.080
bad. Whereas, you know, if everyone knew just to be skeptical of every
link |
01:53:31.120
belief and why, and how their brains do that, I think we might have a better world.
link |
01:53:36.560
Do you think the human mind is able to comprehend reality? So you talk about this creating models
link |
01:53:45.600
how close do you think we get to, uh, to reality? Some of the wildest ideas, like Donald
link |
01:53:51.440
Hoffman's, say we're very far away from reality. Do you think we're getting close to reality?
link |
01:53:56.560
Well, it depends on how you define reality. Uh, we have a model of the world
link |
01:54:02.000
that's very useful, right? For, for basic goals. Well, for our survival and our pleasure right
link |
01:54:10.000
now. Right. Um, so that's useful. Um, I mean, it's really useful. Oh, we can build planes. We can build computers. We can do these things. Right.
link |
01:54:17.200
Uh, I don't think, I don't know the answer to that question. Um, I think that's part of the
link |
01:54:24.080
question we're trying to figure out, right? Like, you know, obviously if you end up with a theory of
link |
01:54:27.920
everything that really is a theory of everything and all of a sudden everything comes into play
link |
01:54:32.960
and there's no room for something else, then you might feel like we have a good model of the world.
link |
01:54:37.120
Yeah. But if we have a theory of everything and somehow, first of all, you'll never be able to
link |
01:54:41.440
really conclusively say it's a theory of everything, but say somehow we are very damn sure it's a theory
link |
01:54:46.480
of everything. We understand what happened at the big bang and how just the entirety of the
link |
01:54:51.680
physical process. I'm still not sure that gives us an understanding of, uh, the next
link |
01:54:58.240
many layers of the hierarchy of abstractions that form. Well, also what if string theory
link |
01:55:03.600
turns out to be true? And then you say, well, we have no reality, no modeling what's going on in
link |
01:55:09.360
those other dimensions that are wrapped into it on each other. Right. Or, or the multiverse,
link |
01:55:14.880
you know, I honestly don't know how for us, for human interaction, for ideas of intelligence,
link |
01:55:21.600
how it helps us to understand that we're made up of vibrating strings that are
link |
01:55:26.800
like 10 to the whatever times smaller than us. I don't, you know, you could probably build better
link |
01:55:33.040
weapons, better rockets, but you're not going to be able to understand intelligence. I guess,
link |
01:55:37.200
I guess maybe better computers. No, you won't. I think it's just purely more knowledge.
link |
01:55:41.680
It might lead to a better understanding of the beginning of the universe,
link |
01:55:46.240
right? It might lead to a better understanding of, uh, I don't know. I guess I think the acquisition
link |
01:55:52.720
of knowledge has always been one where you, you pursue it for its own pleasure. Um, and you don't
link |
01:56:01.200
always know what is going to make a difference. Yeah. Uh, you're pleasantly surprised by the,
link |
01:56:06.400
the weird things you find. Do you think, uh, for the, for the neocortex in general, do you,
link |
01:56:11.760
do you think there's a lot of innovation to be done on the machine side? You know,
link |
01:56:16.960
you use the computer as a metaphor quite a bit. Are there different types of computers that would
link |
01:56:21.600
help us build intelligence, manifestations of intelligent machines? Yeah. Or is it, oh no,
link |
01:56:26.880
it's going to be totally crazy, we have no idea how this is going to turn out yet?
link |
01:56:32.720
You can already see this. Um, today we've, of course, we model these things on traditional
link |
01:56:37.760
computers and now, now GPUs are really popular with, with, uh, you know, neural networks and so
link |
01:56:43.840
on. Um, but there are companies coming up with fundamentally new physical substrates, um, that
link |
01:56:50.640
are just really cool. I don't know if they're going to work or not. Um, but I think there'll
link |
01:56:55.840
be decades of innovation here. Yeah. Totally. Do you think the final thing will be messy,
link |
01:57:01.360
like our biology is messy? Or do you think, uh, it's, it's the, it's the old bird versus
link |
01:57:07.360
airplane question, or do you think we could just, um, build airplanes that, that fly way better
link |
01:57:16.320
than birds, in the same way we could build, uh, an electrical neocortex? Yeah. You know,
link |
01:57:23.280
can I, can I, can I riff on the bird thing a bit? Because I think that's interesting.
link |
01:57:27.040
People really misunderstand this. The Wright brothers, um, the problem they were trying to
link |
01:57:33.120
solve was controlled flight, how to turn an airplane, not how to propel an airplane.
link |
01:57:38.320
They weren't worried about that. Interesting. Yeah. They already had, at that time,
link |
01:57:41.600
there were already wing shapes, which they had from studying birds. There were already gliders
link |
01:57:45.520
that could carry people. The problem was, if you put a rudder on the back of a glider and you turn it,
link |
01:57:49.440
the plane falls out of the sky. So the problem was how do you control flight? And they studied
link |
01:57:55.680
birds and they actually had birds in captivity. They watched birds in wind tunnels. They observed
link |
01:58:00.240
them in the wild and they discovered the secret was the birds twist their wings when they turn.
link |
01:58:05.200
And so that's what they did on the Wright brothers flyer. They had these sticks that
link |
01:58:07.840
you would use to twist the wing. And that was their innovation, not the propeller.
link |
01:58:12.320
And today airplanes still twist their wings. We don't twist the entire wing. We just twist
link |
01:58:16.720
the tail end of it, the flaps, which is the same thing. So today's airplanes fly on the
link |
01:58:22.000
same principles as birds, which the Wright brothers observed. So everyone gets that analogy wrong, but let's
link |
01:58:26.960
step back from that. Once you understand the principles of flight, you can choose
link |
01:58:32.640
how to implement them. No one's going to use bones and feathers and muscles, but they do have wings
link |
01:58:39.120
and we don't flap them. We have propellers. So when we have the principles of computation that
link |
01:58:45.040
go into modeling the world in a brain, and we understand those principles very clearly,
link |
01:58:50.160
we have choices on how to implement them. And some of them will be biological-like and some won't.
link |
01:58:54.400
And, but I do think there's going to be a huge amount of innovation here.
link |
01:58:59.600
Just think about the innovation with the computer: they had to invent the transistor,
link |
01:59:03.920
they invented the silicon chip, they had to invent, you know, the software. I mean,
link |
01:59:09.200
there are millions of things they had to do, memory systems. What we're going to do, it's going to be
link |
01:59:13.360
similar. Well, it's interesting that the deep learning, the effectiveness of deep learning for
link |
01:59:19.760
specific tasks is driving a lot of innovation in the hardware, which may have effects for actually
link |
01:59:27.120
allowing us to discover intelligent systems that operate very differently, or at least much
link |
01:59:31.760
bigger than deep learning. Yeah. Interesting. So ultimately it's good to have an application
link |
01:59:37.040
that's making our life better now, because of the capitalist process, if you can make money.
link |
01:59:42.960
Yeah. Yeah. That works. I mean, the other way, and Neil deGrasse Tyson writes about this,
link |
01:59:48.080
the other way we fund science, of course, is through the military. So like, yeah. Conquests.
link |
01:59:53.360
So here's an interesting thing we're doing in this regard. We have
link |
01:59:57.920
a series of these biological principles and we can see how to build these intelligent machines,
link |
02:00:01.920
but we've decided to apply some of these principles to today's machine learning techniques.
link |
02:00:07.280
So one principle we didn't talk about is sparsity in the brain:
link |
02:00:11.600
only a small fraction of the neurons are active at any point in time; the activity is sparse and the connectivity is sparse,
link |
02:00:15.440
and that's different than deep learning networks. Um, so we've already shown that we can speed up
link |
02:00:20.800
existing deep learning networks anywhere from a factor of 10 to a factor of a hundred. I mean,
link |
02:00:26.400
literally a hundred, um, and make them more robust at the same time. So this is commercially very,
link |
02:00:31.760
very valuable. Um, and so, you know, if we can prove this actually in the largest systems that
link |
02:00:38.960
are commercially applied today, there's a big commercial desire to do this. Well,
link |
02:00:44.240
sparsity is something that doesn't run really well on existing hardware. It doesn't really run
link |
02:00:50.640
really well on GPUs and on CPUs. And so that would be a way of sort of bringing more
link |
02:00:59.520
brain principles into existing systems on a commercially valuable basis.
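A minimal sketch of the sparse-activation idea in PyTorch-style Python — a k-winners-take-all layer that keeps only a small fraction of units active. It is inspired by the sparsity principle described here, with assumed sizes and sparsity level; it is not the actual implementation being discussed, and by itself it does not deliver the 10-100x speedups, which also depend on hardware that can exploit the sparsity.

```python
# Minimal k-winners-take-all activation: only the top k units in each layer stay
# active, the rest are zeroed, giving brain-like sparse activity.
# The layer sizes and the 5% sparsity level are illustrative assumptions.
import torch
import torch.nn as nn

class KWinners(nn.Module):
    def __init__(self, num_units: int, sparsity: float = 0.05):
        super().__init__()
        self.k = max(1, int(num_units * sparsity))  # number of units allowed to fire

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Keep the k largest activations per sample, zero everything else.
        topk = torch.topk(x, self.k, dim=-1)
        mask = torch.zeros_like(x).scatter_(-1, topk.indices, 1.0)
        return x * mask

layer = nn.Sequential(nn.Linear(128, 256), KWinners(256, sparsity=0.05))
out = layer(torch.randn(4, 128))
print((out != 0).float().mean().item())  # roughly 0.05: ~5% of units are active
```

The sketch only shows what "sparse activity" means at the level of a single layer; the claimed speed and robustness gains come from building on that property throughout the network and the hardware.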
link |
02:01:03.920
Another thing we think we can do is use these dendrite
link |
02:01:06.960
models. I talked earlier about the prediction occurring inside a neuron;
link |
02:01:13.200
that basic property can be applied to existing neural networks and allow them to
link |
02:01:18.000
learn continuously, which is something they don't do today. And so the dendritic spikes that you
link |
02:01:22.960
were talking about. Yeah. Well, we wouldn't model the spikes, but the idea that you have
link |
02:01:26.960
dendrites. Today's neural networks use what's called a point neuron, which is a very simple
link |
02:01:30.640
model of a neuron. And, uh, by adding dendrites to them, just one more level of complexity
link |
02:01:36.000
that's in biological systems, you can solve problems in continual learning and rapid learning.
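A rough sketch of the "point neuron plus dendrites" idea: each output unit owns a few dendrite-like segments that match against a context signal, and the best-matching segment gates the unit, so different tasks or contexts recruit different units. This is loosely inspired by the dendrites idea Jeff describes, but the specific gating below is an illustrative assumption, not the actual implementation.

```python
# Rough sketch of "point neurons plus dendrites": each output unit owns several
# dendritic segments that respond to a context vector; the strongest segment
# modulates the unit, so different contexts activate different subsets of units.
# Simplified illustration only, with made-up sizes.
import torch
import torch.nn as nn

class DendriticLayer(nn.Module):
    def __init__(self, in_dim: int, out_dim: int, context_dim: int, num_segments: int = 4):
        super().__init__()
        self.linear = nn.Linear(in_dim, out_dim)  # the usual point-neuron weights
        # One set of dendritic segment weights per output unit.
        self.segments = nn.Parameter(torch.randn(out_dim, num_segments, context_dim) * 0.1)

    def forward(self, x: torch.Tensor, context: torch.Tensor) -> torch.Tensor:
        feedforward = self.linear(x)                             # (batch, out_dim)
        # Each segment's response to the context: (batch, out_dim, num_segments)
        seg_response = torch.einsum("bc,osc->bos", context, self.segments)
        best, _ = seg_response.max(dim=-1)                       # strongest segment per unit
        gate = torch.sigmoid(best)                               # modulates, never fully silences
        return feedforward * gate

layer = DendriticLayer(in_dim=32, out_dim=64, context_dim=10)
x = torch.randn(8, 32)
task_context = torch.nn.functional.one_hot(torch.full((8,), 3), num_classes=10).float()
print(layer(x, task_context).shape)  # torch.Size([8, 64])
```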
link |
02:01:41.520
So we're trying to bring the existing field of machine learning
link |
02:01:47.760
commercially along with us, and we'll see if we can do it.
link |
02:01:51.360
You brought up this idea of, you know,
link |
02:01:55.040
paying for it commercially along with us as we move towards the ultimate goal of a true AI system.
link |
02:02:00.320
Even small innovations on neural networks are really, really exciting.
link |
02:02:04.000
Yeah.
link |
02:02:04.480
It seems like such a trivial model of the brain, and applying different insights,
link |
02:02:11.920
even just, like you said, continual learning, or making it more asynchronous,
link |
02:02:19.360
or making it more dynamic, or incentivizing robustness, and making it
link |
02:02:28.720
somehow much better by incentivizing sparsity. Yeah. Well, if you can make things a
link |
02:02:35.840
hundred times faster, then there's plenty of incentive. That's true. People, people are
link |
02:02:40.480
spending millions of dollars, you know, just training some of these networks. Now these, uh,
link |
02:02:44.400
these transformer networks. Let me ask you the big question for young people listening to this
link |
02:02:51.520
today in high school and college, what advice would you give them in terms of, uh, which career
link |
02:02:57.280
path to take and, um, maybe just about life in general? Well, in my case, um, I didn't start
link |
02:03:06.720
life with any kind of goals. I was, when I was going to college, it's like, Oh, what do I study?
link |
02:03:11.040
Well, maybe I'll do this electrical engineering stuff, you know? Um, it wasn't like, you know,
link |
02:03:15.840
today you see some of these young kids are so motivated, like I'm changing the world. I was
link |
02:03:18.720
like, you know, whatever. And, um, but then I did fall in love with something besides my wife,
link |
02:03:25.920
but I fell in love with this, like, Oh my God, it would be so cool to understand how the brain works.
link |
02:03:30.800
And then I, I said to myself, that's the most important thing I could work on. I can't imagine
link |
02:03:34.800
anything more important, because if we understand how the brain works, we can build intelligent machines
link |
02:03:38.240
and they could figure out all the other big questions of the world. Right. So, and then I
link |
02:03:42.240
said, but I want to understand how I work. So I fell in love with this idea and I became passionate
link |
02:03:46.320
about it. And this is a trope. People say this, but it was, it's true because I was passionate
link |
02:03:54.160
about it, I was able to put up with so much crap. You know, I was in a position where
link |
02:04:01.040
people said, you can't do this. I was a graduate student at Berkeley
link |
02:04:05.200
when they said, you can't study this problem, you know, no one can solve this, or you can't get
link |
02:04:09.040
funded for it. You know, then I went to do mobile computing and it was like, people said,
link |
02:04:13.120
you can't do that. You can't build a cell phone, you know? So, but all along I kept being motivated
link |
02:04:18.880
because I wanted to work on this problem. I said, I want to understand how the brain works. And I told
link |
02:04:22.720
myself, you know, I got one lifetime. I'm going to figure it out, do the best I can. So by having
link |
02:04:28.160
that, cause you know, it's really, as you pointed out, Lex, it's really hard to do these things.
link |
02:04:33.440
There are just so many downers along the way, so many obstacles to get in your
link |
02:04:38.320
way. Yeah. I'm sitting here happy all the time, but trust me, it's not always like that.
link |
02:04:42.000
Well, that's, I guess the happiness, the passion is a prerequisite for surviving the whole thing.
link |
02:04:47.520
Yeah, I think so. I think that's right. And so I don't want to say to someone, you know,
link |
02:04:53.120
you need to find a passion and do it. No, maybe you don't. But if you do find something you're
link |
02:04:57.920
passionate about, then you can follow it as far as your passion will let you put up with it.
link |
02:05:04.000
Do you remember how you found it? How the spark happened?
link |
02:05:09.200
Why specifically for me?
link |
02:05:10.800
Yeah. Cause you said it's such an interesting, so like almost like later in life, by later,
link |
02:05:15.200
I mean like not when you were five, you didn't really know. And then all of a sudden you fell
link |
02:05:21.120
in love with that idea. Yeah, yeah. There was two separate events that compounded one another.
link |
02:05:25.600
One, when I was probably a teenager, it might've been 17 or 18, I made a list of the most
link |
02:05:31.520
interesting problems I could think of. First was why does the universe exist? It seems like
link |
02:05:36.960
not existing is more likely. The second one was, well, given it exists, why does it behave the way
link |
02:05:41.120
it does? The laws of physics: why is it E equals MC squared, not MC cubed? That's an interesting
link |
02:05:45.680
question. The third one was like, what's the origin of life? And the fourth one was, what's
link |
02:05:51.680
intelligence? And I stopped there. I said, well, that's probably the most interesting one. And I
link |
02:05:56.240
put that aside as a teenager. But then, when I was 22, it was
link |
02:06:05.680
1979, and I was reading the September
link |
02:06:13.520
issue of Scientific American, which is all about the brain. And then the final essay was by Francis
link |
02:06:19.440
Crick, of DNA fame, who had by then turned his interest to studying the brain. And he said,
link |
02:06:25.920
you know, there's something wrong here. He says, we've got all this data, and this is 1979,
link |
02:06:33.600
all these facts about the brain, tons and tons of facts about the brain. Do we need more facts? Or do
link |
02:06:39.360
we just need to think about a way of rearranging the facts we have? Maybe we're just not thinking
link |
02:06:42.800
about the problem correctly. Cause he says, this shouldn't be like this. So I read that and I said,
link |
02:06:51.440
wow. I said, I don't have to become like an experimental neuroscientist. I could just
link |
02:06:57.360
take, look at all those facts and try and become a theoretician and try to figure it out. And I said
link |
02:07:04.320
that I felt like it was something I would be good at. I said, I wouldn't be a good experimentalist.
link |
02:07:08.640
I don't have the patience for it, but I'm a good thinker and I love puzzles. And this is like the
link |
02:07:14.320
biggest puzzle in the world. It's the biggest puzzle of all time. And I got all the puzzle
link |
02:07:18.240
pieces in front of me. Damn, that was exciting. And there's something obviously you can't
link |
02:07:23.360
convert into words that just kind of sparked this passion. And I have that a few times in my life,
link |
02:07:29.440
just something just like you, it grabs you. Yeah. I felt it was something that was both
link |
02:07:37.680
important and that I could make a contribution to. And so all of a sudden it felt like,
link |
02:07:41.680
oh, it gave me purpose in life. I honestly don't think it has to be as big as one of those four
link |
02:07:46.960
questions. I think you can find those things in the smallest of things. Oh, absolutely. David Foster Wallace
link |
02:07:54.160
said like the key to life is to be unboreable. I think it's very possible to find that intensity
link |
02:08:01.040
of joy in the smallest thing. Absolutely. I'm just, you asked me my story. Yeah. No, but I'm
link |
02:08:06.000
actually speaking to the audience. It doesn't have to be those four. You happen to get excited by one
link |
02:08:10.800
of the bigger questions of in the universe, but even the smallest things and watching the Olympics
link |
02:08:18.320
now, just giving yourself life, giving your life over to the study and the mastery of a particular
link |
02:08:25.920
sport is fascinating. And if it sparks joy and passion, you're able to, in the case of the
link |
02:08:32.720
Olympics, basically suffer for like a couple of decades to achieve. I mean, you can find joy and
link |
02:08:37.520
passion just being a parent. I mean, yeah, the parenting one is funny. So I was, not always,
link |
02:08:43.600
but for a long time, wanted kids and get married and stuff. And especially that has to do with the
link |
02:08:48.720
fact that I've seen a lot of people that I respect get a whole nother level of joy from kids. And
link |
02:08:58.880
at first, your thinking is, well, I don't have enough time in the day, right? If I
link |
02:09:05.920
have this passion, like, if I want to solve intelligence, how's this kid situation
link |
02:09:13.200
going to help me? But then you realize that, you know, like you said, the things that sparks joy,
link |
02:09:22.000
and it's very possible that kids can provide even a greater or deeper, more meaningful joy than
link |
02:09:28.640
those bigger questions when they enrich each other. And that seemed like, obviously when I
link |
02:09:34.160
was younger, it's probably a counterintuitive notion because there's only so many hours in the
link |
02:09:37.920
day, but then life is finite and you have to pick the things that give you joy.
link |
02:09:44.160
Yeah. But you also understand you can be patient too. I mean, it's finite, but we do have, you know,
link |
02:09:50.800
whatever, 50 years or something. So in my case, I had to give up on my dream of neuroscience
link |
02:09:58.480
because I was a graduate student at Berkeley and they told me I couldn't do this and I couldn't
link |
02:10:02.240
get funded. And so I went back into the computing industry for a number of years. I thought it
link |
02:10:09.440
would be four, but it turned out to be more. But I said, I'll come back. I'm definitely going to
link |
02:10:14.880
come back. I know I'm going to do this computer stuff for a while, but I'm definitely coming back.
link |
02:10:17.920
Everyone knows that. And it's like raising kids. Well, yeah, you have to spend a lot of time with
link |
02:10:22.800
your kids. It's fun, enjoyable. But that doesn't mean you have to give up on other dreams. It just
link |
02:10:28.240
means that you may have to wait a week or two to work on that next idea. Well, you talk about the
link |
02:10:36.800
darker, disappointing sides of human nature that we're hoping to overcome so that we
link |
02:10:42.240
don't destroy ourselves. I tend to put a lot of value in the broad general concept of love,
link |
02:10:48.640
of the human capacity of compassion towards each other, of just kindness, whatever that longing of
link |
02:10:58.960
like just the human to human connection. It connects back to our initial discussion. I tend to
link |
02:11:05.120
see a lot of value in this collective intelligence aspect. I think some of the magic of human
link |
02:11:09.360
civilization happens when we're together; a party is not as fun when you're alone. I totally agree with
link |
02:11:16.080
you on these issues. Do you think from a neocortex perspective, what role does love play in the human
link |
02:11:24.080
condition? Well, those are two separate things from a neocortex point of view. It doesn't impact
link |
02:11:29.600
our thinking about the neocortex. From a human condition point of view, I think it's core.
link |
02:11:34.400
I mean, we get so much pleasure out of loving people and helping people. I'll rack it up to
link |
02:11:44.720
old brain stuff and maybe we can throw it under the bus of evolution if you want. That's fine.
link |
02:11:52.800
It doesn't impact how I think about how we model the world, but from a humanity point of view,
link |
02:11:57.840
I think it's essential. Well, I tend to give it to the new brain and also I tend to give it to
link |
02:12:03.680
the old brain. Also, I tend to think that some aspects of that need to be engineered into AI
link |
02:12:09.120
systems, both in their ability to have compassion for other humans and their ability to maximize
link |
02:12:21.440
love in the world between humans. I'm thinking more about social networks, or whenever there's a deep
link |
02:12:27.760
integration of AI systems and humans, specific applications where it's AI and humans. I think that's something that's
link |
02:12:35.120
often not talked about in terms of the metrics you try to maximize,
link |
02:12:44.480
like which metric to maximize in a system. It seems like one of the most
link |
02:12:48.960
powerful things in societies is the capacity to love.
link |
02:12:55.120
It's fascinating. I think it's a great way of thinking about it. I have been thinking more of
link |
02:13:01.120
these fundamental mechanisms in the brain as opposed to the social interaction between humans
link |
02:13:06.640
and AI systems in the future. If you think about that, you're absolutely right. That's a complex
link |
02:13:13.680
system. I can have intelligent systems that don't have that component, but they're not interacting
link |
02:13:17.360
with people. They're just running something or building some place or something. I don't know.
link |
02:13:21.600
But if you think about interacting with humans, yeah, but it has to be engineered in there. I
link |
02:13:26.640
don't think it's going to appear on its own. That's a good question.
link |
02:13:30.560
Yeah. Well, we could, we'll leave that open. In terms of, from a reinforcement learning
link |
02:13:38.000
perspective, whether the darker sides of human nature or the better angels of our nature win out,
link |
02:13:46.880
statistically speaking, I don't know. I tend to be optimistic and hope that love wins out in the end.
link |
02:13:52.960
You've done a lot of incredible stuff and your book is driving towards this fourth question that
link |
02:14:01.520
you started with on the nature of intelligence. What do you hope your legacy for people reading
link |
02:14:08.880
a hundred years from now? How do you hope they remember your work? How do you hope they remember
link |
02:14:14.560
this book? Well, I think as an entrepreneur or a scientist or any human who's trying to accomplish
link |
02:14:21.920
some things, I have a view that really all you can do is accelerate the inevitable. Yeah. It's like,
link |
02:14:30.960
you know, if we didn't figure out, if we didn't study the brain, someone else will study the
link |
02:14:33.920
brain. If, you know, if Elon didn't make electric cars, someone else would do it eventually.
link |
02:14:38.080
And if, you know, if Thomas Edison didn't invent a light bulb, we wouldn't be using candles today.
link |
02:14:42.400
So, what you can do as an individual is you can accelerate something that's beneficial
link |
02:14:48.880
and make it happen sooner than it would have. That's really it. That's all you can do.
link |
02:14:53.680
You can't create a new reality that wasn't going to happen anyway. So, from that perspective,
link |
02:15:01.280
I would hope that our work, not just me, but our work in general, people would look back and said,
link |
02:15:07.440
hey, they really helped make this better future happen sooner. They, you know, they helped us
link |
02:15:14.160
understand the nature of false beliefs sooner than they might have. Now we're so happy that
link |
02:15:18.640
we have these intelligent machines doing these things, helping us, and maybe they solved the
link |
02:15:22.560
climate change problem and they made it happen sooner. So, I think that's the best I would hope
link |
02:15:28.320
for. Some would say those guys just moved the needle forward a little bit in time.
link |
02:15:33.280
Well, I do. It feels like the progress of human civilization has a lot
link |
02:15:40.000
of possible trajectories. And if you have individuals that accelerate towards one direction, that helps steer
link |
02:15:48.480
human civilization. So, I think over a long stretch of time, all trajectories will be traveled.
link |
02:15:55.200
But I think it's nice for this particular civilization on earth to travel down one that's
link |
02:15:59.840
not. Well, I think you're right. We have to take the whole period of, you know, World War II,
link |
02:16:03.440
Nazism or something like that. Well, that was a bad sidestep, right? We've been over there for a
link |
02:16:07.520
while. But, you know, there is the optimistic view about life that ultimately it does converge
link |
02:16:13.680
in a positive way. It progresses ultimately, even if we have years of darkness. So, yeah. So,
link |
02:16:21.920
I think perhaps accelerating the positive could also mean eliminating some bad
link |
02:16:27.200
missteps along the way, too. But I'm an optimist in that way. Even though we talked about the end of
link |
02:16:34.560
civilization, you know, I think we're going to live for a long time. I hope we are. I think our
link |
02:16:40.080
society in the future is going to be better. We're going to have less discord. We're going to have
link |
02:16:42.640
fewer people killing each other. You know, we'll manage to live in some sort of way that's compatible
link |
02:16:47.600
with the carrying capacity of the earth. I'm optimistic these things will happen. And all we
link |
02:16:53.520
can do is try to get there sooner. And at the very least, if we do destroy ourselves,
link |
02:16:57.840
we'll have a few satellites orbiting that will tell alien civilization that we were once here.
link |
02:17:05.680
Or maybe our future, you know, future inhabitants of earth. You know, imagine we,
link |
02:17:10.560
you know, the Planet of the Apes scenario here. You know, we kill ourselves, you know,
link |
02:17:13.600
a million years from now or a billion years from now. There's another species on the planet.
link |
02:17:16.480
Curious creatures were once here. Jeff, thank you so much for your work. And thank you so much for
link |
02:17:23.200
talking to me once again. Well, actually, it's great. I love what you do. I love your podcast.
link |
02:17:27.040
You have the most interesting people, me aside. So it's a real service, I think you do for,
link |
02:17:35.280
in a broader sense, for humanity, I think. Thanks, Jeff. All right. It's a pleasure.
link |
02:17:40.000
Thanks for listening to this conversation with Jeff Hawkins. And thank you to
link |
02:17:43.360
Codecademy, BioOptimizers, ExpressVPN, Asleep, and Blinkist. Check them out in the description
link |
02:17:50.960
to support this podcast. And now, let me leave you with some words from Albert Camus.
link |
02:17:57.600
An intellectual is someone whose mind watches itself. I like this, because I'm happy to be
link |
02:18:04.240
both halves, the watcher and the watched. Can they be brought together? This is the
link |
02:18:10.880
practical question we must try to answer. Thank you for listening. I hope to see you next time.