
Matt Botvinick: Neuroscience, Psychology, and AI at DeepMind | Lex Fridman Podcast #106



link |
00:00:00.000
The following is a conversation with Matt Botvinick, Director of Neuroscience Research at DeepMind.
link |
00:00:06.800
He's a brilliant cross-disciplinary mind navigating effortlessly between cognitive psychology,
link |
00:00:12.400
computational neuroscience, and artificial intelligence.
link |
00:00:16.640
Quick summary of the ads. Two sponsors. The Jordan Harbinger Show and Magic Spoon cereal.
link |
00:00:23.760
Please consider supporting the podcast by going to jordanharbinger.com slash lex and
link |
00:00:30.000
also going to magicspoon.com slash lex and using code lex at checkout after you buy all of their
link |
00:00:37.680
cereal. Click the links, buy the stuff. It's the best way to support this podcast and the journey I'm on.
link |
00:00:44.480
If you enjoy this podcast, subscribe on YouTube, review it with five stars on Apple Podcast,
link |
00:00:49.760
follow on Spotify, support on Patreon, or connect with me on Twitter at Lex Fridman, spelled
link |
00:00:56.640
surprisingly without the E just F R I D M A N. As usual, I'll do a few minutes of ads now and
link |
00:01:03.840
never any ads in the middle that can break the flow of the conversation. This episode is supported
link |
00:01:09.120
by the Jordan Harbinger Show. Go to jordanharbinger.com slash lex. It's how he knows I sent you.
link |
00:01:16.720
On that page, subscribe to his podcast on Apple Podcasts, Spotify, and you know where to look.
link |
00:01:24.160
I've been binging on his podcast. Jordan is a great interviewer and even a better human being.
link |
00:01:30.080
I recently listened to his conversation with Jack Barsky, former sleeper agent for the KGB in the
link |
00:01:35.120
80s and author of Deep Undercover, which is a memoir that paints yet another interesting
link |
00:01:41.040
perspective on the Cold War era. I've been reading a lot about the Stalin and then Gorbachev and
link |
00:01:47.600
Putin eras of Russia, but this conversation made me realize that I need to do a deep dive into the
link |
00:01:52.320
Cold War era to get a complete picture of Russia's recent history. Again, go to jordanharbinger.com
link |
00:01:59.840
slash lex. Subscribe to his podcast. It's how he knows I sent you. It's awesome. You won't regret it.
link |
00:02:05.680
This episode is also supported by Magic Spoon, low carb, keto friendly, super amazingly delicious
link |
00:02:14.240
cereal. I've been on a keto or very low carb diet for a long time now. It helps with my
link |
00:02:20.000
mental performance. It helps with my physical performance, even doing this crazy push up
link |
00:02:25.200
pull up challenge I'm doing, including the running. It just feels great. I used to love cereal.
link |
00:02:31.280
Obviously I can't have it now because most cereals have a crazy amount of sugar,
link |
00:02:36.640
which is terrible for you. So I quit it years ago, but Magic Spoon amazingly somehow is a
link |
00:02:44.720
totally different thing. Zero sugar, 11 grams of protein and only three net grams of carbs.
link |
00:02:50.800
It tastes delicious. It has a lot of flavors, two new ones, including peanut butter. But if you
link |
00:02:56.880
know what's good for you, you'll go with cocoa, my favorite flavor and the flavor of champions.
link |
00:03:04.080
Click the magicspoon.com slash lex link in the description and use code lex at checkout for
link |
00:03:09.760
free shipping and to let them know I sent you. They have agreed to sponsor this podcast for a long
link |
00:03:15.200
time. They're an amazing sponsor and an even better cereal. I highly recommend it. It's delicious.
link |
00:03:22.320
It's good for you. You won't regret it. And now here's my conversation with Matt
link |
00:03:27.520
Botvinick. How much of the human brain do you think we understand?
link |
00:03:33.280
I think we're at a weird moment in the history of neuroscience in the sense that
link |
00:03:45.040
I feel like we understand a lot about the brain at a very high level,
link |
00:03:48.160
but a very coarse level.
link |
00:03:51.840
When you say high level, what are you thinking? Are you thinking functional? Are you thinking
link |
00:03:55.920
structurally? In other words, what is the brain for? What kinds of computation does the brain do?
link |
00:04:04.960
What kinds of behaviors would we have to explain if we were going to look down at the
link |
00:04:13.680
mechanistic level? At that level, I feel like we understand much, much more about the brain
link |
00:04:19.520
than we did when I was in high school. But it's almost like we're seeing it through a fog. It's
link |
00:04:25.280
only at a very coarse level. We don't really understand what the neuronal mechanisms are
link |
00:04:30.080
that underlie these computations. We've gotten better at saying, what are the functions that
link |
00:04:35.680
the brain is computing that we would have to understand if we were going to get down to the
link |
00:04:39.360
neuronal level? At the other end of the spectrum, in the last few years, incredible progress has been
link |
00:04:49.120
made in terms of technologies that allow us to see, actually literally see in some cases,
link |
00:04:57.120
what's going on at the single unit level, even the dendritic level, and then there's this
link |
00:05:03.680
yawning gap in between. That's interesting. At the high level, so that's almost a cognitive
link |
00:05:08.240
science level. And then at the neuronal level, that's neurobiology and neuroscience just studying
link |
00:05:15.120
single neurons, the synaptic connections, and all the neurotransmitters like dopamine.
link |
00:05:21.360
One blanket statement I should probably make is that as I've gotten older, I have become more and
link |
00:05:29.200
more reluctant to make a distinction between psychology and neuroscience. To me, the point
link |
00:05:35.440
of neuroscience is to study what the brain is for. If you're a nephrologist and you want to
link |
00:05:45.120
learn about the kidney, you start by saying, what is this thing for? Well,
link |
00:05:50.960
it seems to be for taking blood on one side that has metabolites in it that shouldn't be there,
link |
00:05:59.760
sucking them out of the blood while leaving the good stuff behind, and then excreting that in
link |
00:06:06.240
the form of urine. That's what the kidney is for. It's like obvious. So the rest of the work is
link |
00:06:11.280
deciding how it does that. And this, it seems to me, is the right approach to take to the brain.
link |
00:06:16.960
You say, well, what is the brain for? The brain, as far as I can tell, is for producing behavior.
link |
00:06:22.640
It's for going from perceptual inputs to behavioral outputs. And the behavioral outputs
link |
00:06:28.800
should be adaptive. So that's what psychology is about. It's about understanding the structure
link |
00:06:34.720
of that function. And then the rest of neuroscience is about figuring out how those operations are
link |
00:06:40.720
actually carried out at a mechanistic level. That's really interesting. But so unlike the kidney,
link |
00:06:47.920
the brain, the gap between the electrical signal and behavior, you truly see neuroscience as the
link |
00:06:57.520
science that touches behavior, how the brain generates behavior, or how the brain converts
link |
00:07:05.680
raw visual information into understanding. You basically see cognitive science, psychology,
link |
00:07:13.360
and neuroscience as all one science. It's a personal statement. Is that a hopeful or
link |
00:07:21.680
realistic statement? So certainly you will be correct in your feeling in some number of years,
link |
00:07:28.160
but that number of years could be 200, 300 years from now. Oh, well...
link |
00:07:33.280
Is that aspirational, or is that a pragmatic engineering feeling that you have?
link |
00:07:38.320
It's both in the sense that this is what I hope and expect will bear fruit over the coming decades.
link |
00:07:53.200
But it's also pragmatic in the sense that I'm not sure what we're doing in either psychology
link |
00:08:01.440
or neuroscience, if that's not the framing. I don't know what it means to understand the brain
link |
00:08:09.680
if part of the enterprise is not about understanding the behavior that's being produced.
link |
00:08:19.920
I mean, yeah, but I would compare it to maybe astronomers looking at the movement of the
link |
00:08:26.640
planets and the stars and without any interest of the underlying physics, right? And I would argue
link |
00:08:33.680
that at least in the early days, there's some value to just tracing the movement of the planets
link |
00:08:39.120
and the stars without thinking about the physics too much, because it's such a big leap to start
link |
00:08:44.320
thinking about the physics before you even understand even the basic structural elements of...
link |
00:08:49.360
Oh, I agree with that. I agree. But you're saying in the end, the goal should be to deeply understand.
link |
00:08:54.720
Well, right. And I think... So I thought about this a lot when I was in grad school, because a lot
link |
00:08:59.440
of what I studied in grad school was psychology. And I found myself a little bit confused about
link |
00:09:06.880
what it meant to... It seems like what we were talking about a lot of the time were
link |
00:09:12.480
virtual causal mechanisms. Like, oh, well, you know, attentional selection then selects some
link |
00:09:20.080
object in the environment, and that is then passed on to the motor... Information about that is passed
link |
00:09:26.080
on to the motor system. But these are virtual mechanisms. They're metaphors. There's no reduction
link |
00:09:34.080
to... There's no reduction going on in that conversation to some physical mechanism that...
link |
00:09:41.280
Which is really what it would take to fully understand how behavior arises. But the
link |
00:09:47.040
causal mechanisms are definitely neurons interacting. I'm willing to say that at this
link |
00:09:51.360
point in history. So in psychology, at least for me personally, there was this strange insecurity
link |
00:09:58.240
about trafficking in these metaphors, which were supposed to explain the function of the mind.
link |
00:10:06.640
If you can't ground them in physical mechanisms, then what is the explanatory validity of these
link |
00:10:14.640
explanations? And I managed to soothe my own nerves by thinking about the history of
link |
00:10:27.360
genetics research. So I'm very far from being an expert on the history of this field. But I know
link |
00:10:34.560
enough to say that Mendelian genetics preceded Watson and Crick. And so there was a significant
link |
00:10:43.840
period of time during which people were productively investigating the structure of inheritance
link |
00:10:54.080
using what was essentially a metaphor, the notion of a gene. And genes do this and genes do that.
link |
00:11:00.000
But what were the genes? They were sort of an explanatory thing that we made up. And we ascribed to them
link |
00:11:07.040
these causal properties. So there's dominant, there's recessive, and then they recombine.
link |
00:11:11.520
And then later, there was a kind of blank there that was filled in with a physical mechanism.
link |
00:11:21.440
That connection was made. But it was worth having that metaphor, because that gave us
link |
00:11:28.640
a good sense of what kind of causal mechanism we were looking for.
link |
00:11:33.520
Right. And the fundamental metaphor of cognition, you said, is the interaction of neurons.
link |
00:11:40.080
Is that what the metaphor is? No, no, the metaphors we use in cognitive psychology are
link |
00:11:50.240
things like attention, the way that memory works. I retrieve something from memory.
link |
00:11:59.280
A memory retrieval occurs. What is that? That's not a physical mechanism that I can examine in
link |
00:12:08.000
its own right. But it's still worth having that metaphorical level.
link |
00:12:13.680
Yeah, I misunderstood, actually. So the higher-level abstraction is the metaphor that's most
link |
00:12:18.800
useful. But what about, how does that connect to the idea that that arises from interaction of
link |
00:12:31.760
neurons? Is the interaction of neurons also not a metaphor to you? Or is it literally,
link |
00:12:40.880
that's no longer a metaphor. That's already the lowest level of abstraction that could actually
link |
00:12:46.800
be directly studied. Well, I'm hesitating because I think what I want to say could end up being
link |
00:12:55.840
controversial. So what I want to say is, yes, the interactions of neurons, that's not
link |
00:13:02.240
metaphorical. That's a physical fact. That's where the causal interactions actually occur.
link |
00:13:08.320
Now, I suppose you could say, well, even that is metaphorical relative to the quantum
link |
00:13:13.440
events that underlie it. But I don't want to go down that rabbit hole.
link |
00:13:17.040
It's always turtles on top of turtles, all the way down.
link |
00:13:20.080
There is a reduction that you can do. You can say these psychological phenomena
link |
00:13:24.320
can be explained through a very different kind of causal mechanism, which has to do with
link |
00:13:29.600
neurotransmitter release. And so what we're really trying to do in neuroscience writ large,
link |
00:13:36.400
as I say, which for me includes psychology, is to take these psychological phenomena
link |
00:13:43.520
and map them onto neural events. I think remaining forever at the level of
link |
00:13:56.160
description that is natural for psychology, for me personally, would be disappointing.
link |
00:14:02.160
I want to understand how mental activity arises from neural activity. But the converse is also
link |
00:14:11.920
true. Studying neural activity without any sense of what you're trying to explain,
link |
00:14:19.680
to me, feels like at best groping around at random.
link |
00:14:27.200
Now, you've talked about this bridging of the gap between psychology and neuroscience,
link |
00:14:32.640
but do you think it's possible? I fell in love with psychology and psychiatry in
link |
00:14:38.960
general with Freud when I was really young and I hoped to understand the mind. And for me,
link |
00:14:44.160
understanding the mind at least at a young age before I discovered AI and even neuroscience was
link |
00:14:51.600
psychology. And do you think it's possible to understand the mind without getting into all
link |
00:14:56.800
the messy details of neuroscience? Like you kind of mentioned, to you, it's appealing to try to
link |
00:15:03.600
understand the mechanisms at the lowest level. But do you think that's needed? That's required
link |
00:15:08.320
to understand how the mind works? That's an important part of the whole picture. But I would
link |
00:15:16.560
be the last person on earth to suggest that that reality renders psychology in its own right
link |
00:15:28.480
unproductive. I trained as a psychologist. I am fond of saying that I have learned much more
link |
00:15:34.880
from psychology than I have from neuroscience. To me, psychology is a hugely important discipline.
link |
00:15:43.680
And one thing that warms my heart is that
link |
00:15:50.400
ways of investigating behavior that have been native to cognitive psychology since its
link |
00:15:58.400
dawn in the 60s are starting to become interesting to AI researchers for a variety of reasons.
link |
00:16:09.280
And that's been exciting for me to see.
link |
00:16:11.440
Can you maybe talk a little bit about what you see as beautiful aspects of psychology,
link |
00:16:19.120
maybe limiting aspects of psychology? I mean, maybe just how it started off as a science, as a field.
link |
00:16:24.960
To me, when I understood what psychology is, analytical psychology, like the way it's
link |
00:16:31.440
actually carried out, it was really disappointing to see two aspects. One is how small the N is,
link |
00:16:39.040
how small the number of subjects is in the studies. And two, it was disappointing to see
link |
00:16:45.200
how controlled it all was, how much it was in the lab. It wasn't studying humans in the wild.
link |
00:16:52.480
There was no mechanism for studying humans in the wild. So that's where I became a little bit
link |
00:16:56.320
disillusioned with psychology. And then the modern world of the internet is so exciting to me,
link |
00:17:02.880
the Twitter data or YouTube data, like data of human behavior on the internet becomes exciting
link |
00:17:08.240
because the N grows and the "in the wild" grows. But that's just my narrow sense. Do you have
link |
00:17:14.320
an optimistic or a pessimistic, cynical view of psychology? How do you see the field broadly?
link |
00:17:19.600
When I was in graduate school, it was early enough that there was still a thrill in seeing
link |
00:17:29.200
that there were ways of doing experimental science that provided insight to the structure of the
link |
00:17:39.040
mind. One thing that impressed me most when I was at that stage in my education was
link |
00:17:45.040
neuropsychology, analyzing the behavior of populations who had brain damage of different
link |
00:17:54.560
kinds and trying to understand what the specific deficits were that arose from
link |
00:18:04.480
a lesion in a particular part of the brain. And the kind of experimentation that was done and
link |
00:18:08.880
that's still being done to get answers in that context was so creative and it was so deliberate.
link |
00:18:19.280
It was good science. An experiment answered one question but raised another and somebody
link |
00:18:24.640
would do an experiment that answered that question and you really felt like you were narrowing in on
link |
00:18:29.360
some kind of approximate understanding of what this part of the brain was for.
link |
00:18:34.000
Do you have an example from memory of what kind of aspects of the mind could be studied in this
link |
00:18:40.400
kind of way? Oh, sure. I mean, the very detailed neuropsychological studies of language
link |
00:18:48.480
function, looking at production and reception and the relationship between visual function,
link |
00:18:56.160
you know, reading, and auditory and semantic function. There still are these beautiful models that came
link |
00:19:04.080
out of that kind of research that really made you feel like you understood something that you
link |
00:19:08.560
hadn't understood before about how language processing is organized in the brain. But having
link |
00:19:16.160
said all that, you know, I agree with you that the cost of doing
link |
00:19:28.560
highly controlled experiments is that you, by construction, miss out on the richness and complexity
link |
00:19:37.440
of the real world. So, I was drawn into science by what in those days was
link |
00:19:43.920
called connectionism, which is, of course, what we now call deep learning. And at that point in
link |
00:19:49.680
history, neural networks were primarily being used in order to model human cognition. They
link |
00:19:56.400
weren't yet really useful for industrial applications. So you always found neural
link |
00:20:01.280
networks in biological form beautiful? Oh, neural networks were very concretely the thing that drew
link |
00:20:07.680
me into science. I was handed, are you familiar with the PDP books from the 80s? I went to
link |
00:20:15.920
medical school before I went into science. Really? Interesting. Wow. I also did a graduate
link |
00:20:22.880
degree in art history, so I'm kind of exploring. Well, art history, I understand. That's just a
link |
00:20:28.960
curious, creative mind. But medical school, with the dream of what, if we could take that
link |
00:20:34.720
slight tangent, what did you want to be? A surgeon? I actually was quite interested in surgery. I
link |
00:20:41.680
was interested in surgery and psychiatry, and I thought that I must be the only person on the
link |
00:20:48.960
planet who was torn between those two fields. And I said exactly that to my advisor in medical
link |
00:20:56.160
school who turned out, I found out later to be a famous psychoanalyst. And he said to me,
link |
00:21:03.120
no, no, it's actually not so uncommon to be interested in surgery and psychiatry.
link |
00:21:07.600
And he conjectured that the reason that people develop these two interests is that
link |
00:21:13.120
both fields are about going beneath the surface and kind of getting into the kind of secret.
link |
00:21:18.960
I mean, maybe you understand this as someone who was interested in psychoanalysis.
link |
00:21:24.960
There's a cliche phrase that people use now on, like, NPR: the secret life of blankity blank.
link |
00:21:30.480
Right? And part of the thrill of surgery was seeing the secret activity that's
link |
00:21:38.240
inside everybody's abdomen and thorax. That's a very poetic way to connect two
link |
00:21:43.200
disciplines that are, practically speaking, very different from each other.
link |
00:21:46.400
That's for sure. That's for sure. Yes. So how did we get onto medical school?
link |
00:21:52.240
So I was in medical school and I was doing a psychiatry rotation and my kind of
link |
00:21:58.720
advisor in that rotation asked me what I was interested in. And I said, well, maybe psychiatry.
link |
00:22:07.680
He said, why? And I said, well, I've always been interested in how the brain works.
link |
00:22:12.880
I'm pretty sure that nobody's doing scientific research that addresses my interests, which are,
link |
00:22:20.400
I didn't have a word for it then, but I would have said cognition.
link |
00:22:24.080
And he said, well, you know, I'm not sure that's true. You might be interested in these books.
link |
00:22:29.440
And he pulled down the PDP books from his shelf, and they were still shrink-wrapped.
link |
00:22:33.840
He hadn't read them, but he handed them to me. He said, you feel free to borrow these.
link |
00:22:38.720
And that was, you know, I went back to my dorm room and I just, you know, read them cover to cover.
link |
00:22:43.280
And what's PDP? Parallel distributed processing, which was one of the original names for deep
link |
00:22:49.520
learning. And so I apologize for the romanticized question, but what idea in the space of
link |
00:22:57.520
neuroscience, in the space of the human brain, is to you the most beautiful and mysterious,
link |
00:23:02.640
surprising? What had always fascinated me, even when I was a pretty young kid, I think,
link |
00:23:11.040
was the paradox that lies in the fact that the brain is so mysterious. And so it seems so distant.
link |
00:23:30.640
But at the same time, it's responsible for the full transparency of everyday life.
link |
00:23:38.400
The brain is literally what makes everything obvious and familiar.
link |
00:23:45.040
And there's always one in the room with you. When I taught at Princeton,
link |
00:23:50.400
I used to teach a cognitive neuroscience course. And the very last thing I would say to the students
link |
00:23:55.520
was, you know, when people think of scientific inspiration,
link |
00:24:03.440
the metaphor is often, well, look to the stars, you know, the stars will inspire you to
link |
00:24:10.160
wonder at the universe and, you know, think about your place in it and how things work.
link |
00:24:16.640
And I'm all for looking at the stars, but I've always been much more inspired, and kind of my
link |
00:24:22.320
sense of wonder comes, not from the distant, mysterious stars, but from the extremely
link |
00:24:30.240
intimately close brain. Yeah. There's something just endlessly fascinating to me about that.
link |
00:24:39.920
Like you just said, it's close and yet distant in terms of our
link |
00:24:46.800
understanding of it. Are you also captivated by the fact that this very
link |
00:24:54.720
conversation is happening because two brains are communicating? Yes. Exactly. I guess what I
link |
00:25:01.760
mean is the subjective nature of the experience. If we can take a small tangent into the
link |
00:25:07.360
mystical of it, the consciousness. When you're saying you're captivated by the idea
link |
00:25:13.760
of the brain, are you talking specifically about the mechanism of cognition? Or
link |
00:25:18.400
are you also, like... At least for me, it's almost paralyzing, the beauty and the mystery
link |
00:25:26.480
of the fact that it creates the entirety of the experience, not just the reasoning capability,
link |
00:25:31.600
but the experience. Well, I definitely resonate with that latter thought. And I often find
link |
00:25:41.840
discussions of artificial intelligence to be disappointingly narrow. You know, speaking
link |
00:25:51.040
as someone who has always had an interest in art. Right. I was just going to go there
link |
00:25:57.200
because it sounds like somebody who has an interest in art. Yeah. I mean, there
link |
00:26:02.160
are many layers to, you know, full-bore human experience. And in some ways,
link |
00:26:10.960
it's not enough to say, oh, well, don't worry, you know, we're talking about cognition,
link |
00:26:14.800
but we'll add emotion. You know, there's an incredible scope to
link |
00:26:22.480
what humans go through in every moment. And yes, that's part of what
link |
00:26:32.560
fascinates me, that our brains are producing that. But at the same time,
link |
00:26:41.280
it's so mysterious to us how. Our brains are literally in our heads, producing
link |
00:26:49.680
this experience, and yet it's so mysterious to us. And the scientific
link |
00:26:56.480
challenge of getting at the actual explanation for that is so overwhelming. I don't
link |
00:27:04.000
know, certain people have fixations on particular questions. And that's
link |
00:27:10.080
just always been mine. Yeah, the poetry of that is fascinating. And I'm really
link |
00:27:15.040
interested in natural language as well. And when you look at the artificial intelligence community,
link |
00:27:19.360
it always saddens me, when you try to create a benchmark for the community to come together
link |
00:27:27.200
around, how much of the magic of language is lost when you create that benchmark. There's
link |
00:27:33.520
something, we talk about experience, the music of the language, the wit, the something
link |
00:27:39.360
that makes a rich experience, something that would be required to pass the spirit of the
link |
00:27:44.800
Turing test, that is lost in these benchmarks. And I wonder how to get it back in, because it's very
link |
00:27:50.800
difficult. The moment you try to do like real good rigorous science, you lose some of that magic.
link |
00:27:56.800
When you try to study cognition in a rigorous scientific way, it feels like you're losing
link |
00:28:02.560
some of the magic, seeing cognition in a mechanistic way, as AI folks do at this stage in
link |
00:28:08.880
our history. Okay, I agree with you. But at the same time, one thing that I found
link |
00:28:17.040
really exciting about that first wave of deep learning models in cognition was
link |
00:28:25.840
the fact that the people who were building these models were focused on the
link |
00:28:31.600
richness and complexity of human cognition. So an early debate in cognitive science,
link |
00:28:40.000
which I sort of witnessed as a grad student was about something that sounds very dry,
link |
00:28:44.160
which is the formation of the past tense. But there were these two camps. One said, well,
link |
00:28:50.640
the mind encodes certain rules. And it also has a list of exceptions. Because of course,
link |
00:28:58.560
you know, the rule is add -ED, but that's not always what you do. So you have to have a list of
link |
00:29:02.720
exceptions. And then there were the connectionists, who, you know, evolved into the deep learning
link |
00:29:09.200
people, who said, well, you know, if you actually look carefully at the data, if you
link |
00:29:14.880
look at corpora, like language corpora, it turns out to be very rich. Because, yes,
link |
00:29:21.680
for most verbs, you know, you just tack on -ED. And then
link |
00:29:27.840
there are exceptions, but there are also rules, you know;
link |
00:29:32.800
the exceptions aren't just random. There are certain clues to
link |
00:29:38.880
which verbs should be exceptional. And then there are exceptions to the exceptions. And
link |
00:29:44.240
there was a word that was kind of deployed in order to capture this, which was quasi-regular.
link |
00:29:51.040
In other words, there are rules, but it's messy. And there's structure even
link |
00:29:56.720
among the exceptions. And, yeah, you could try to write
link |
00:30:01.920
down the structure in some sort of closed form, but really, the right way to understand how the
link |
00:30:07.680
brain is handling all this, and by the way, producing all of this is to build a deep neural
link |
00:30:13.360
network and train it on this data and see how it ends up representing all of this richness. So
link |
00:30:18.560
the way that deep learning was deployed in cognitive psychology, that was the spirit of it.
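To make that concrete, here is a hedged toy sketch of the quasi-regular idea: a tiny network trained on verb examples picks up the dominant add-ED pattern and the structured exceptions from data rather than from an explicit rule list. The dataset, encoding, and architecture are all illustrative, not the original 1980s connectionist model.

```python
# Toy sketch of the quasi-regular past-tense idea: instead of a rule plus an
# exception list, a small network learns the dominant add-"ed" pattern and
# the structured exceptions from examples. Everything here is illustrative.
import torch
import torch.nn as nn

verbs = [("walk", "walked"), ("jump", "jumped"), ("play", "played"),
         ("sing", "sang"), ("ring", "rang"), ("sleep", "slept"),
         ("keep", "kept"), ("go", "went")]
# Transformation classes: 0 = regular (+ed), 1 = i->a vowel change,
# 2 = -eep -> -ept, 3 = suppletive (go/went).
labels = torch.tensor([0, 0, 0, 1, 1, 2, 2, 3])

chars = sorted({c for verb, _ in verbs for c in verb})
char_to_i = {c: i for i, c in enumerate(chars)}

def encode(verb, max_len=6):
    """Position-by-character one-hot encoding, padded; unknown chars skipped."""
    x = torch.zeros(max_len * len(chars))
    for pos, c in enumerate(verb[:max_len]):
        if c in char_to_i:
            x[pos * len(chars) + char_to_i[c]] = 1.0
    return x

X = torch.stack([encode(verb) for verb, _ in verbs])
model = nn.Sequential(nn.Linear(X.shape[1], 32), nn.Tanh(), nn.Linear(32, 4))
opt = torch.optim.Adam(model.parameters(), lr=0.01)

for step in range(500):  # plain backprop on the tiny corpus
    opt.zero_grad()
    loss = nn.functional.cross_entropy(model(X), labels)
    loss.backward()
    opt.step()

# The hidden layer now encodes phonological cues ("-ing", "-eep") that mark
# exceptional verbs -- structure among the exceptions, learned, not listed.
print(model(encode("cling")).argmax().item())  # plausibly class 1, by analogy
```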
link |
00:30:24.240
It was about that richness. And that's something that I always found very, very compelling, still
link |
00:30:30.720
do. Is there something especially interesting and profound to you in terms of our
link |
00:30:36.480
current deep learning, artificial neural network approaches, and whatever
link |
00:30:43.760
we do understand about the biological neural networks in our brain? There are
link |
00:30:49.120
quite a few differences. Are some of them to you either interesting or perhaps
link |
00:30:55.680
profound in terms of the gap we might want to try to close in trying to create a human
link |
00:31:04.320
level intelligence?
link |
00:31:06.160
What I would say here is something that a lot of people are saying, which is that
link |
00:31:10.240
one seeming limitation of the systems that we're building now is that they lack the kind of
link |
00:31:19.520
flexibility, the readiness to sort of turn on a dime when the context calls for it. That is so
link |
00:31:28.880
characteristic of human behavior. So which aspect of the
link |
00:31:35.120
neural networks in our brain is that connected to? Is that closer to the cognitive science level?
link |
00:31:44.080
Now again, see, my natural inclination is to separate it into the three disciplines of
link |
00:31:49.840
neuroscience, cognitive science and psychology. And you've already kind of shut that down by
link |
00:31:55.600
saying you kind of see them as one. But just to look at those layers, I guess, where
link |
00:32:01.840
is there something about the lowest layer of the way the neurons interact that is profound to you
link |
00:32:10.320
in terms of its difference from artificial neural networks? Or are all the key differences
link |
00:32:15.760
at a higher level of abstraction?
link |
00:32:19.200
One thing I often think about is that if you take an introductory computer science course
link |
00:32:25.760
and they are introducing you to the notion of Turing machines, one way of articulating
link |
00:32:35.760
what the significance of a Turing machine is, is that it's a machine emulator. It can emulate any
link |
00:32:43.520
other machine. And that to me, that way of looking at a Turing machine really sticks with me. I
link |
00:32:56.880
think of humans as maybe sharing in some of that character. We're capacity limited. We're not Turing
link |
00:33:05.920
machines, obviously, but we have the ability to adapt behaviors that are very much unlike anything
link |
00:33:14.080
we've done before. But there's some basic mechanism that's implemented in our brain that allows us to
link |
00:33:19.360
run software.
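As a rough illustration of the machine-emulator point: a single fixed interpreter loop can run any transition table handed to it, and the table, not the interpreter, is the "software." The toy program here is illustrative.

```python
# A Turing machine as "machine emulator": one fixed interpreter loop that
# runs whatever transition table you hand it -- the table is the program.
def run_tm(program, tape, state="start", steps=100):
    tape = dict(enumerate(tape))  # sparse tape; "_" stands for blank cells
    head = 0
    for _ in range(steps):
        if state == "halt":
            break
        symbol = tape.get(head, "_")
        # The "software": what to write, where to move, which state is next.
        write, move, state = program[(state, symbol)]
        tape[head] = write
        head += 1 if move == "R" else -1
    return "".join(tape[i] for i in sorted(tape))

# Toy program: scan right over 1s, append one more 1, then halt.
increment = {
    ("start", "1"): ("1", "R", "start"),
    ("start", "_"): ("1", "R", "halt"),
}
print(run_tm(increment, "111"))  # -> "1111"
```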
link |
00:33:21.360
But just on that point, you mentioned a Turing machine, but nevertheless, it's fundamentally
link |
00:33:26.240
our brains are just computational devices in your view. Is that what you're getting at?
link |
00:33:30.000
Like, it was a little bit unclear, this line you drew. Is there any magic in there,
link |
00:33:37.600
or is it just basic computation?
link |
00:33:40.480
I'm happy to think of it as just basic computation. But mind you, I won't be satisfied until somebody
link |
00:33:46.720
explains to me what the basic computations are that are leading to the full richness of human
link |
00:33:52.800
cognition. It's not going to be enough for me to understand what the computations are that allow
link |
00:33:59.120
people to do arithmetic or play chess. I want the whole thing.
link |
00:34:06.320
And a small tangent because you kind of mentioned coronavirus, there's group behavior.
link |
00:34:13.360
Is there something interesting in your search of understanding the human mind
link |
00:34:18.640
where behavior of large groups or just behavior of groups is interesting? Seeing that as a
link |
00:34:24.800
collective mind, as a collective intelligence, perhaps seeing the groups of people as a single
link |
00:34:29.840
intelligent organisms, especially looking at the reinforcement learning work you've done recently.
link |
00:34:35.520
Well, yeah, I have the honor of working with a lot of incredibly smart people,
link |
00:34:43.600
and I wouldn't want to take any credit for leading the way on the multiagent work that's
link |
00:34:48.880
come out of my group or DeepMind lately. But I do find it fascinating. And I think
link |
00:34:59.680
it can't be debated. Human behavior arises within communities. That just seems to me
link |
00:35:07.440
self-evident. But to me, it is self-evident, but that seems to be a profound aspect of
link |
00:35:14.800
something that created. That was like, if you look at 2001: A Space Odyssey, when the monkeys
link |
00:35:20.080
touched the... Yeah. That's the magical moment. I think Yuval Noah Harari argues that the ability of
link |
00:35:28.160
large numbers of humans to hold an idea, to converge towards an idea together, like he said,
link |
00:35:32.240
shaking hands versus bumping elbows, somehow converging without all being in a room together,
link |
00:35:40.800
just kind of this distributed convergence towards an idea over a particular period of time seems
link |
00:35:46.880
to be fundamental to just every aspect of our cognition, of our intelligence. Because humans,
link |
00:35:54.880
we'll talk about reward, but it seems like we don't really have a clear objective function under
link |
00:36:00.160
which we operate, but we all kind of converge towards one somehow. And that, to me, has always
link |
00:36:05.600
been a mystery that I think is somehow productive for also understanding AI systems.
link |
00:36:13.520
But I guess that's the next step. The first step is to try to understand the mind.
link |
00:36:18.640
Well, I don't know. I mean, I think there's something to the argument that
link |
00:36:24.240
that kind of, like, strictly bottom-up approach is wrongheaded. In other words,
link |
00:36:30.400
there are basic aspects of human intelligence that
link |
00:36:38.240
can only be understood in the context of groups. I'm perfectly open to that. I've never been
link |
00:36:45.280
particularly convinced by the notion that we should consider intelligence to inhere
link |
00:36:53.920
at the level of communities. I don't know why. I'm sort of stuck on the notion that the basic
link |
00:36:59.520
unit that we want to understand is individual humans. And if we have to understand that in
link |
00:37:05.840
the context of other humans, fine. But for me, intelligence is just... I stubbornly define it
link |
00:37:13.520
as something that is an aspect of an individual human. That's just my...
link |
00:37:19.440
I'm with you, but that could be the reductionist dream of a scientist because you can understand
link |
00:37:24.640
a single human. It also is very possible that intelligence can only arise when there's multiple
link |
00:37:31.600
intelligences. When there's multiple... It's a sad thing if that's true because it's very difficult
link |
00:37:38.960
to study. But if it's just one human, that one human would not be... Homo sapiens would not
link |
00:37:45.120
become that intelligent. That's a possibility. I'm with you. One thing I will say along these lines
link |
00:37:51.920
is that I think a serious effort to understand human intelligence,
link |
00:38:05.440
and maybe to build a human like intelligence, needs to pay just as much attention to the
link |
00:38:11.920
structure of the environment as to the structure of the cognizing system, whether it's a brain
link |
00:38:20.720
or an AI system. That's one thing I took away actually from my early studies with the pioneers
link |
00:38:27.840
of neural network research, people like Jay McClelland and John Cohen. The structure of
link |
00:38:36.080
cognition is really... It's only partly a function of the architecture of the brain
link |
00:38:44.400
and the learning algorithms that it implements. What really shapes it is the
link |
00:38:50.240
interaction of those things with the structure of the world in which those things are embedded,
link |
00:38:56.160
right? And that's especially important for... That's made most clear in reinforcement learning
link |
00:39:00.800
where the simulated environment is... You can only learn as much as you can simulate,
link |
00:39:05.680
and that's what DeepMind made very clear with the other aspect of the environment,
link |
00:39:10.960
which is the self-play mechanism, the other agent, the competitive behavior, where
link |
00:39:17.280
the other agent becomes the environment, essentially. And that's one of the most exciting
link |
00:39:22.480
ideas in AI, the self-play mechanism that's able to learn successfully. So there you go.
link |
00:39:28.480
There's a thing where competition is essential for learning, at least in that context.
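A minimal sketch of such a self-play loop, with toy stand-ins for the agent, the game, and the learning update (none of these names come from any actual system):

```python
# Minimal self-play loop: the "environment" for the learner is a frozen
# snapshot of its own past self, refreshed periodically so the opposition
# improves as the learner does. Agent, play_match, and the update rule are
# toy stand-ins, not any real implementation.
import copy
import random

class Agent:
    def __init__(self):
        self.skill = 0.0  # stand-in for actual policy parameters

    def act(self):
        return random.random() + self.skill  # higher skill, better moves

def play_match(a, b):
    """Toy game: +1 if agent a outplays b this round, else -1."""
    return 1 if a.act() > b.act() else -1

learner = Agent()
opponent = copy.deepcopy(learner)  # frozen copy: the "other agent"

for episode in range(1000):
    outcome = play_match(learner, opponent)
    learner.skill += 0.001 * outcome   # stand-in for a real RL update
    if episode % 100 == 99:            # refresh the curriculum
        opponent = copy.deepcopy(learner)
```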
link |
00:39:34.880
So if we can step back into another sort of beautiful world, which is the actual mechanics,
link |
00:39:41.040
the dirty mess of the human brain. For people who might not know,
link |
00:39:49.360
is there something you can comment on, or can you describe, the key parts of the brain that are
link |
00:39:54.080
important for intelligence or just in general, what are the different parts of the brain that
link |
00:39:58.640
you're curious about that you've studied and that are just good to know about when you're
link |
00:40:04.000
thinking about cognition? Well, my area of expertise, if I have one, is prefrontal cortex.
link |
00:40:14.080
So... What's that? Where do we... It depends on who you ask. The technical definition is anatomical.
link |
00:40:25.680
There are parts of your brain that are responsible for motor behavior, and they're
link |
00:40:32.960
very easy to identify. And the region of your cerebral cortex, the sort of outer crust of
link |
00:40:43.040
your brain that lies in front of those is defined as the prefrontal cortex.
link |
00:40:49.200
And when you say anatomical, sorry to interrupt. So that's referring to sort of the geographic
link |
00:40:56.080
region as opposed to some kind of functional definition.
link |
00:41:00.080
Exactly. So this is kind of the coward's way out. I'm telling you what the prefrontal cortex is
link |
00:41:05.920
just in terms of what part of the real estate it occupies.
link |
00:41:09.520
The thing in the front of the brain.
link |
00:41:10.800
Yeah, exactly. And in fact, the early history of the neuroscientific
link |
00:41:20.000
investigation of what this front part of the brain does is sort of funny to read because
link |
00:41:26.080
it was really World War I that started people down this road of trying to figure out what
link |
00:41:35.600
different parts of the human brain do, in the sense that there were a lot of people
link |
00:41:41.520
who came back from the war with brain damage. And, as tragic as that
link |
00:41:46.880
was, it provided an opportunity for scientists to try to identify the functions of different brain
link |
00:41:52.800
regions. And that was actually incredibly productive. But one of the frustrations that
link |
00:41:58.160
neuropsychologists faced was they couldn't really identify exactly what the deficit was
link |
00:42:03.760
that arose from damage to these most kind of frontal parts of the brain. It was just a very
link |
00:42:09.040
difficult thing to pin down. There were a couple of neuropsychologists who,
link |
00:42:17.920
through a large amount of clinical experience and close observation, started to
link |
00:42:24.480
put their finger on a syndrome that was associated with frontal damage. Actually,
link |
00:42:27.760
one of them was a Russian neuropsychologist named Luria, who students of cognitive
link |
00:42:33.600
psychology still read. And what he started to figure out was that the frontal cortex was
link |
00:42:42.400
somehow involved in flexibility, in guiding behaviors that required someone to override a
link |
00:42:53.360
habit or to do something unusual or to change what they were doing in a very flexible way from
link |
00:43:00.960
one moment to another. So, focused on, like, new experiences, the way your brain processes
link |
00:43:08.560
and acts in new experiences. Yeah. What later helped bring this function into
link |
00:43:15.760
better focus was a distinction between controlled and automatic behavior or in other
link |
00:43:21.760
literatures this is referred to as habitual behavior versus goal directed behavior.
link |
00:43:28.160
So it's very, very clear that the human brain has pathways that are dedicated to habits,
link |
00:43:36.400
to things that you do all the time and they need to be automatized so that they don't
link |
00:43:43.280
require you to concentrate too much. So that leaves your cognitive capacity free to do other
link |
00:43:48.400
things. Just think about the difference between driving when you're learning to drive versus
link |
00:43:56.240
driving after you're fairly expert. There are brain pathways that slowly absorb those
link |
00:44:04.240
frequently performed behaviors so that they can be habits, so that they can be automatic.
link |
00:44:12.160
That's kind of like the purest form of learning. I guess it's happening there,
link |
00:44:16.480
which is why, I mean, this is kind of jumping ahead, which is why that perhaps is the most
link |
00:44:21.680
useful thing for us to focus on in trying to see how artificial intelligence systems can learn.
link |
00:44:27.120
Is that the way? It's interesting. I do think about this distinction between controlled and
link |
00:44:30.800
automatic or goal directed and habitual behavior a lot in thinking about where we are in AI research.
link |
00:44:42.800
But just to finish the kind of dissertation here, the role of the prefrontal cortex
link |
00:44:51.280
is generally understood these days sort of in contradistinction to that habitual domain.
link |
00:45:00.240
In other words, the prefrontal cortex is what helps you override those habits.
link |
00:45:06.160
It's what allows you to say, well, what I usually do in this situation is X,
link |
00:45:10.640
but given the context, I probably should do Y. I mean, the elbow bump is a great example.
link |
00:45:18.000
Reaching out and shaking hands is probably a habitual behavior, and it's the prefrontal cortex
link |
00:45:25.120
that allows us to bear in mind that there's something unusual going on right now. In this
link |
00:45:30.560
situation, I need to not do the usual thing. The kind of behaviors that Luria reported,
link |
00:45:38.480
and he built tests for detecting these kinds of things, were exactly like this. In other words,
link |
00:45:44.880
when I stick out my hand, I want you instead to present your elbow. A patient with frontal
link |
00:45:50.640
damage would have a great deal of trouble with that. Somebody proffering their hand would elicit
link |
00:45:56.800
a handshake. The prefrontal cortex is what allows us to say, hold on. That's the usual thing,
link |
00:46:03.760
but I have the ability to bear in mind even very unusual contexts and to reason about
link |
00:46:10.400
what behavior is appropriate there. Just to get a sense, are we humans special in the presence of
link |
00:46:18.240
the prefrontal cortex? Do mice have a prefrontal cortex? Do other mammals that we can study?
link |
00:46:26.400
If not, then how do they integrate new experiences?
link |
00:46:31.280
That's a really tricky question and a very timely question because we have
link |
00:46:38.160
revolutionary new technologies for monitoring, measuring, and also causally influencing neural
link |
00:46:50.880
behavior in mice and fruit flies. These techniques are not fully available even for studying
link |
00:47:01.760
brain function in monkeys, let alone humans. It's a very urgent question whether the kinds
link |
00:47:15.680
of things that we want to understand about human intelligence can be pursued in these
link |
00:47:20.400
other organisms. To put it briefly, there's disagreement. People who study fruit flies
link |
00:47:31.520
will often tell you, hey, fruit flies are smarter than you think. They'll point to experiments
link |
00:47:36.720
where fruit flies were able to learn new behaviors, were able to generalize from one stimulus to
link |
00:47:43.520
another in a way that suggests that they have abstractions that guide their generalization.
link |
00:47:48.880
I've had many conversations in which I will start by recounting some
link |
00:48:02.480
observation about mouse behavior, where it seemed like mice were taking an awfully long time to
link |
00:48:09.120
learn a task that for a human would be profoundly trivial. I will conclude from that that mice
link |
00:48:16.880
really don't have the cognitive flexibility that we want to explain, and then a mouse researcher
link |
00:48:21.040
will say to me, well, hold on. That experiment may not have worked because you asked a mouse to
link |
00:48:29.360
deal with stimuli and behaviors that were very unnatural for the mouse. If instead you
link |
00:48:36.400
kept the logic of the experiment the same, but presented the information in a way
link |
00:48:44.320
that aligns with what mice are used to dealing with in their natural habitats,
link |
00:48:48.400
you might find that a mouse actually has more intelligence than you think.
link |
00:48:52.400
And then they'll go on to show you videos of mice doing things in their natural habitat,
link |
00:48:57.280
which seem strikingly intelligent, dealing with physical problems. I have to drag this piece
link |
00:49:03.920
of food back to my lair, but there's something in my way, and how do I get rid of that thing?
link |
00:49:09.760
And so, to sum that up, I think these are open questions.
link |
00:49:14.880
And then taking a small step back related to that, as you kind of mentioned, we're taking
link |
00:49:20.320
that little shortcut by saying the prefrontal cortex is a geographic region of
link |
00:49:27.360
the brain. But what's your sense, in a bigger philosophical view, of the prefrontal cortex and the
link |
00:49:34.800
brain in general? Do you have a sense that it's a set of subsystems in the way we've kind of implied
link |
00:49:41.760
that are pretty distinct? Or to what degree is it that? Or to what degree is it a giant
link |
00:49:48.320
interconnected mess where everything kind of does everything and it's impossible to disentangle them?
link |
00:49:54.800
I think there's overwhelming evidence that there's functional differentiation, that it's
link |
00:50:00.880
clearly not the case that all parts of the brain are doing the same thing. This follows immediately
link |
00:50:09.040
from the kinds of studies of brain damage that we were chatting about before. It's obvious from
link |
00:50:17.360
what you see if you stick an electrode in the brain and measure what's going on at the level of
link |
00:50:22.400
neural activity. Having said that, there are two other things to add which kind of, I don't know,
link |
00:50:32.640
maybe tug in the other direction. One is that when you look carefully at functional differentiation
link |
00:50:41.120
in the brain, what you usually end up concluding, at least this is my observation of the literature,
link |
00:50:48.080
is that the differences between regions are graded rather than being discrete.
link |
00:50:55.200
So, it doesn't seem like it's easy to divide the brain up into true modules that have clear
link |
00:51:05.360
boundaries and that have clear channels of communication between them.
link |
00:51:15.360
And this applies to the prefrontal cortex. Yeah, the prefrontal cortex is made up of a
link |
00:51:20.240
bunch of different subregions, the functions of which are not clearly defined and the borders
link |
00:51:28.880
of which seem to be quite vague. Then there's another thing that's popping up in very recent
link |
00:51:35.200
research which involves application of these new techniques. There are a number of studies that
link |
00:51:46.720
suggest that parts of the brain that we would have previously thought were quite focused
link |
00:51:54.960
in their function are actually carrying signals that we wouldn't have thought would be there.
link |
00:52:01.200
For example, looking in the primary visual cortex, which is classically thought of as
link |
00:52:07.120
basically the first cortical way station for processing visual information, basically what
link |
00:52:11.360
it should care about is where are the edges in this scene that I'm viewing? It turns out that
link |
00:52:18.000
if you have enough data, you can recover information from primary visual cortex about all sorts of
link |
00:52:22.720
things like what behavior the animal is engaged in right now and how much reward is on offer
link |
00:52:29.200
in the task that it's pursuing. It's clear that even regions whose function is pretty well defined
link |
00:52:38.800
at a coarse grain are nonetheless carrying some information from very different
link |
00:52:46.160
domains. The history of neuroscience is this oscillation between the two views that you
link |
00:52:52.800
articulated, the modular view and then the big mush view. I guess we're going to end up somewhere
link |
00:53:01.760
in the middle, which is unfortunate for our understanding because there's something about
link |
00:53:07.600
our conceptual system that finds it's easy to think about a modularized system and easy to
link |
00:53:13.200
think about a completely undifferentiated system, but something that lies in between is confusing,
link |
00:53:18.880
but we're going to have to get used to it, I think. Unless we can understand deeply the lower
link |
00:53:23.680
level mechanism of neuronal communication and so on. On that topic, you mentioned information.
link |
00:53:29.520
Just to get a sense, I imagine something that there's still mystery and disagreement on
link |
00:53:34.560
is how does the brain carry information and signal? What in your sense is the basic
link |
00:53:41.200
mechanism of communication in the brain? Well, I guess I'm old fashioned in that
link |
00:53:50.000
I consider the networks that we use in deep learning research to be a reasonable approximation
link |
00:53:56.960
to the mechanisms that carry information in the brain. The usual way of articulating that is to
link |
00:54:05.360
say, what really matters is a rate code. What matters is how quickly is an individual neuron
link |
00:54:13.120
spiking? What's the frequency at which it's spiking? Is it the timing of the spiking?
link |
00:54:17.760
Yeah. Is it firing fast or slow? Let's put a number on that, and that number is enough to
link |
00:54:23.920
capture what neurons are doing. There's still uncertainty about whether that's an
link |
00:54:31.440
adequate description of how information is transmitted within the brain. There are studies
link |
00:54:42.080
that suggest that the precise timing of spikes matters. There are studies that suggest that
link |
00:54:49.600
there are computations that go on within the dendritic tree, within a neuron, that are quite
link |
00:54:55.840
rich and structured and that really don't equate to anything that we're doing in our artificial
link |
00:55:00.480
neural networks. Having said that, I feel like we're getting somewhere by sticking to this
link |
00:55:10.080
high level of abstraction. By the way, we're talking about the electrical signal. I remember
link |
00:55:16.480
reading some vague paper somewhere recently where the mechanical signal, like the vibrations or
link |
00:55:23.440
something of the neurons, also communicates information. I haven't seen that.
link |
00:55:30.800
Somebody was arguing, this is in a Nature paper, something like that,
link |
00:55:37.280
that the electrical signal is actually a side effect of the mechanical signal. I don't think
link |
00:55:44.240
they changed the story, but it's almost an interesting idea that there could be a deeper.
link |
00:55:50.160
It's always in physics with quantum mechanics, there's always a deeper story that could be
link |
00:55:56.080
underlying the whole thing. You think it's basically the rate of spiking that gets us,
link |
00:56:01.040
that's the lowest hanging fruit that can get us really far.
link |
00:56:04.960
This is a classical view. The only way in which this stance would be controversial is
link |
00:56:12.960
in the sense that there are members of the neuroscience community who are interested
link |
00:56:17.760
in alternatives, but this is really a very mainstream view. The way that neurons communicate
link |
00:56:22.880
is that neurotransmitters arrive, they wash up on a neuron. The neuron has receptors for
link |
00:56:34.560
those transmitters. The meeting of the transmitter with these receptors changes the voltage of the
link |
00:56:41.440
neuron. If enough voltage change occurs, then a spike occurs, one of these discrete events.
link |
00:56:49.200
It's that spike that is conducted down the axon and leads to neurotransmitter release.
link |
00:56:54.400
This is just like neuroscience 101. This is the way the brain is supposed to work.
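That neuroscience-101 story is what a leaky integrate-and-fire model caricatures. A minimal sketch, with all constants illustrative:

```python
# Leaky integrate-and-fire caricature of the story above: synaptic input
# nudges the membrane voltage toward threshold; crossing it emits a discrete
# spike and resets the voltage. The constants are illustrative, not measured.
dt, tau = 1.0, 20.0                              # ms time step, membrane time constant
v_rest, v_thresh, v_reset = -70.0, -55.0, -75.0  # mV

def simulate(input_drive, steps=1000):
    v, spike_times = v_rest, []
    for t in range(steps):
        v += (dt / tau) * (v_rest - v) + input_drive  # leak plus synaptic drive
        if v >= v_thresh:          # threshold crossing -> spike
            spike_times.append(t)
            v = v_reset            # reset after the spike
    return spike_times

# A rate code compresses all of this into one number, spikes per unit time,
# which is roughly what a unit's "activity" stands for in a deep network.
spikes = simulate(input_drive=0.8)
print(f"~{len(spikes)} spikes in 1 second of simulated time, i.e. ~{len(spikes)} Hz")
```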
link |
00:57:00.560
What we do when we build artificial neural networks of the kind that are now popular in the AI
link |
00:57:05.920
community is that we don't worry about those individual spikes, we just worry about the
link |
00:57:13.120
frequency at which those spikes are being generated. People talk about that as the
link |
00:57:19.760
activity of a neuron. The activity of units in a deep learning system is broadly analogous to
link |
00:57:30.800
the spike rate of a neuron. There are people who believe that there are other forms of
link |
00:57:37.920
communication in the brain. In fact, I've been involved in some research recently that suggests
link |
00:57:41.840
that the voltage fluctuations that occur in populations of neurons that are below the
link |
00:57:53.040
level of spike production may be important for communication, but I'm still pretty old school
link |
00:58:00.080
in the sense that I think that the things that we're building in AI research constitute reasonable
link |
00:58:06.880
models of how a brain would work. Let me ask just for fun a crazy question,
link |
00:58:13.440
because I can. Do you think it's possible we're completely wrong about the way this basic
link |
00:58:19.440
mechanism of neuronal communication works, that the information is stored in some very different
link |
00:58:24.960
kind of way in the brain? Heck yes. I wouldn't be a scientist if I didn't think there was any
link |
00:58:31.200
chance we were wrong. If you look at the history of deep learning research as it's been applied to
link |
00:58:40.480
neuroscience, of course, the vast majority of deep learning research these days isn't about
link |
00:58:44.720
neuroscience, but if you go back to the 1980s, there's an unbroken chain of research in which
link |
00:58:53.520
a particular strategy is taken, which is, hey, let's train a deep learning system. Let's train a
link |
00:59:01.840
multi layer neural network on this task that we trained our rat on or our monkey on or this
link |
00:59:11.040
human being on. Then let's look at what the units deep in the system are doing. Let's ask whether
link |
00:59:19.040
what they're doing resembles what we know about what neurons deep in the brain are doing. Over and
link |
00:59:26.160
over and over and over, that strategy works in the sense that the learning algorithms that we
link |
00:59:33.440
have access to, which typically center on back propagation, they give rise to patterns of activity,
link |
00:59:42.080
patterns of response, patterns of neuronal behavior in these artificial models
link |
00:59:48.720
that look hauntingly similar to what you see in the brain. Is that a coincidence?
link |
00:59:57.360
At a certain point, it starts looking like such coincidence is unlikely to not be deeply meaningful.
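One standard way such model-to-brain comparisons are operationalized in the literature, though it is not named here, is representational similarity analysis. A minimal sketch on synthetic data, where both the model layer and the neural recordings are random inventions purely for illustration:

```python
import numpy as np

# Sketch of representational similarity analysis on synthetic data: correlate
# the pattern of pairwise stimulus similarities in a model layer with the same
# pattern in (here, entirely fake) neural recordings.
rng = np.random.default_rng(0)
stimuli = rng.normal(size=(20, 10))                         # 20 stimuli, 10 features
model_layer = np.tanh(stimuli @ rng.normal(size=(10, 50)))  # invented model units
neural_data = np.tanh(stimuli @ rng.normal(size=(10, 30)))  # invented recordings

def similarity_profile(activations):
    c = np.corrcoef(activations)                # stimulus-by-stimulus similarity
    return c[np.triu_indices_from(c, k=1)]      # upper triangle as a vector

score = np.corrcoef(similarity_profile(model_layer),
                    similarity_profile(neural_data))[0, 1]
print(f"model-brain representational similarity: {score:.2f}")
```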
link |
01:00:04.720
The circumstantial evidence is overwhelming. But you're always open to a total flipping of the
link |
01:00:10.160
table. Of course. You have coauthored several recent papers that weave beautifully between
link |
01:00:17.200
the world of neuroscience and artificial intelligence. Maybe if we could just try to
link |
01:00:25.680
dance around and talk about some of them, maybe try to pick out interesting ideas that jump to
link |
01:00:30.320
your mind from memory. Maybe looking at, since we're talking about the prefrontal cortex, the 2018,
link |
01:00:36.800
I believe, paper called Prefrontal Cortex as a Meta-Reinforcement Learning System.
link |
01:00:41.760
Yeah. Is there a key idea that you can speak to from that paper?
link |
01:00:47.600
Yeah. The key idea is about meta learning. What is meta learning?
link |
01:00:54.800
Meta learning is, by definition, a situation in which you have a learning algorithm,
link |
01:01:04.640
and the learning algorithm operates in such a way that it gives rise to another learning algorithm.
link |
01:01:14.000
In the earliest applications of this idea, you had one learning algorithm sort of adjusting
link |
01:01:20.240
the parameters on another learning algorithm. But the case that we're interested in this paper is
link |
01:01:25.680
one where you start with just one learning algorithm, and then another learning algorithm kind of
link |
01:01:30.960
emerges out of thin air. I can say more about what I mean by that. I don't mean to be
link |
01:01:38.800
obscure. But that's the idea of meta learning. It relates to the old idea in psychology of
link |
01:01:46.080
learning to learn. Situations where you have experiences that make you better at learning
link |
01:01:56.640
something new. A familiar example would be learning a foreign language. The first time
link |
01:02:01.760
you learn a foreign language, it may be quite laborious and disorienting and novel. But if
link |
01:02:08.560
let's say you've learned two foreign languages, the third foreign language obviously is going
link |
01:02:14.240
to be much easier to pick up. Why? Because you've learned how to learn. You know how this goes.
link |
01:02:20.080
You know, okay, I'm going to have to learn how to conjugate. I'm going to have to...
link |
01:02:22.480
But that's a simple form of meta learning, in the sense that there's some slow learning
link |
01:02:29.200
mechanism that's helping you kind of update your fast learning mechanism. Does that bring
link |
01:02:35.600
it into focus? From our understanding in the psychology world and in neuroscience
link |
01:02:42.240
of how meta learning might work in the human brain, what lessons can we draw
link |
01:02:49.360
that we can bring into the artificial intelligence world? Well, yeah. The origin of
link |
01:02:55.120
that paper was in AI work that we were doing in my group. We were looking at what happens when you
link |
01:03:03.840
train a recurrent neural network using standard reinforcement learning algorithms. But you train
link |
01:03:10.880
that network not just in one task, but you train it in a bunch of interrelated tasks.
link |
01:03:14.720
And then you ask what happens when you give it yet another task in that sort of line of
link |
01:03:21.600
interrelated tasks. And what we started to realize is that a form of meta learning spontaneously
link |
01:03:31.280
happens in recurrent neural networks. And the simplest way to explain it is to say
link |
01:03:37.360
a recurrent neural network has a kind of memory in its activation patterns. It's recurrent by
link |
01:03:46.480
definition in the sense that you have units that connect to other units that connect to other units.
link |
01:03:50.880
So you have sort of loops of connectivity, which allows activity to stick around and be updated
link |
01:03:56.320
over time. In psychology, in neuroscience, we call this working memory. It's like
link |
01:04:00.320
actively holding something in mind. And so that memory gives the recurrent neural network
link |
01:04:11.600
a dynamics. The way that the activity pattern evolves over time is inherent to the connectivity
link |
01:04:19.840
of the recurrent neural network. So that's idea number one. Now, the dynamics of that network
link |
01:04:25.440
are shaped by the connectivity, by the synaptic weights. And those synaptic weights are being
link |
01:04:31.200
shaped by this reinforcement learning algorithm that you're training the network with.
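As a tiny illustration of that activation-based memory, under an assumed PyTorch setup rather than anything from the paper: a recurrent network's hidden state persists after its input goes silent.

```python
import torch
import torch.nn as nn

# An untrained recurrent net, just to show activity persisting in the loop of
# recurrent connectivity after the input is gone (training would shape these
# dynamics to retain task-relevant information).
rnn = nn.RNN(input_size=1, hidden_size=16, batch_first=True)
cue = torch.ones(1, 1, 1)            # a brief input at the first time step
silence = torch.zeros(1, 9, 1)       # then nothing for nine steps
with torch.no_grad():
    _, h_after_cue = rnn(cue)
    _, h_after_silence = rnn(silence, h_after_cue)
print(h_after_cue.norm().item(), h_after_silence.norm().item())
# The hidden state is still nonzero after the silent steps: activity sticks
# around and keeps evolving over time, the working-memory-like dynamics above.
```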
link |
01:04:37.600
So the punchline is, if you train a recurrent neural network with a reinforcement learning
link |
01:04:42.560
algorithm that's adjusting its weights, and you do that for long enough, the activation dynamics
link |
01:04:48.240
will become very interesting. So imagine I give you a task where you have to press one button
link |
01:04:55.360
or another, left button or right button. And there's some probability that I'm going to give you
link |
01:05:01.360
an M&M if you press the left button. And there's some probability I'll give you an M&M if you
link |
01:05:06.240
press the other button. And you have to figure out what those probabilities are just by trying
link |
01:05:10.000
things out. But as I said before, instead of just giving you one of these tasks, I give you a whole
link |
01:05:16.240
sequence. You know, I give you two buttons, and you figure out which one's best. And I go,
link |
01:05:20.080
good job. Here's a new box. Two new buttons. You have to figure out which one's best.
link |
01:05:23.920
Good job. Here's a new box. And every box has its own probabilities, and you have to figure them out.
link |
01:05:28.160
So if you train a recurrent neural network on that kind of sequence of tasks,
link |
01:05:33.600
what happens, it seemed almost magical to us when we first started kind of
link |
01:05:39.280
realizing what was going on. The slow learning algorithm that's adjusting the synaptic weights,
link |
01:05:46.800
those slow synaptic changes give rise to network dynamics,
link |
01:05:50.160
and the dynamics themselves turn into a learning algorithm. So in other words,
link |
01:05:57.280
you can tell this is happening by just freezing the synaptic weights, saying,
link |
01:06:00.960
okay, no more learning. You're done. Here's a new box. Figure out which button is best.
link |
01:06:07.440
And the recurrent neural network will do this just fine. It figures out which button is best.
link |
01:06:12.960
It kind of transitions from exploring the two buttons to just pressing the one that it likes
link |
01:06:18.000
best in a very rational way. How is that happening? It's happening because the activity dynamics of
link |
01:06:24.640
the network have been shaped by this slow learning process that's occurred over many,
link |
01:06:29.200
many boxes. And so what's happened is that this slow learning algorithm that's slowly adjusting
link |
01:06:36.080
the weights is changing the dynamics of the network, the activity dynamics, into its own
link |
01:06:41.920
learning algorithm. And as we were realizing that this is a thing, it just so happened that the
link |
01:06:52.480
group that was working on this included a bunch of neuroscientists. And it started kind of ringing
link |
01:06:58.240
a bell for us, which is to say that we thought, this sounds a lot like the distinction between
link |
01:07:04.560
synaptic learning and activity, synaptic memory and activity based memory in the brain.
link |
01:07:10.000
And it also reminded us of recurrent connectivity that's very characteristic of
link |
01:07:16.960
prefrontal function. So this is kind of why it's good to have people working on AI
link |
01:07:24.080
that know a little bit about neuroscience and vice versa, because we started thinking about
link |
01:07:29.440
whether we could apply this principle to neuroscience. And that's where the paper came from.
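To make the two-buttons story concrete, here is a minimal sketch of this kind of meta-reinforcement learning. It is an illustration under assumed choices (PyTorch, an LSTM, plain REINFORCE, made-up hyperparameters), not the model from the paper: train a recurrent net across many bandit boxes, then freeze the weights and watch it still adapt to a new box through its activation dynamics alone.

```python
import torch
import torch.nn as nn

class MetaRLAgent(nn.Module):
    """LSTM policy whose input is (previous action one-hot, previous reward)."""
    def __init__(self, hidden=48):
        super().__init__()
        self.core = nn.LSTMCell(3, hidden)
        self.policy = nn.Linear(hidden, 2)

    def forward(self, x, state):
        h, c = self.core(x, state)
        return self.policy(h), (h, c)

def run_episode(agent, probs, steps=100):
    """One 'box': a two-armed bandit with its own reward probabilities."""
    state, x = None, torch.zeros(1, 3)       # no previous action/reward at t=0
    logps, rewards = [], []
    for _ in range(steps):
        logits, state = agent(x, state)
        dist = torch.distributions.Categorical(logits=logits)
        a = dist.sample()
        r = float(torch.rand(1) < probs[a.item()])   # Bernoulli reward
        logps.append(dist.log_prob(a))
        rewards.append(r)
        x = torch.zeros(1, 3)
        x[0, a.item()] = 1.0                 # feed back what we just did...
        x[0, 2] = r                          # ...and what we just got
    return torch.cat(logps), torch.tensor(rewards)

agent = MetaRLAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

# Slow loop: REINFORCE slowly adjusts the synaptic weights across many boxes.
for episode in range(2000):
    probs = torch.rand(2)                    # each box has its own probabilities
    logps, rewards = run_episode(agent, probs)
    returns = rewards.flip(0).cumsum(0).flip(0)       # reward-to-go
    loss = -(logps * (returns - returns.mean())).sum()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Fast loop: freeze the weights ("no more learning") and hand it a new box.
# A trained agent still explores, then exploits the better button, using only
# its activation dynamics, the emergent learning algorithm described above.
with torch.no_grad():
    probs = torch.rand(2)
    _, rewards = run_episode(agent, probs)
    print(f"new box {probs.tolist()}, "
          f"frozen-weight reward rate {rewards.mean().item():.2f}")
```

The frozen-weight test at the end is exactly the freezing experiment described above: slow weight learning has installed a fast learner in the activations.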
link |
01:07:33.520
So from the kind of recurrence you can see in the prefrontal cortex,
link |
01:07:39.440
you start to realize that it's possible for something like learning to learn
link |
01:07:48.400
to emerge from this learning process, as long as you keep
link |
01:07:52.640
varying the environment sufficiently. Exactly. So the kind of metaphorical transition we made
link |
01:07:59.440
to neuroscience was to think, okay, well, we know that the prefrontal cortex is highly recurrent.
link |
01:08:04.880
We know that it's an important locus for working memory for activation based memory.
link |
01:08:11.280
So maybe the prefrontal cortex supports reinforcement learning. In other words,
link |
01:08:18.160
what is reinforcement learning? You take an action, you see how much reward you got,
link |
01:08:21.520
you update your policy of behavior. Maybe the prefrontal cortex is doing that sort of thing
link |
01:08:26.800
strictly in its activation patterns. It's keeping around a memory in its activity patterns of what
link |
01:08:32.720
you did, how much reward you got. And it's using that activity based memory as a basis for updating
link |
01:08:40.080
behavior. But then the question is, well, how did the prefrontal cortex get so smart? In other
link |
01:08:44.720
words, where did these activity dynamics come from? How did that program that's implemented in
link |
01:08:50.720
the recurrent dynamics of the prefrontal cortex arise? And one answer that became evident in this
link |
01:08:56.880
work was, well, maybe the mechanisms that operate on the synaptic level, which we believe are mediated
link |
01:09:04.880
by dopamine, are responsible for shaping those dynamics. So this may be a silly question, but
link |
01:09:12.800
because several temporal classes of learning are happening and the learning to
link |
01:09:22.080
learn emerges, can you keep building stacks of learning to learn to learn, learning to learn
link |
01:09:30.480
to learn to learn to learn? Because that keeps building abstractions, more powerful
link |
01:09:36.240
abilities to generalize, to learn complex rules. Or is that overstretching
link |
01:09:44.960
this kind of mechanism? Well, one of the people in AI who
link |
01:09:49.520
started thinking about meta learning from very early on, Juergen Schmidhuber,
link |
01:09:57.040
sort of cheekily suggested, I think it may have been in his PhD thesis, that we should think
link |
01:10:05.360
about meta, meta, meta, meta, meta, meta learning. That's really what's going to get us to true
link |
01:10:11.760
intelligence. Certainly, there's a poetic aspect to it. And it seems interesting and correct
link |
01:10:19.440
that those kinds of levels of abstraction would be powerful. But is that something you see in the
link |
01:10:22.880
brain? Is it useful to think of learning in this meta, meta, meta way, or is it just meta
link |
01:10:30.800
learning? Well, one thing that really fascinated me about this mechanism that we were starting to
link |
01:10:38.640
look at, and other groups started talking about very similar things at the same time. And then
link |
01:10:45.440
a kind of explosion of interest in meta learning happened in the AI community shortly after that.
link |
01:10:50.400
I don't know if we had anything to do with that. But I was gratified to see that a lot of people
link |
01:10:55.520
started talking about meta learning. One of the things that I liked about the kind of flavor
link |
01:11:01.280
of meta learning that we were studying was that it didn't require anything special. It was just
link |
01:11:06.640
if you took a system that had some form of memory, the function of which could be shaped by,
link |
01:11:13.440
pick your RL algorithm, then this would just happen. I mean,
link |
01:11:20.800
there are a lot of meta learning algorithms that have been proposed since then that are fascinating
link |
01:11:25.520
and effective in their domains of application. But they're engineered. There are things that
link |
01:11:32.160
somebody had to say, well, gee, if we wanted meta learning to happen, how would we do that?
link |
01:11:35.600
Here's an algorithm that would do it. But there's something about the kind of meta learning
link |
01:11:39.440
that we were studying that seemed to me special in the sense that it wasn't an algorithm. It was
link |
01:11:45.280
just something that automatically happened if you had a system that had memory and it was
link |
01:11:51.360
trained with a reinforcement learning algorithm. And in that sense, it can be as meta as it wants
link |
01:11:58.480
to be. There's no limit on how abstract the meta learning can get because it's not reliant on
link |
01:12:06.480
a human engineering a particular meta learning algorithm to get there. And, I don't know,
link |
01:12:14.640
I guess I hope that that's relevant in the brain. I think there's a kind of beauty
link |
01:12:19.120
in the emergent aspect of it. Yeah, it's something that's not
link |
01:12:25.520
engineered. Exactly. It's something that just happens in a sense. In a sense, you can't avoid
link |
01:12:32.560
this happening. If you have a system that has memory, and the function of that memory is
link |
01:12:39.200
shaped by reinforcement learning, and this system is trained in a series of interrelated tasks,
link |
01:12:45.920
this is going to happen. You can't stop it. As long as you have certain properties,
link |
01:12:50.080
maybe like a recurrent structure? You have to have memory. It actually doesn't have to be
link |
01:12:54.240
a recurrent neural network. A paper that I was honored to be involved with even earlier
link |
01:12:59.680
used a kind of slot based memory. Do you remember the title? It was memory augmented neural
link |
01:13:07.600
networks. I think the title was Meta-Learning with Memory-Augmented Neural Networks.
link |
01:13:14.560
And it was the same exact story. If you have a system with memory, here it was a different
link |
01:13:21.680
kind of memory. But the function of that memory is shaped by reinforcement learning. Here it was
link |
01:13:31.200
the reads and writes that occurred on this slot based memory. This will just happen.
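For flavor, a bare-bones sketch of what slot-based memory with content-based reads can look like. This is a simplified illustration; the actual memory-augmented network has a learned controller and more careful read/write rules.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

memory = np.zeros((4, 8))                 # 4 slots, each holding an 8-dim vector

def write(slot, vector):
    memory[slot] = vector                 # simplest possible write

def read(key):
    weights = softmax(memory @ key)       # soft, content-based addressing
    return weights @ memory               # blended read-out across slots

rng = np.random.default_rng(0)
item = rng.normal(size=8)
write(0, item)
recalled = read(item)                     # query memory with the item itself
print("recall error:", np.linalg.norm(recalled - item))   # near zero
```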
link |
01:13:39.920
This brings us back to something I was saying earlier about the importance of the environment.
link |
01:13:44.240
This will happen if the system is being trained in a setting where there's a sequence of tasks
link |
01:13:52.960
that all share some abstract structure. Sometimes we talk about task distributions.
link |
01:14:00.000
That's something that's very obviously true of the world that humans inhabit.
link |
01:14:06.000
But if you just think about what you do every day, you never do exactly the same thing that
link |
01:14:16.240
you did the day before. But everything that you do has a family resemblance. It shares
link |
01:14:21.360
structure with something that you did before. And so the real world is saturated with this
link |
01:14:30.240
kind of property. It's endless variety with endless redundancy. And that's the setting in
link |
01:14:38.400
which this kind of meta learning happens. And it does seem like we're just so good at finding,
link |
01:14:44.880
just like in this emergent phenomenon you described, we're really good at finding that
link |
01:14:49.200
redundancy, finding those similarities, the family resemblance. Some people call it sort of,
link |
01:14:54.640
what is it? Melanie Mitchell was talking about analogies. So we're able to connect concepts
link |
01:15:00.880
together in this kind of way, in this same kind of automated emergent way, which there's so many
link |
01:15:07.760
echoes here of psychology and neuroscience and obviously now with reinforcement learning with
link |
01:15:15.760
recurrent neural networks at the core. If we could talk a little bit about dopamine,
link |
01:15:20.720
you coauthored a really exciting recent paper, very recent in terms of release,
link |
01:15:27.840
on dopamine and temporal difference learning. Can you describe the key ideas of that paper?
link |
01:15:34.800
Sure. Yeah. I mean, one thing I want to pause to do is acknowledge my coauthors on actually
link |
01:15:40.240
both of the papers we're talking about. So this dopamine paper.
link |
01:15:42.640
I'll certainly post all their names.
link |
01:15:45.520
Okay, wonderful. Yeah. Because I'm sort of abashed to be the spokesperson for
link |
01:15:50.320
these papers when I had such amazing collaborators on both. So it's a comfort to me to know that
link |
01:15:56.960
you'll acknowledge them. Yeah, this is an incredible team there. But yeah.
link |
01:16:00.080
Oh, yeah. It's so much fun. And in the case of the dopamine paper,
link |
01:16:06.240
we also collaborated with Nao Uchida at Harvard, without whom the paper simply wouldn't have happened.
link |
01:16:11.600
But so you were asking for like a thumbnail sketch of?
link |
01:16:17.440
Yeah, thumbnail sketch or key ideas or, you know, things, the insights that, you know,
link |
01:16:22.400
continue on our kind of discussion here between neuroscience and AI.
link |
01:16:26.800
Yeah. I mean, this was another, a lot of the work that we've done so far is
link |
01:16:32.080
taking ideas that have bubbled up in AI and, you know, asking the question of whether the
link |
01:16:39.200
brain might be doing something related, which I think on the surface sounds like something that's
link |
01:16:45.840
really mainly of use to neuroscience. We see it also as a way of validating what we're doing
link |
01:16:54.240
on the AI side. If we can gain some evidence that the brain is using some technique that
link |
01:17:00.400
we've been trying out in our AI work, that gives us confidence that, you know, it may be a good idea
link |
01:17:07.600
that it'll, you know, scale to rich complex tasks, that it'll interface well with other
link |
01:17:14.000
mechanisms. So you see it as a two way road. Even if a particular paper is a little
link |
01:17:18.800
bit focused on one direction, from AI, from neural networks, to neuroscience, ultimately
link |
01:17:26.160
the discussion, the thinking, the productive long term aspect of it is the two way road
link |
01:17:32.080
nature of the whole thing. Yeah. I mean, we've talked about the notion of a virtuous circle between
link |
01:17:38.080
AI and neuroscience. And, you know, the way I see it, that's always been there since the two fields,
link |
01:17:47.360
you know, jointly existed. There have been some phases in that history when AI was sort of ahead.
link |
01:17:53.360
There are some phases when neuroscience was sort of ahead. I feel like, given the burst of
link |
01:17:59.920
innovation that's happened recently on the AI side, AI is kind of ahead in the sense that
link |
01:18:06.160
there are all of these ideas that we, you know, for which it's exciting to consider that there
link |
01:18:12.800
might be neural analogs. And neuroscience, you know, in a sense has been focusing on approaches
link |
01:18:22.240
to studying behavior that come from, you know, that are kind of derived from this earlier era
link |
01:18:27.360
of cognitive psychology. And, you know, so in some ways, fail to connect with some of the issues
link |
01:18:34.240
that we're grappling with in AI, like how do we deal with, you know, large, you know, complex
link |
01:18:39.200
environments. But, you know, I think it's inevitable that this circle will keep turning and there
link |
01:18:48.000
will be a moment in the not too distant future when neuroscience is pelting AI researchers
link |
01:18:54.480
with insights that may change the direction of our work. Just a quick human question.
link |
01:19:00.080
Do you have parts of your brain, this is very meta, that are able to both think about
link |
01:19:08.320
neuroscience and AI? You know, I don't often meet people like that. So let me ask a
link |
01:19:17.680
meta plasticity question. Do you think a human being can be both good at AI and neuroscience?
link |
01:19:23.360
On the team at DeepMind, what kind of human can occupy these two realms? And
link |
01:19:30.160
is that something you see everybody should be doing, can be doing, or is that a very special
link |
01:19:36.240
few can kind of jump? Just like we talked about art history, I would think it's a special person
link |
01:19:40.880
that can major in art history and also consider being a surgeon. Otherwise known as a dilettante?
link |
01:19:48.080
A dilettante, yeah. Easily distracted. No. I think it does take a special kind of person to be
link |
01:19:59.920
truly world class at both AI and neuroscience, and I am not on that list. I happen to be someone
link |
01:20:08.880
whose interest in neuroscience and psychology involved using the kinds of modeling techniques
link |
01:20:17.520
that are now very central in AI. And that sort of, I guess, bought me a ticket to be involved in
link |
01:20:25.440
all of the amazing things that are going on in AI research right now. I do know a few people who I
link |
01:20:31.360
would consider pretty expert on both fronts, and I won't embarrass them by naming them. But
link |
01:20:37.040
there are exceptional people out there who are like this. The one thing that I find
link |
01:20:43.040
is a barrier to being truly world class on both fronts is the complexity of the technology
link |
01:20:55.040
that's involved in both disciplines now. So the engineering expertise that it takes to do
link |
01:21:05.040
truly front line hands on AI research is really, really considerable.
link |
01:21:10.640
The learning curve of the tools, the specifics of whether it's programming
link |
01:21:15.280
or the kind of tools necessary to collect the data, to manage the data, to distribute
link |
01:21:19.360
the compute, all that kind of stuff. And on the neuroscience side, I guess,
link |
01:21:22.400
there'll be a whole different set of tools. Exactly, especially with the recent
link |
01:21:26.240
explosion in neuroscience methods. So having said all that, I think the best scenario
link |
01:21:37.600
for both neuroscience and AI is to have people interacting who live at every point on this
link |
01:21:48.320
spectrum from exclusively focused on neuroscience to exclusively focused on the engineering side
link |
01:21:55.520
of AI. But to have those people inhabiting a community where they're talking to people who
link |
01:22:03.680
live elsewhere on the spectrum. And I may be someone who's very close to the center in the
link |
01:22:09.920
sense that I have one foot in the neuroscience world and one foot in the AI world. And that
link |
01:22:15.600
central position I will admit prevents me, at least someone with my limited cognitive capacity,
link |
01:22:22.160
from having true technical expertise in either domain. But at the same time,
link |
01:22:28.720
I at least hope that it's worthwhile having people around who can kind of see the connections.
link |
01:22:35.520
Yeah, the community, the emergent intelligence of the community when it's nicely distributed,
link |
01:22:43.360
is useful. Okay, so. Exactly, yeah. And hopefully that, I mean,
link |
01:22:47.040
I've seen that work out well at DeepMind. There are people who, I mean, even if you just focus on
link |
01:22:53.600
the AI work that happens at DeepMind, it's been a good thing to have some people around doing
link |
01:22:59.680
that kind of work whose PhDs are in neuroscience or psychology. Every academic discipline has its
link |
01:23:08.480
kind of blind spots and kind of unfortunate obsessions and its metaphors and its reference
link |
01:23:16.880
points. And having some intellectual diversity is really healthy. People get each other unstuck,
link |
01:23:27.040
I think. I see it all the time at DeepMind. And I like to think that the people who bring
link |
01:23:33.440
some neuroscience background to the table are helping with that.
link |
01:23:37.280
So one of my, like, probably the deepest passions for me, what I would say,
link |
01:23:42.240
maybe we kind of spoke off mic a little bit about it, but what I think is a blind spot for
link |
01:23:49.520
at least robotics and AI folks is human robot interaction, human agent interaction. Maybe
link |
01:23:56.960
do you have thoughts about how we reduce the size of that blind spot? Do you also share
link |
01:24:04.960
the feeling that not enough folks are studying this aspect of interaction?
link |
01:24:10.160
Well, I'm actually pretty intensively interested in this issue now. And there are people in my
link |
01:24:16.720
group who've actually pivoted pretty hard over the last few years from doing more traditional
link |
01:24:23.200
cognitive psychology and cognitive neuroscience to doing experimental work on human agent
link |
01:24:29.040
interaction. And there are a couple of reasons that I'm pretty passionately interested in this.
link |
01:24:35.520
One is it's kind of the outcome of having thought for a few years now about what we're up to.
link |
01:24:48.000
What are we doing? What is this AI research for? So what does it mean to
link |
01:24:55.360
make the world a better place? I think I'm pretty sure that means making life better for humans.
link |
01:25:00.160
Yeah. And so how do you make life better for humans? That's a proposition that when you look at it
link |
01:25:09.920
carefully and honestly is rather horrendously complicated, especially when the AI systems that
link |
01:25:21.200
you're building are learning systems. You're not programming something that you then introduce
link |
01:25:30.080
to the world and it just works as programmed like Google Maps or something. We're building systems
link |
01:25:37.440
that learn from experience. So that typically leads to AI safety questions. How do we keep
link |
01:25:43.760
these things from getting out of control? How do we keep them from doing things that harm humans?
link |
01:25:48.880
And I mean, I hasten to say, I consider those hugely important issues. And there are large
link |
01:25:56.320
sectors of the research community at DeepMind and, of course, elsewhere who are dedicated to
link |
01:26:01.600
thinking hard all day every day about that. But I guess I would say there's a positive side to this too,
link |
01:26:09.440
which is to say, well, what would it mean to make human life better? And how can we imagine
link |
01:26:17.440
learning systems doing that? And in talking to my colleagues about that, we reached the
link |
01:26:24.640
initial conclusion that it's not sufficient to philosophize about that. You actually have to
link |
01:26:30.880
take into account how humans actually work and what humans want and the difficulties of knowing
link |
01:26:39.440
what humans want. And the difficulties that arise when humans want different things.
link |
01:26:45.760
And so human agent interaction has become a quite intensive focus of my group lately.
link |
01:26:54.880
If for no other reason than that, in order to really address that issue in an adequate way,
link |
01:27:02.880
you have to, I mean, psychology becomes part of the picture.
link |
01:27:05.840
Yeah. And so there's a few elements there. So if you focus on
link |
01:27:12.480
the robotics problem, let's say AGI without humans in the picture, you're missing
link |
01:27:20.080
fundamentally the final step. When you do want to help human civilization, you eventually have
link |
01:27:25.120
to interact with humans. And when you create a learning system, just as you said, that will
link |
01:27:31.920
eventually have to interact with humans, the interaction itself has to become part of the
link |
01:27:39.440
learning process. So you can't just watch, well, my sense is, it sounds like your sense is you
link |
01:27:45.200
can't just watch humans to learn about humans. You have to also be part of the human world.
link |
01:27:50.080
You have to interact with humans. Yeah, exactly. And I mean, then questions arise that start
link |
01:27:56.080
imperceptibly, but inevitably to slip beyond the realm of engineering. So questions like,
link |
01:28:03.520
if you have an agent that can do something that you can't do,
link |
01:28:10.720
under what conditions do you want that agent to do it? So if I have a robot that can play
link |
01:28:22.480
Beethoven sonatas better than any human in the sense that the sensitivity,
link |
01:28:30.000
the expression is just beyond what any human can do, do I want to listen to that? Do I want to go to
link |
01:28:36.800
a concert and hear a robot play? These aren't engineering questions. These are questions
link |
01:28:43.040
about human preference and human culture. Psychology, bordering on philosophy.
link |
01:28:48.880
And then you start asking, well, even if we knew the answer to that, is it our place as AI
link |
01:28:56.400
engineers to build that into these agents? Probably the agents should interact with humans
link |
01:29:03.440
beyond the population of AI engineers and figure out what those humans want.
link |
01:29:08.640
And then when you start, I referred to this a moment ago, but
link |
01:29:12.720
even that becomes complicated, because what if two humans want different things?
link |
01:29:19.200
And you have only one agent that's able to interact with them and try to satisfy their
link |
01:29:23.680
preferences, then you're into the realm of economics and social choice theory and even
link |
01:29:32.480
politics. So there's a sense in which if you follow what we're doing to its logical conclusion,
link |
01:29:39.920
then it goes beyond questions of engineering and technology and starts to shade imperceptibly
link |
01:29:47.680
into questions about what kind of society do you want? And actually, once that dawned on me,
link |
01:29:55.680
I actually felt, I don't know what the right word is, quite refreshed in my involvement
link |
01:30:02.160
in AI research. It's almost like building this kind of stuff is going to lead us back to asking
link |
01:30:08.320
really fundamental questions about what's the good life and who gets to decide.
link |
01:30:16.560
And bringing in viewpoints from multiple sub communities to help us shape the way that we
link |
01:30:25.840
live. It started making me feel like doing AI research in a fully responsible way
link |
01:30:34.720
could potentially lead to a kind of cultural renewal. It's a way to understand human
link |
01:30:47.600
beings at the individual and societal level, and maybe becomes a way to answer all the human
link |
01:30:53.440
questions of the meaning of life and all those kinds of things. Even if it doesn't give us a
link |
01:30:57.840
way of answering those questions, it may force us back to thinking about them. And it might
link |
01:31:05.760
restore a certain, I don't know, a certain depth, or even, dare I say, spirituality, to
link |
01:31:14.640
the world. Maybe that's too grandiose.
link |
01:31:19.280
Well, I'm with you. I think AI will be the philosophy of the 21st century, the thing that
link |
01:31:28.160
will open the door. I think a lot of AI researchers are afraid to open that door
link |
01:31:32.320
of exploring the beautiful richness of human agent interaction, human AI interaction.
link |
01:31:39.360
I'm really happy that somebody like you has opened that door.
link |
01:31:43.520
And one thing I often think about is the usual schema for thinking about
link |
01:31:52.720
human agent interaction is this kind of dystopian, oh, our robot overlords.
link |
01:32:00.400
And again, I hasten to say AI safety is hugely important. And I'm not saying we
link |
01:32:05.760
shouldn't be thinking about those risks. Totally on board for that. But
link |
01:32:10.000
having said that, what often follows for me is the thought that there's another
link |
01:32:21.680
kind of narrative that might be relevant, which is when we think of humans gaining more and more
link |
01:32:30.320
information about human life, the narrative there is usually that they gain more and more
link |
01:32:37.440
wisdom and they get closer to enlightenment and they become more benevolent. Think of the Buddha.
link |
01:32:44.720
That's a totally different narrative. And why isn't it the case that we imagine that the AI
link |
01:32:51.280
systems that we're creating, they're going to figure out more and more about the way the world
link |
01:32:55.200
works and the way that humans interact and they'll become beneficent. I'm not saying that will
link |
01:32:59.840
happen. I don't honestly expect that to happen without setting things up very
link |
01:33:07.920
carefully. But it's another way things could go, right? And I would even push back on that. I
link |
01:33:14.240
personally believe that most trajectories, natural human trajectories, will lead us towards
link |
01:33:24.080
progress. So for me, the sense that most trajectories in AI development will
link |
01:33:30.800
lead us into trouble means we over focus on the worst case. It's like in computer science,
link |
01:33:38.320
theoretical computer science has had this focus on worst case analysis. There's something
link |
01:33:42.640
appealing to our human mind, at some low level, about that. We don't want to be eaten by the tiger,
link |
01:33:49.280
I guess. So we want to do the worst case analysis. But the reality is that shouldn't stop us from
link |
01:33:55.760
actually building out all the other trajectories, which are potentially leading to all the positive
link |
01:34:01.360
worlds, all the enlightenment. This book Enlightenment Now by Steven Pinker, and so on.
link |
01:34:06.800
This is looking generally at human progress. And there's so many ways that human progress
link |
01:34:12.160
can happen with AI. And I think you have to do that research. You have to do that work. You
link |
01:34:17.360
have to do not just the AI safety work of worst case analysis, how do we prevent that,
link |
01:34:23.360
but the actual tools and the glue and the mechanisms of human AI interaction that would
link |
01:34:31.520
lead to all the positive ways things can go. It's a super exciting area, right?
link |
01:34:36.320
Yeah. We should be spending a lot of our time saying what can go wrong.
link |
01:34:41.600
I think it's harder to see that there's work to be done to bring into focus the question of what
link |
01:34:50.640
it would look like for things to go right. That's not obvious. And we wouldn't be doing this if we
link |
01:34:58.880
didn't have the sense there was huge potential. We're not doing this for no reason. We have a
link |
01:35:06.160
sense that AGI would be a major boon to humanity. But I think it's worth starting now, even when
link |
01:35:14.240
our technology is quite primitive, asking, well, exactly what would that mean? We can start now
link |
01:35:20.320
with applications that are already going to make the world a better place, like solving protein
link |
01:35:24.560
folding. I think DeepMind has gotten heavily into science applications lately, which I think
link |
01:35:30.560
is a wonderful, wonderful move for us to be making. But when we think about AGI, when we think
link |
01:35:37.760
about building fully intelligent agents that are going to be able to, in a sense, do whatever they
link |
01:35:43.440
want, we should start thinking about what do we want them to want? What kind of world do we want
link |
01:35:50.880
to live in? That's not an easy question. And I think we just need to start working on it.
link |
01:35:56.880
And even on the path to sort of AGI, it doesn't have to be AGI, but just intelligent agents that
link |
01:36:01.600
interact with us and help us enrich our own existence on social networks, for example, and
link |
01:36:07.120
recommender systems of various intelligence. There's so much interesting interaction that's
link |
01:36:10.640
yet to be understood and studied. And how do you create, I mean, Twitter is struggling with this
link |
01:36:18.880
very idea, how do you create AI systems that increase the quality and the health of a conversation?
link |
01:36:24.320
For sure. That's a beautiful, beautiful human psychology question.
link |
01:36:28.400
And how do you do that without deception being involved, without manipulation being involved,
link |
01:36:39.200
maximizing human autonomy? And how do you make these choices in a democratic way? How do you,
link |
01:36:47.040
how do we, again, I'm speaking for myself here, how do we face the fact that it's a small group
link |
01:36:57.920
of people who have the skill set to build these kinds of systems. But what it means to make the
link |
01:37:06.640
world a better place is something that we all have to be talking about. The world that we're
link |
01:37:14.880
trying to make a better place includes a huge variety of different kinds of people.
link |
01:37:20.160
Yeah. How do we cope with that? This is a problem that has been discussed in gory,
link |
01:37:26.560
extensive detail in social choice theory. One thing I'm really enjoying about the recent
link |
01:37:33.840
direction work has taken in some parts of my team is that, yeah, we're reading the AI literature,
link |
01:37:38.480
we're reading the neuroscience literature, but we've also started reading economics and,
link |
01:37:43.200
as I mentioned, social choice theory, even some political theory, because it turns out that
link |
01:37:48.800
it all becomes relevant. It all becomes relevant. But at the same time, we've been trying not to
link |
01:37:57.120
write philosophy papers, right? We've been trying not to write position papers. We're trying to
link |
01:38:02.320
figure out ways of doing actual empirical research that kind of take the first small steps to
link |
01:38:07.840
thinking about what it really means for humans with all of their complexity and contradiction and
link |
01:38:16.000
paradox to be brought into contact with these AI systems in a way that
link |
01:38:24.240
really makes the world a better place. And often reinforcement learning frameworks, machine learning, actually
link |
01:38:27.680
kind of allow you to do that. That's the exciting thing about AI: it allows
link |
01:38:33.920
you to reduce the unsolvable, philosophical problem into something more
link |
01:38:39.680
concrete that you can get a hold of. Yeah, and it allows you to kind of define the problem in some
link |
01:38:44.400
way that allows for growth in the system that's sort of, you know, you're not responsible for the
link |
01:38:52.560
details, right? You say, this is generally what I want you to do, and then learning takes care of
link |
01:38:57.920
the rest. Of course, the safety issues arise in that context. But I think also some of these
link |
01:39:05.120
positive issues arise in that context. What would it mean for an AI system to really come to understand
link |
01:39:10.880
what humans want? And with all of the subtleties of that, humans want help with certain things,
link |
01:39:24.000
but they don't want everything done for them, right? Part of the satisfaction that
link |
01:39:29.680
humans get from life is in accomplishing things. So if there were devices around that did everything
link |
01:39:34.400
for us, you know, I often think of the movie WALL-E, right? That's like dystopian in a totally different
link |
01:39:39.120
way. It's like, the machines are doing everything for us. That's not what we wanted. You know,
link |
01:39:44.000
anyway, I just, I find this, you know, this kind of opens up a whole landscape of research
link |
01:39:49.600
that feels affirmative and exciting. Yeah. To me, it's one of the most exciting and it's
link |
01:39:54.800
wide open. Yeah. We have to, because it's a cool paper, talk about dopamine.
link |
01:39:59.280
Oh, yeah. Okay. So I was going to give you a quick
link |
01:40:04.480
summary. Yeah, a quick summary. What's the title of the paper?
link |
01:40:10.560
I think we called it A Distributional Code for Value in Dopamine
link |
01:40:16.560
Based Reinforcement Learning. Yes. So that's another project that grew out of
link |
01:40:24.000
pure AI research. A number of people at DeepMind and a few other places had started working
link |
01:40:32.160
on a new version of reinforcement learning, which was defined by taking something in traditional
link |
01:40:40.160
reinforcement learning and just tweaking it. So the thing that they took from traditional
link |
01:40:44.240
reinforcement learning was a value signal. So at the center of reinforcement learning,
link |
01:40:50.080
at least most algorithms, is some representation of how well things are going: your expected
link |
01:40:56.320
cumulative future reward. And that's usually represented as a single number. So if you imagine
link |
01:41:03.040
a gambler in a casino and the gambler's thinking, well, I have this probability of winning such
link |
01:41:09.200
and such an amount of money and I have this probability of losing such and such an amount
link |
01:41:12.080
of money, that situation would be represented as a single number, which is like the expected,
link |
01:41:17.680
the weighted average of all those outcomes. And this new form of reinforcement learning
link |
01:41:24.000
said, well, what if we generalize that to a distributional representation? So now we think
link |
01:41:29.120
of the gambler as literally thinking, well, there's this probability that I'll win this
link |
01:41:33.440
amount of money and there's this probability that I'll lose that amount of money. And we
link |
01:41:36.400
don't reduce that to a single number. And it had been observed through experiments, through just
link |
01:41:43.040
trying this out, that that kind of distributional representation really accelerated reinforcement
link |
01:41:50.960
learning and led to better policies. What's your intuition about, so we're talking about rewards,
link |
01:42:01.040
so what's your intuition for why that is? Well, it's kind of
link |
01:42:01.040
a surprising historical note, at least surprised me when I learned it, that
link |
01:42:07.200
this had been tried out in a kind of heuristic way. People thought, well, gee, what would happen
link |
01:42:11.520
if we tried it? And then, empirically, it had this striking effect. And it was only then
link |
01:42:17.920
that people started thinking, well, gee, why? Why is this working? And that's led to a
link |
01:42:24.480
series of studies just trying to figure out why it works, which is ongoing. But one thing that's
link |
01:42:30.240
already clear from that research is that one reason that it helps is that it drives
link |
01:42:36.480
richer representation learning. So if you imagine two situations that have the same
link |
01:42:44.160
expected value, the same kind of weighted average value, standard deep reinforcement learning
link |
01:42:50.640
algorithms are going to take those two situations and kind of in terms of the way they're represented
link |
01:42:55.520
internally, they're going to squeeze them together. Because the thing that you're trying to represent,
link |
01:43:02.400
which is their expected value, is the same. So all the way through the system,
link |
01:43:06.080
things are going to be mushed together. But what if those two situations actually have
link |
01:43:11.440
different value distributions? They have the same average value, but they have different
link |
01:43:17.600
distributions of value. In that situation, distributional learning will maintain the
link |
01:43:23.600
distinction between these two things. So to make a long story short, distributional learning
link |
01:43:27.920
can keep things separate in the internal representation that might otherwise be conflated
link |
01:43:34.160
or squished together. And maintaining those distinctions can be useful when the system is
link |
01:43:39.600
now faced with some other task where the distinction is important.
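A small numerical sketch of that point, with invented numbers: two situations share the same expected value, so a scalar estimate conflates them, while a distribution over a fixed set of return atoms (in the style of categorical distributional RL) keeps them distinct.

```python
import numpy as np

atoms = np.array([-10.0, 0.0, 10.0])     # possible returns ("atoms" of value)
safe = np.array([0.0, 1.0, 0.0])         # always get 0
risky = np.array([0.5, 0.0, 0.5])        # -10 or +10, fifty-fifty

for name, dist in [("safe", safe), ("risky", risky)]:
    print(f"{name}: expected value {np.dot(atoms, dist):+.1f}, distribution {dist}")
# Both have expected value +0.0. A scalar critic would squeeze these two
# situations together internally; a distributional critic keeps them apart,
# which is one reason it drives richer representation learning.
```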
link |
01:43:43.120
If we look at optimistic and pessimistic dopamine neurons, so first of all,
link |
01:43:47.040
what is dopamine? Why is this at all useful to think about in the artificial intelligence sense?
link |
01:44:00.640
But what do we know about dopamine in the human brain? What is it? Why is it useful?
link |
01:44:06.320
Why is it interesting? What does it have to do with the prefrontal cortex and learning in general?
link |
01:44:10.160
Yeah. So, well, this is also a case where there's a huge amount of detail and debate.
link |
01:44:19.520
But one currently prevailing idea is that the function of this neurotransmitter dopamine
link |
01:44:29.040
resembles a particular component of standard reinforcement learning algorithms, which is
link |
01:44:37.600
called the reward prediction error. So I was talking a moment ago about these value representations.
link |
01:44:44.080
How do you learn them? How do you update them based on experience? Well, if you made some
link |
01:44:49.600
prediction about a future reward, and then you get more reward than you were expecting,
link |
01:44:54.320
then probably retrospectively, you want to go back and increase the value representation
link |
01:45:00.560
that you attached to that earlier situation. If you got less reward than you were expecting,
link |
01:45:06.080
you should probably decrement that estimate. And that's the process of temporal difference learning.
link |
01:45:10.240
Exactly. This is the central mechanism of temporal difference learning, which is
link |
01:45:14.160
sort of the backbone of our armamentarium in RL. And this connection between the reward
link |
01:45:24.240
prediction error and dopamine was made in the 1990s. And there's been a huge amount of research
link |
01:45:33.360
that seems to back it up. Dopamine may be doing other things, but this is clearly,
link |
01:45:38.880
at least roughly, one of the things that it's doing. But the usual idea was that dopamine was
link |
01:45:45.440
representing these reward prediction errors, again, in this single number way, representing your
link |
01:45:53.840
surprise with a single number. And in distributional reinforcement learning, this kind of new
link |
01:46:00.160
elaboration of the standard approach, it's not only the value function that's no longer represented as a
link |
01:46:07.120
single number, it's also the reward prediction error. And so what happened was that Will Dabney,
link |
01:46:16.000
one of my collaborators, who was one of the first people to work on distributional
link |
01:46:20.320
temporal difference learning, talked to a guy in my group, Zeb Kurth-Nelson,
link |
01:46:25.600
who's a computational neuroscientist, and said, gee, is it possible that dopamine might be doing
link |
01:46:31.280
something like this distributional coding thing? And they started looking at what was in the
link |
01:46:35.200
literature, and then they brought me in, and we started talking to Nao Uchida. And we came up with
link |
01:46:40.080
some specific predictions about if the brain is using this kind of distributional coding,
link |
01:46:45.120
then in the tasks that Nao has studied, you should see this, this, this, and this. And that's where
link |
01:46:49.840
the paper came from. We enumerated a set of predictions, all of which ended up being fairly
link |
01:46:55.120
clearly confirmed, and all of which leads to at least some initial indication that the brain
link |
01:47:01.120
might be doing something like this distributional coding, that dopamine might be representing
link |
01:47:05.840
surprise signals in a way that is not just collapsing everything to a single number,
link |
01:47:10.560
but instead is kind of respecting the variety of future outcomes, if that makes sense.
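A toy simulation in the spirit of that idea, assuming a simplified mechanism for illustration rather than reproducing the paper's analysis: give each simulated dopamine-like predictor a different asymmetry between positive and negative prediction errors, and the population spreads out to cover the whole reward distribution instead of collapsing to its mean.

```python
import numpy as np

rng = np.random.default_rng(0)
rewards = np.array([-1.0, 3.0])              # one situation, two equally likely outcomes
taus = np.array([0.1, 0.3, 0.5, 0.7, 0.9])   # per-cell optimism/pessimism
V = np.zeros_like(taus)                      # each cell's value prediction
lr = 0.02

for _ in range(20000):
    r = rng.choice(rewards)
    delta = r - V                            # reward prediction error, per cell
    # Asymmetric update: optimistic cells (high tau) weight positive errors
    # more, pessimistic cells (low tau) weight negative errors more.
    V += lr * np.where(delta > 0, taus, 1 - taus) * delta

print(dict(zip(np.round(taus, 1), np.round(V, 2))))
# Pessimistic cells settle toward the worse outcome, optimistic cells toward
# the better one; read together, the population encodes the distribution, not
# just the mean (which only the tau = 0.5 cell tracks).
```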
link |
01:47:16.480
So yeah, that's suggesting possibly that dopamine has a really interesting
link |
01:47:21.200
representation scheme in the human brain for its reward signal. Exactly. That's fascinating.
link |
01:47:29.520
That's another beautiful example of AI revealing something nice about neuroscience,
link |
01:47:34.400
potentially suggesting possibilities. Well, you never know. So the minute you publish a paper like
link |
01:47:38.960
that, the next thing you think is, I hope that replicates. I hope we see that same thing in
link |
01:47:44.160
other data sets. But of course, several labs now are doing the follow up experiments. So we'll
link |
01:47:49.280
know soon. But it has been a lot of fun for us to take these ideas from AI and bring them into
link |
01:47:55.920
neuroscience and see how far we can get. So we talked about it a little bit, but where do you see
link |
01:48:02.080
the field of neuroscience and artificial intelligence heading broadly? What are the
link |
01:48:09.280
possible exciting areas that you can see breakthroughs in the next, let's get crazy,
link |
01:48:16.240
not just three or five years, but next 10, 20, 30 years that would make you excited and perhaps
link |
01:48:26.800
you'd be part of. On the neuroscience side, there's a great deal of interest now in what's
link |
01:48:35.040
going on in AI. At the same time, I feel like neuroscience, especially the part of neuroscience
link |
01:48:50.000
that's focused on circuits and systems, really mechanism focused, there's been this explosion
link |
01:48:59.360
in new technology. Up until recently, the experiments that have exploited this technology
link |
01:49:10.720
have not involved a lot of interesting behavior. And this is for a variety of reasons,
link |
01:49:16.320
one of which is in order to employ some of these technologies, if you're studying a mouse,
link |
01:49:22.160
you have to head fix the mouse. In other words, you have to immobilize the mouse.
link |
01:49:25.920
And so it's been tricky to come up with ways of eliciting interesting behavior from a mouse
link |
01:49:31.360
that's restrained in this way, but people have begun to create very interesting solutions to
link |
01:49:39.200
this, like virtual reality environments where the animal can move a trackball. And as people have
link |
01:49:47.840
begun to explore what you can do with these technologies, I feel like more and more people
link |
01:49:51.280
are asking, well, let's try to bring behavior into the picture. Let's try to reintroduce behavior,
link |
01:49:58.160
which was supposed to be what this whole thing was about. And I'm hoping that those two trends,
link |
01:50:06.640
the growing interest in behavior and the widespread interest in what's going on in AI,
link |
01:50:14.000
will come together to kind of open a new chapter in neuroscience research, where there's a kind of
link |
01:50:22.560
a rebirth of interest in the structure of behavior and its underlying substrates. But that research
link |
01:50:29.600
is being informed by computational mechanisms that we're coming to understand in AI. If we
link |
01:50:37.920
can do that, then we might be taking a step closer to this utopian future that we were talking about
link |
01:50:43.520
earlier, where there's really no distinction between psychology and neuroscience. Neuroscience
link |
01:50:48.640
is about studying the mechanisms that underlie whatever it is the brain is for, and what is
link |
01:50:56.400
the brain for? It's for behavior. I feel like we could maybe take a step toward that now,
link |
01:51:02.960
if people are motivated in the right way. You also asked about AI. So that was the neuroscience
link |
01:51:09.920
question. You said neuroscience. That's right. And especially places like DeepMind are interested
link |
01:51:14.160
in both branches. What about the engineering of intelligence systems?
link |
01:51:20.640
I think one of the key challenges that a lot of people are seeing now in AI is to build systems
link |
01:51:30.640
that have the kind of flexibility that humans have in two senses. One is that humans
link |
01:51:39.840
can be good at many things. They're not just expert at one thing. And they're also flexible
link |
01:51:45.040
in the sense that they can switch between things very easily, and they can pick up new things
link |
01:51:51.440
very quickly, because they very ably see what a new task has in common with other things that
link |
01:51:58.240
they've done. And that's something that our AI systems do not have. There are some people who
link |
01:52:10.560
like to argue that deep learning and deep RL are simply wrong for getting that kind of flexibility.
link |
01:52:16.960
I don't share that belief, but the simple fact of the matter is we're not building things yet
link |
01:52:23.600
that do have that kind of flexibility. And I think the attention of a large part of the AI
link |
01:52:29.040
community is starting to pivot to that question. How do we get that? That's going to lead to
link |
01:52:35.440
a focus on abstraction. It's going to lead to a focus on what in psychology we call cognitive
link |
01:52:42.640
control, which is the ability to switch between tasks, the ability to quickly put together a
link |
01:52:48.000
program of behavior that you've never executed before, but you know makes sense for a particular
link |
01:52:53.200
set of demands. It's very closely related to what the prefrontal cortex does on the neuroscience
link |
01:52:59.760
side. So I think it's going to be an interesting new chapter. So that's the reasoning side and
link |
01:53:06.400
cognition side, but let me ask the over romanticized question. Do you think we'll ever engineer an
link |
01:53:12.320
AGI system that we humans would be able to love and that would love us back? So have that level
link |
01:53:20.880
and depth of connection? I love that question. And it relates closely to things that I've been
link |
01:53:32.320
thinking about a lot lately in the context of this human AI research. There's social psychology
link |
01:53:39.040
research in particular by Susan Fiske at Princeton in the department where I used to work,
link |
01:53:46.880
where she dissects human attitudes toward other humans into a two dimensional scheme.
link |
01:54:01.520
One dimension is about ability. How able, how capable is this other person?
link |
01:54:10.000
But the other dimension is warmth. So you can imagine another person who's very skilled and
link |
01:54:16.480
capable, but is very cold. And you wouldn't really rate them highly, you might have some reservations
link |
01:54:24.720
about that other person. But there's also a kind of reservation that we might have about
link |
01:54:31.280
another person who elicits in us or displays a lot of human warmth, but is not good at getting
link |
01:54:38.160
things done. We reserve our greatest esteem really for people who
link |
01:54:45.680
are both highly capable and also quite warm. That's like the best of the best. This isn't a
link |
01:54:56.160
normative statement I'm making. This is just an empirical statement. These are the two dimensions
link |
01:55:02.240
along which people seem to size each other up. And in AI research,
link |
01:55:07.920
we really focus on this capability thing. We want our agents to be able to do stuff. This thing
link |
01:55:12.960
can play Go at a superhuman level. That's awesome. But that's only one dimension.
link |
01:55:18.320
What about the other dimension? What would it mean for an AI system to be warm? I don't know,
link |
01:55:25.440
maybe there are easy solutions here, like putting a face on our AI systems that's cute and has big
link |
01:55:30.720
ears. That's probably part of it. But I think it also has to do with a pattern of behavior.
link |
01:55:35.520
What would it mean for an AI system to display caring, compassionate
link |
01:55:42.240
behavior in a way that actually made us feel like it was for real, that we didn't feel like it was
link |
01:55:48.720
simulated, that we weren't being duped? People talk about the
link |
01:55:54.960
Turing test or some descendant of it. To me, that's the ultimate Turing test.
link |
01:56:00.560
Is there an AI system that can not only convince us that it knows how to
link |
01:56:06.400
reason and knows how to interpret language, but that we're comfortable saying, yeah, that AI
link |
01:56:13.360
system is a good guy. On the warmth scale, whatever warmth is,
link |
01:56:18.560
we kind of intuitively understand it, but we don't understand it
link |
01:56:26.800
explicitly enough yet to be able to engineer it. Exactly. And that's an open scientific
link |
01:56:33.200
question. You've alluded to it several times in the context of human AI interaction. That's a question
link |
01:56:37.840
that should be studied, and probably one of the most important questions as we move forward with AI.
link |
01:56:43.680
Humans are so good at it. Yeah. It's not just that we're born warm.
link |
01:56:49.280
I suppose some people are warmer than others, given whatever
link |
01:56:54.080
genes they manage to inherit. But there are also learned skills
link |
01:57:00.160
involved, right? I mean, there are ways of communicating to other people that you care,
link |
01:57:06.320
that they matter to you, that you're enjoying interacting with them, right? And we learn these
link |
01:57:12.320
skills from one another. And it's not out of the question that we could build engineered systems
link |
01:57:19.200
that do the same. I think it's hopeless, as you say, to somehow hand design these sorts
link |
01:57:24.240
of behaviors. But it's not out of the question that we could build systems in which we
link |
01:57:29.360
instill something that sets them out in the right direction, so that they
link |
01:57:36.480
end up learning what it is to interact with humans in a way that's gratifying to humans.
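One hedged sketch of what "instilling something that sets them out in the right direction" might look like in code: augment an agent's task reward with a learned model of human satisfaction, so that warmth-like behavior is learned from interaction rather than hand designed. Everything below, the names, the feedback model, the linear combination, is a hypothetical illustration, not a method discussed in the conversation.

```python
# A hedged sketch of "instilling something that sets them out in the right
# direction": add a learned model of human satisfaction to the task reward,
# so warmth-like behavior is learned from interaction rather than hand
# designed. All names and the linear combination here are hypothetical.
import torch
import torch.nn as nn

class HumanFeedbackModel(nn.Module):
    """Predicts how gratifying an interaction felt to the human."""
    def __init__(self, interaction_dim: int):
        super().__init__()
        self.score = nn.Sequential(
            nn.Linear(interaction_dim, 64),
            nn.ReLU(),
            nn.Linear(64, 1),
        )

    def forward(self, interaction: torch.Tensor) -> torch.Tensor:
        return self.score(interaction).squeeze(-1)

def shaped_reward(task_reward: torch.Tensor,
                  interaction: torch.Tensor,
                  feedback_model: HumanFeedbackModel,
                  weight: float = 0.5) -> torch.Tensor:
    # Competence (task reward) and predicted human satisfaction both count.
    with torch.no_grad():
        warmth_signal = feedback_model(interaction)
    return task_reward + weight * warmth_signal

feedback_model = HumanFeedbackModel(interaction_dim=32)
r = shaped_reward(torch.tensor(1.0), torch.randn(1, 32), feedback_model)
```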
link |
01:57:42.720
I mean, honestly, if that's not where we're headed, I want out.
link |
01:57:52.080
I think it's exciting as a scientific problem, just as you described. I honestly don't see a
link |
01:57:58.320
better way to end it than talking about warmth and love. And Matt, I don't think I've ever had such a
link |
01:58:04.800
wonderful conversation where my questions were so bad, and your answers were so beautiful.
link |
01:58:09.200
So I deeply appreciate it. I really enjoyed it. It's been very fun. As you can probably
link |
01:58:14.000
tell, there's something I like about thinking outside the box,
link |
01:58:20.000
so it's been good having fun with that. Awesome. Thanks so much for doing it.
link |
01:58:25.440
Thanks for listening to this conversation with Matt Botvinick. And thank you to our sponsors,
link |
01:58:30.320
The Jordan Harbinger Show and Magic Spoon Low Carb Keto Cereal. Please consider supporting
link |
01:58:37.120
this podcast by going to jordanharbinger.com slash lex and also going to magicspoon.com
link |
01:58:43.600
slash lex and using code lex at checkout. Click the links, buy all the stuff. It's the best way
link |
01:58:51.360
to support this podcast and the journey I'm on in my research and the startup. If you enjoy this
link |
01:58:57.680
thing, subscribe on YouTube, review it with five stars on Apple Podcasts, support on Patreon,
link |
01:59:03.680
follow on Spotify, or connect with me on Twitter at Lex Fridman. Again, spelled miraculously
link |
01:59:10.960
without the E just F R I D M A N. And now let me leave you with some words from neurologist
link |
01:59:18.160
V. S. Ramachandran: How can a three pound mass of jelly that you can hold in your palm
link |
01:59:24.960
imagine angels, contemplate the meaning of infinity, and even question its own place in the cosmos?
link |
01:59:30.960
Especially awe inspiring is the fact that any single brain, including yours, is made up of atoms
link |
01:59:38.320
that were forged in the hearts of countless far flung stars billions of years ago. These particles
link |
01:59:46.080
drifted for eons and light years until gravity and chance brought them together here, now. These
link |
01:59:53.280
atoms now form a conglomerate, your brain, that can not only ponder the very stars that gave
link |
01:59:59.840
it birth, but can also think about its own ability to think and wonder about its own ability to wonder.
link |
02:00:07.600
With the arrival of humans, it has been said the universe has suddenly become conscious of itself.
link |
02:00:13.680
This truly is the greatest mystery of all. Thank you for listening and hope to see you next time.