
Matt Botvinick: Neuroscience, Psychology, and AI at DeepMind | Lex Fridman Podcast #106



link |
00:00:00.000
The following is a conversation with Matt Botvinick,
link |
00:00:03.440
Director of Neuroscience Research at DeepMind.
link |
00:00:06.680
He's a brilliant, cross-disciplinary mind,
link |
00:00:09.360
navigating effortlessly between cognitive psychology,
link |
00:00:12.480
computational neuroscience, and artificial intelligence.
link |
00:00:16.760
Quick summary of the ads.
link |
00:00:18.320
Two sponsors, The Jordan Harbinger Show
link |
00:00:21.060
and Magic Spoon Cereal.
link |
00:00:23.880
Please consider supporting the podcast
link |
00:00:25.600
by going to jordanharbinger.com slash lex
link |
00:00:29.320
and also going to magicspoon.com slash lex
link |
00:00:33.800
and using code lex at checkout
link |
00:00:36.120
after you buy all of their cereal.
link |
00:00:39.080
Click the links, buy the stuff.
link |
00:00:40.920
It's the best way to support this podcast
link |
00:00:43.040
and the journey I'm on.
link |
00:00:44.740
If you enjoy this podcast, subscribe on YouTube,
link |
00:00:47.680
review it with five stars on Apple Podcast,
link |
00:00:49.920
follow on Spotify, support on Patreon,
link |
00:00:52.380
or connect with me on Twitter at lexfridman,
link |
00:00:55.600
spelled surprisingly without the E,
link |
00:00:58.920
just F R I D M A N.
link |
00:01:02.080
As usual, I'll do a few minutes of ads now
link |
00:01:03.920
and never any ads in the middle
link |
00:01:05.160
that can break the flow of the conversation.
link |
00:01:07.620
This episode is supported by The Jordan Harbinger Show.
link |
00:01:11.740
Go to jordanharbinger.com slash lex.
link |
00:01:15.200
It's how he knows I sent you.
link |
00:01:16.900
On that page, subscribe to his podcast
link |
00:01:19.400
on Apple Podcast, Spotify, and you know where to look.
link |
00:01:24.320
I've been binging on his podcast.
link |
00:01:26.120
Jordan is a great interviewer
link |
00:01:28.400
and even a better human being.
link |
00:01:30.280
I recently listened to his conversation with Jack Barsky,
link |
00:01:32.760
former sleeper agent for the KGB in the 80s
link |
00:01:36.120
and author of Deep Undercover,
link |
00:01:38.880
which is a memoir that paints yet another
link |
00:01:40.740
interesting perspective on the Cold War era.
link |
00:01:43.440
I've been reading a lot about the Stalin
link |
00:01:46.720
and then Gorbachev and Putin eras of Russia,
link |
00:01:49.280
but this conversation made me realize
link |
00:01:50.800
that I need to do a deep dive into the Cold War era
link |
00:01:53.680
to get a complete picture of Russia's recent history.
link |
00:01:57.120
Again, go to jordanharbinger.com slash lex.
link |
00:02:01.160
Subscribe to his podcast.
link |
00:02:02.880
It's how he knows I sent you.
link |
00:02:04.440
It's awesome, you won't regret it.
link |
00:02:06.740
This episode is also supported by Magic Spoon,
link |
00:02:10.320
low carb, keto friendly, super amazingly delicious cereal.
link |
00:02:15.700
I've been on a keto or very low carb diet
link |
00:02:18.300
for a long time now.
link |
00:02:19.480
It helps with my mental performance.
link |
00:02:21.300
It helps with my physical performance,
link |
00:02:22.840
even during this crazy push up, pull up challenge I'm doing,
link |
00:02:26.520
including the running, it just feels great.
link |
00:02:29.680
I used to love cereal.
link |
00:02:31.320
Obviously, I can't have it now
link |
00:02:33.840
because most cereals have crazy amounts of sugar,
link |
00:02:36.820
which is terrible for you, so I quit it years ago.
link |
00:02:40.140
But Magic Spoon, amazingly, somehow,
link |
00:02:44.260
is a totally different thing.
link |
00:02:45.920
Zero sugar, 11 grams of protein,
link |
00:02:48.340
and only three net grams of carbs.
link |
00:02:50.920
It tastes delicious.
link |
00:02:53.140
It has a lot of flavors, two new ones,
link |
00:02:55.200
including peanut butter.
link |
00:02:56.760
But if you know what's good for you,
link |
00:02:58.520
you'll go with cocoa, my favorite flavor,
link |
00:03:01.560
and the flavor of champions.
link |
00:03:04.200
Click the magicspoon.com slash lex link in the description
link |
00:03:07.880
and use code lex at checkout for free shipping
link |
00:03:11.040
and to let them know I sent you.
link |
00:03:13.100
They have agreed to sponsor this podcast for a long time.
link |
00:03:16.480
They're an amazing sponsor and an even better cereal.
link |
00:03:19.920
I highly recommend it.
link |
00:03:21.760
It's delicious, it's good for you, you won't regret it.
link |
00:03:24.720
And now, here's my conversation with Matt Botvinick.
link |
00:03:29.600
How much of the human brain do you think we understand?
link |
00:03:33.400
I think we're at a weird moment
link |
00:03:36.920
in the history of neuroscience in the sense that
link |
00:03:45.200
I feel like we understand a lot about the brain
link |
00:03:47.320
at a very high level, but a very coarse level.
link |
00:03:52.600
When you say high level, what are you thinking?
link |
00:03:54.280
Are you thinking functional?
link |
00:03:55.440
Are you thinking structurally?
link |
00:03:56.960
So in other words, what is the brain for?
link |
00:04:00.960
What kinds of computation does the brain do?
link |
00:04:05.000
What kinds of behaviors would we have to explain
link |
00:04:12.320
if we were gonna look down at the mechanistic level?
link |
00:04:16.560
And at that level, I feel like we understand
link |
00:04:18.440
much, much more about the brain
link |
00:04:19.680
than we did when I was in high school.
link |
00:04:22.060
But it's almost like we're seeing it through a fog.
link |
00:04:25.240
It's only at a very coarse level.
link |
00:04:26.600
We don't really understand what the neuronal mechanisms are
link |
00:04:30.200
that underlie these computations.
link |
00:04:32.500
We've gotten better at saying,
link |
00:04:34.600
what are the functions that the brain is computing
link |
00:04:36.720
that we would have to understand
link |
00:04:38.400
if we were gonna get down to the neuronal level?
link |
00:04:40.200
And at the other end of the spectrum,
link |
00:04:45.500
in the last few years, incredible progress has been made
link |
00:04:49.600
in terms of technologies that allow us to see,
link |
00:04:54.880
actually literally see, in some cases,
link |
00:04:57.220
what's going on at the single unit level,
link |
00:05:01.040
even the dendritic level.
link |
00:05:02.640
And then there's this yawning gap in between.
link |
00:05:05.800
Well, that's interesting.
link |
00:05:06.640
So at the high level,
link |
00:05:07.460
so that's almost a cognitive science level.
link |
00:05:09.600
And then at the neuronal level,
link |
00:05:11.900
that's neurobiology and neuroscience,
link |
00:05:14.600
just studying single neurons,
link |
00:05:16.040
the synaptic connections and all the dopamine,
link |
00:05:19.800
all the kind of neurotransmitters.
link |
00:05:21.560
One blanket statement I should probably make
link |
00:05:23.360
is that as I've gotten older,
link |
00:05:27.760
I have become more and more reluctant
link |
00:05:30.200
to make a distinction between psychology and neuroscience.
link |
00:05:33.400
To me, the point of neuroscience
link |
00:05:37.240
is to study what the brain is for.
link |
00:05:41.780
If you're a nephrologist
link |
00:05:44.360
and you wanna learn about the kidney,
link |
00:05:46.560
you start by saying, what is this thing for?
link |
00:05:50.000
Well, it seems to be for taking blood on one side
link |
00:05:55.800
that has metabolites in it that shouldn't be there,
link |
00:06:01.120
sucking them out of the blood
link |
00:06:03.320
while leaving the good stuff behind,
link |
00:06:05.160
and then excreting that in the form of urine.
link |
00:06:07.060
That's what the kidney is for.
link |
00:06:08.400
It's like obvious.
link |
00:06:10.240
So the rest of the work is deciding how it does that.
link |
00:06:13.200
And this, it seems to me,
link |
00:06:14.800
is the right approach to take to the brain.
link |
00:06:17.080
You say, well, what is the brain for?
link |
00:06:19.120
The brain, as far as I can tell, is for producing behavior.
link |
00:06:22.760
It's for going from perceptual inputs to behavioral outputs,
link |
00:06:27.980
and the behavioral outputs should be adaptive.
link |
00:06:31.420
So that's what psychology is about.
link |
00:06:33.620
It's about understanding the structure of that function.
link |
00:06:35.920
And then the rest of neuroscience is about figuring out
link |
00:06:38.920
how those operations are actually carried out
link |
00:06:41.880
at a mechanistic level.
link |
00:06:44.160
That's really interesting, but so unlike the kidney,
link |
00:06:47.960
the brain, the gap between the electrical signal
link |
00:06:52.020
and behavior, so you truly see neuroscience
link |
00:06:57.120
as the science that touches behavior,
link |
00:07:01.220
how the brain generates behavior,
link |
00:07:03.260
or how the brain converts raw visual information
link |
00:07:07.400
into understanding.
link |
00:07:08.960
Like, you basically see cognitive science,
link |
00:07:12.520
psychology, and neuroscience as all one science.
link |
00:07:15.860
Yeah, it's a personal statement.
link |
00:07:19.240
Is that a hopeful or a realistic statement?
link |
00:07:22.920
So certainly you will be correct in your feeling
link |
00:07:26.880
in some number of years, but that number of years
link |
00:07:29.240
could be 200, 300 years from now.
link |
00:07:31.440
Oh, well, there's a...
link |
00:07:33.400
Is that aspirational or is that pragmatic engineering
link |
00:07:37.600
feeling that you have?
link |
00:07:39.360
It's both in the sense that this is what I hope
link |
00:07:46.520
and expect will bear fruit over the coming decades,
link |
00:07:53.360
but it's also pragmatic in the sense that I'm not sure
link |
00:07:57.560
what we're doing in either psychology or neuroscience
link |
00:08:02.840
if that's not the framing.
link |
00:08:04.920
I don't know what it means to understand the brain
link |
00:08:09.760
if there's no, if part of the enterprise
link |
00:08:14.320
is not about understanding the behavior
link |
00:08:18.520
that's being produced.
link |
00:08:20.020
I mean, yeah, but I would compare it
link |
00:08:23.040
to maybe astronomers looking at the movement
link |
00:08:25.880
of the planets and the stars without any interest
link |
00:08:30.120
of the underlying physics, right?
link |
00:08:32.360
And I would argue that at least in the early days,
link |
00:08:35.560
there is some value to just tracing the movement
link |
00:08:37.780
of the planets and the stars without thinking
link |
00:08:41.680
about the physics too much because it's such a big leap
link |
00:08:44.100
to start thinking about the physics
link |
00:08:45.600
before you even understand even the basic structural
link |
00:08:48.640
elements of...
link |
00:08:49.520
Oh, I agree with that.
link |
00:08:50.420
I agree.
link |
00:08:51.260
But you're saying in the end, the goal should be
link |
00:08:53.240
to deeply understand.
link |
00:08:54.760
Well, right, and I think...
link |
00:08:57.300
So I thought about this a lot when I was in grad school
link |
00:08:59.240
because a lot of what I studied in grad school
link |
00:09:00.600
was psychology and I found myself a little bit confused
link |
00:09:06.120
about what it meant to...
link |
00:09:08.680
It seems like what we were talking about a lot of the time
link |
00:09:11.500
were virtual causal mechanisms.
link |
00:09:14.800
Like, oh, well, you know, attentional selection
link |
00:09:18.500
then selects some object in the environment
link |
00:09:22.200
and that is then passed on to the motor, you know,
link |
00:09:25.600
information about that is passed on to the motor system.
link |
00:09:27.800
But these are virtual mechanisms.
link |
00:09:29.760
These are, you know, they're metaphors.
link |
00:09:31.480
They're, you know, there's no reduction going on
link |
00:09:37.040
in that conversation to some physical mechanism that,
link |
00:09:40.200
you know, which is really what it would take
link |
00:09:43.240
to fully understand, you know, how behavior is arising.
link |
00:09:47.320
But the causal mechanisms are definitely neurons interacting.
link |
00:09:50.780
I'm willing to say that at this point in history.
link |
00:09:53.360
So in psychology, at least for me personally,
link |
00:09:56.240
there was this strange insecurity about trafficking
link |
00:10:00.160
in these metaphors, you know,
link |
00:10:02.680
which were supposed to explain the function of the mind.
link |
00:10:07.360
If you can't ground them in physical mechanisms,
link |
00:10:09.400
then what is the explanatory validity of these explanations?
link |
00:10:16.120
And I managed to soothe my own nerves
link |
00:10:21.120
by thinking about the history of genetics research.
link |
00:10:29.400
So I'm very far from being an expert
link |
00:10:32.460
on the history of this field.
link |
00:10:34.660
But I know enough to say that, you know,
link |
00:10:38.160
Mendelian genetics preceded, you know, Watson and Crick.
link |
00:10:42.800
And so there was a significant period of time
link |
00:10:45.520
during which people were, you know,
link |
00:10:49.600
productively investigating the structure of inheritance
link |
00:10:54.760
using what was essentially a metaphor,
link |
00:10:56.880
the notion of a gene, you know.
link |
00:10:58.600
Oh, genes do this and genes do that.
link |
00:11:00.760
But, you know, where are the genes?
link |
00:11:02.520
They're sort of an explanatory thing that we made up.
link |
00:11:06.080
And we ascribed to them these causal properties.
link |
00:11:08.880
Oh, there's a dominant, there's a recessive,
link |
00:11:10.640
and then they recombine.
link |
00:11:12.800
And then later, there was a kind of blank there
link |
00:11:17.460
that was filled in with a physical mechanism.
link |
00:11:21.620
That connection was made.
link |
00:11:24.300
But it was worth having that metaphor
link |
00:11:26.800
because that gave us a good sense
link |
00:11:29.360
of what kind of causal mechanism we were looking for.
link |
00:11:34.280
And the fundamental metaphor of cognition, you said,
link |
00:11:38.880
is the interaction of neurons.
link |
00:11:40.780
Is that, what is the metaphor?
link |
00:11:42.680
No, no, the metaphor,
link |
00:11:44.280
the metaphors we use in cognitive psychology
link |
00:11:47.640
are things like attention, the way that memory works.
link |
00:11:56.040
I retrieve something from memory, right?
link |
00:11:59.440
A memory retrieval occurs.
link |
00:12:01.880
What is that?
link |
00:12:02.860
You know, that's not a physical mechanism
link |
00:12:06.620
that I can examine in its own right.
link |
00:12:08.960
But it's still worth having, that metaphorical level.
link |
00:12:13.840
Yeah, so yeah, I misunderstood actually.
link |
00:12:16.000
So the higher level of abstractions
link |
00:12:17.640
is the metaphor that's most useful.
link |
00:12:19.640
Yes.
link |
00:12:20.480
But what about, so how does that connect
link |
00:12:24.420
to the idea that that arises from interaction of neurons?
link |
00:12:33.000
Well, even, is the interaction of neurons
link |
00:12:35.940
also not a metaphor to you?
link |
00:12:38.080
Or is it literally, like that's no longer a metaphor.
link |
00:12:42.400
That's already the lowest level of abstractions
link |
00:12:46.160
that could actually be directly studied.
link |
00:12:50.280
Well, I'm hesitating because I think
link |
00:12:53.840
what I want to say could end up being controversial.
link |
00:12:57.960
So what I want to say is, yes,
link |
00:12:59.960
the interactions of neurons, that's not metaphorical.
link |
00:13:03.040
That's a physical fact.
link |
00:13:04.680
That's where the causal interactions actually occur.
link |
00:13:08.500
Now, I suppose you could say,
link |
00:13:09.880
well, even that is metaphorical relative
link |
00:13:12.720
to the quantum events that underlie.
link |
00:13:15.840
I don't want to go down that rabbit hole.
link |
00:13:17.320
It's always turtles on top of turtles.
link |
00:13:18.920
Yeah, there's turtles all the way down.
link |
00:13:21.200
There's a reduction that you can do.
link |
00:13:22.560
You can say these psychological phenomena
link |
00:13:25.720
can be explained through a very different
link |
00:13:28.200
kind of causal mechanism,
link |
00:13:29.160
which has to do with neurotransmitter release.
link |
00:13:31.440
And so what we're really trying to do
link |
00:13:33.800
in neuroscience writ large, as I say,
link |
00:13:37.120
which for me includes psychology,
link |
00:13:39.760
is to take these psychological phenomena
link |
00:13:44.400
and map them onto neural events.
link |
00:13:49.980
I think remaining forever at the level of description
link |
00:13:57.160
that is natural for psychology,
link |
00:14:00.520
for me personally, would be disappointing.
link |
00:14:02.280
I want to understand how mental activity
link |
00:14:05.640
arises from neural activity.
link |
00:14:10.360
But the converse is also true.
link |
00:14:13.000
Studying neural activity without any sense
link |
00:14:15.880
of what you're trying to explain,
link |
00:14:19.800
to me feels like at best groping around at random.
link |
00:14:27.280
Now, you've kind of talked about this bridging
link |
00:14:30.280
of the gap between psychology and neuroscience,
link |
00:14:32.880
but do you think it's possible,
link |
00:14:34.040
like my love is, like I fell in love with psychology
link |
00:14:38.280
and psychiatry in general with Freud
link |
00:14:40.120
and when I was really young,
link |
00:14:41.760
and I hoped to understand the mind.
link |
00:14:43.540
And for me, understanding the mind,
link |
00:14:45.240
at least at that young age before I discovered AI
link |
00:14:48.400
and even neuroscience, was psychology.
link |
00:14:52.840
And do you think it's possible to understand the mind
link |
00:14:55.840
without getting into all the messy details of neuroscience?
link |
00:14:59.920
Like you kind of mentioned to you it's appealing
link |
00:15:03.120
to try to understand the mechanisms at the lowest level,
link |
00:15:06.040
but do you think that's needed,
link |
00:15:07.560
that's required to understand how the mind works?
link |
00:15:11.480
That's an important part of the whole picture,
link |
00:15:14.760
but I would be the last person on earth
link |
00:15:18.480
to suggest that that reality
link |
00:15:23.440
renders psychology in its own right unproductive.
link |
00:15:29.440
I trained as a psychologist.
link |
00:15:31.160
I am fond of saying that I have learned much more
link |
00:15:35.000
from psychology than I have from neuroscience.
link |
00:15:38.480
To me, psychology is a hugely important discipline.
link |
00:15:43.740
And one thing that warms my heart is that
link |
00:15:50.360
ways of investigating behavior
link |
00:15:54.080
that have been native to cognitive psychology
link |
00:15:58.000
since its dawn in the 60s
link |
00:16:01.600
are starting to become,
link |
00:16:03.960
they're starting to become interesting to AI researchers
link |
00:16:07.680
for a variety of reasons.
link |
00:16:09.480
And that's been exciting for me to see.
link |
00:16:11.680
Can you maybe talk a little bit about what you see
link |
00:16:14.920
as beautiful aspects of psychology,
link |
00:16:19.320
maybe limiting aspects of psychology?
link |
00:16:21.920
I mean, maybe just start it off as a science, as a field.
link |
00:16:25.640
To me, it was when I understood what psychology is,
link |
00:16:29.760
analytical psychology,
link |
00:16:30.880
like the way it's actually carried out,
link |
00:16:32.760
it was really disappointing to see two aspects.
link |
00:16:36.240
One is how small the N is,
link |
00:16:39.200
how small the number of subject is in the studies.
link |
00:16:43.040
And two, it was disappointing to see
link |
00:16:45.320
how controlled the entire,
link |
00:16:47.480
how much it was in the lab.
link |
00:16:50.520
It wasn't studying humans in the wild.
link |
00:16:52.680
There was no mechanism for studying humans in the wild.
link |
00:16:55.000
So that's where I became a little bit disillusioned
link |
00:16:57.640
to psychology.
link |
00:16:59.480
And then the modern world of the internet
link |
00:17:01.680
is so exciting to me.
link |
00:17:02.960
The Twitter data or YouTube data,
link |
00:17:05.720
data of human behavior on the internet becomes exciting
link |
00:17:08.280
because the N grows and then in the wild grows.
link |
00:17:11.920
But that's just my narrow sense.
link |
00:17:13.880
Like, do you have an optimistic, pessimistic, or
link |
00:17:16.560
cynical view of psychology?
link |
00:17:18.160
How do you see the field broadly?
link |
00:17:21.120
When I was in graduate school,
link |
00:17:22.720
it was early enough that there was still a thrill
link |
00:17:27.800
in seeing that there were ways of doing,
link |
00:17:32.960
there were ways of doing experimental science
link |
00:17:36.560
that provided insight to the structure of the mind.
link |
00:17:40.040
One thing that impressed me most when I was at that stage
link |
00:17:43.720
in my education was neuropsychology,
link |
00:17:46.000
looking at, analyzing the behavior of populations
link |
00:17:51.000
who had brain damage of different kinds
link |
00:17:55.560
and trying to understand what the specific deficits were
link |
00:18:02.920
that arose from a lesion in a particular part of the brain.
link |
00:18:06.760
And the kind of experimentation that was done
link |
00:18:08.960
and that's still being done to get answers in that context
link |
00:18:13.520
was so creative and it was so deliberate.
link |
00:18:18.160
It was good science.
link |
00:18:21.360
An experiment answered one question but raised another
link |
00:18:24.400
and somebody would do an experiment
link |
00:18:25.600
that answered that question.
link |
00:18:26.600
And you really felt like you were narrowing in on
link |
00:18:29.360
some kind of approximate understanding
link |
00:18:31.760
of what this part of the brain was for.
link |
00:18:34.840
Do you have an example from memory
link |
00:18:36.880
of what kind of aspects of the mind
link |
00:18:39.560
could be studied in this kind of way?
link |
00:18:41.400
Oh, sure.
link |
00:18:42.240
I mean, the very detailed neuropsychological studies
link |
00:18:45.840
of language function,
link |
00:18:49.720
looking at production and reception
link |
00:18:52.040
and the relationship between visual function,
link |
00:18:57.080
reading and auditory and semantic.
link |
00:19:00.680
There were these, and still are, these beautiful models
link |
00:19:03.920
that came out of that kind of research
link |
00:19:05.560
that really made you feel like you understood something
link |
00:19:08.480
that you hadn't understood before
link |
00:19:10.320
about how language processing is organized in the brain.
link |
00:19:15.320
But having said all that,
link |
00:19:20.840
I think you are, I mean, I agree with you
link |
00:19:25.400
that the cost of doing highly controlled experiments
link |
00:19:30.960
is that you, by construction, miss out on the richness
link |
00:19:36.480
and complexity of the real world.
link |
00:19:39.160
One thing that, so I was drawn into science
link |
00:19:42.360
by what in those days was called connectionism,
link |
00:19:44.960
which is, of course, what we now call deep learning.
link |
00:19:49.120
And at that point in history,
link |
00:19:50.840
neural networks were primarily being used
link |
00:19:54.200
in order to model human cognition.
link |
00:19:56.440
They weren't yet really useful for industrial applications.
link |
00:20:00.200
So you always found neural networks
link |
00:20:02.080
in biological form beautiful.
link |
00:20:04.080
Oh, neural networks were very concretely the thing
link |
00:20:07.160
that drew me into science.
link |
00:20:09.160
I was handed, are you familiar with the PDP books
link |
00:20:13.320
from the 80s when I was in,
link |
00:20:15.720
I went to medical school before I went into science.
link |
00:20:18.240
And, yeah.
link |
00:20:19.160
Really, interesting.
link |
00:20:20.800
Wow.
link |
00:20:21.960
I also did a graduate degree in art history,
link |
00:20:23.920
so I'm kind of exploring.
link |
00:20:26.480
Well, art history, I understand.
link |
00:20:28.560
That's just a curious, creative mind.
link |
00:20:31.280
But medical school, with the dream of what,
link |
00:20:33.960
if we take that slight tangent?
link |
00:20:36.560
What, did you want to be a surgeon?
link |
00:20:39.120
I actually was quite interested in surgery.
link |
00:20:41.680
I was interested in surgery and psychiatry.
link |
00:20:44.200
And I thought, I must be the only person on the planet
link |
00:20:49.520
who was torn between those two fields.
link |
00:20:52.680
And I said exactly that to my advisor in medical school,
link |
00:20:56.840
who turned out, I found out later,
link |
00:20:59.440
to be a famous psychoanalyst.
link |
00:21:01.920
And he said to me, no, no, it's actually not so uncommon
link |
00:21:05.160
to be interested in surgery and psychiatry.
link |
00:21:07.520
And he conjectured that the reason
link |
00:21:10.480
that people develop these two interests
link |
00:21:12.600
is that both fields are about going beneath the surface
link |
00:21:16.360
and kind of getting into the kind of secret.
link |
00:21:19.120
I mean, maybe you understand this as someone
link |
00:21:21.040
who was interested in psychoanalysis.
link |
00:21:23.440
There's sort of a, there's a cliche phrase
link |
00:21:26.200
that people use now, like in NPR,
link |
00:21:28.400
the secret life of blankety blank, right?
link |
00:21:31.400
And that was part of the thrill of surgery,
link |
00:21:33.560
was seeing the secret activity
link |
00:21:38.120
that's inside everybody's abdomen and thorax.
link |
00:21:40.560
That's a very poetic way to connect it to disciplines
link |
00:21:43.880
that are very, practically speaking,
link |
00:21:45.560
different from each other.
link |
00:21:46.520
That's for sure, that's for sure, yes.
link |
00:21:48.480
So how did we get onto medical school?
link |
00:21:52.480
So I was in medical school
link |
00:21:53.720
and I was doing a psychiatry rotation
link |
00:21:57.360
and my kind of advisor in that rotation
link |
00:22:02.280
asked me what I was interested in.
link |
00:22:04.720
And I said, well, maybe psychiatry.
link |
00:22:07.800
He said, why?
link |
00:22:09.280
And I said, well, I've always been interested
link |
00:22:11.120
in how the brain works.
link |
00:22:13.080
I'm pretty sure that nobody's doing scientific research
link |
00:22:16.160
that addresses my interests,
link |
00:22:19.160
which are, I didn't have a word for it then,
link |
00:22:21.880
but I would have said about cognition.
link |
00:22:25.200
And he said, well, you know, I'm not sure that's true.
link |
00:22:27.680
You might be interested in these books.
link |
00:22:29.600
And he pulled down the PDP books from his shelf
link |
00:22:32.440
and they were still shrink wrapped.
link |
00:22:33.960
He hadn't read them, but he handed them to me.
link |
00:22:36.920
He said, you feel free to borrow these.
link |
00:22:38.680
And that was, you know, I went back to my dorm room
link |
00:22:41.440
and I just, you know, read them cover to cover.
link |
00:22:43.400
And what's PDP?
link |
00:22:44.960
Parallel distributed processing,
link |
00:22:46.520
which was one of the original names for deep learning.
link |
00:22:50.840
And so I apologize for the romanticized question,
link |
00:22:55.000
but what idea in the space of neuroscience
link |
00:22:58.360
and the space of the human brain is to you
link |
00:23:00.840
the most beautiful, mysterious, surprising?
link |
00:23:03.880
What had always fascinated me,
link |
00:23:08.480
even when I was a pretty young kid, I think,
link |
00:23:12.320
was the paradox that lies in the fact
link |
00:23:21.360
that the brain is so mysterious
link |
00:23:25.640
and seems so distant.
link |
00:23:30.640
But at the same time,
link |
00:23:32.520
it's responsible for the full transparency
link |
00:23:37.360
of everyday life.
link |
00:23:39.040
The brain is literally what makes everything obvious
link |
00:23:41.520
and familiar.
link |
00:23:43.080
And there's always one in the room with you.
link |
00:23:47.280
Yeah.
link |
00:23:48.120
I used to teach, when I taught at Princeton,
link |
00:23:50.520
I used to teach a cognitive neuroscience course.
link |
00:23:53.000
And the very last thing I would say to the students was,
link |
00:23:56.720
you know, people often,
link |
00:24:00.160
when people think of scientific inspiration,
link |
00:24:04.200
the metaphor is often, well, look to the stars.
link |
00:24:08.120
The stars will inspire you to wonder at the universe
link |
00:24:12.360
and think about your place in it and how things work.
link |
00:24:15.800
And I'm all for looking at the stars,
link |
00:24:18.360
but I've always been much more inspired.
link |
00:24:21.600
And my sense of wonder comes from the,
link |
00:24:25.360
not from the distant, mysterious stars,
link |
00:24:28.560
but from the extremely intimately close brain.
link |
00:24:34.440
Yeah.
link |
00:24:35.280
There's something just endlessly fascinating
link |
00:24:38.680
to me about that.
link |
00:24:40.000
The, like, just like you said,
link |
00:24:41.360
the one that's close and yet distant
link |
00:24:45.500
in terms of our understanding of it.
link |
00:24:48.000
Do you, are you also captivated by the fact
link |
00:24:53.640
that this very conversation is happening
link |
00:24:56.040
because two brains are communicating?
link |
00:24:57.560
Yes, exactly.
link |
00:24:59.120
The, I guess what I mean is the subjective nature
link |
00:25:03.800
of the experience, if I can take a small tangent
link |
00:25:06.320
into the mystical side of it, the consciousness,
link |
00:25:10.240
or when you were saying you're captivated
link |
00:25:13.320
by the idea of the brain,
link |
00:25:14.920
are you talking about specifically
link |
00:25:16.320
the mechanism of cognition?
link |
00:25:18.200
Or are you also just, like, at least for me,
link |
00:25:23.080
it's almost paralyzing, the beauty and the mystery
link |
00:25:26.600
of the fact that it creates the entirety of the experience,
link |
00:25:29.480
not just the reasoning capability, but the experience.
link |
00:25:32.880
Well, I definitely resonate with that latter thought.
link |
00:25:38.920
And I often find discussions of artificial intelligence
link |
00:25:45.280
to be disappointingly narrow.
link |
00:25:50.720
Speaking as someone who has always had an interest in art.
link |
00:25:55.720
Right.
link |
00:25:56.560
I was just gonna go there
link |
00:25:57.400
because it sounds like somebody who has an interest in art.
link |
00:26:00.200
Yeah, I mean, there are many layers
link |
00:26:04.000
to full bore human experience.
link |
00:26:08.200
And in some ways it's not enough to say,
link |
00:26:12.040
oh, well, don't worry, we're talking about cognition,
link |
00:26:15.020
but we'll add emotion, you know?
link |
00:26:17.240
There's an incredible scope
link |
00:26:21.200
to what humans go through in every moment.
link |
00:26:25.280
And yes, so that's part of what fascinates me,
link |
00:26:33.320
is that our brains are producing that.
link |
00:26:40.040
But at the same time, it's so mysterious to us.
link |
00:26:43.040
How?
link |
00:26:46.240
Our brains are literally in our heads
link |
00:26:49.120
producing this experience.
link |
00:26:50.600
Producing the experience.
link |
00:26:52.120
And yet it's so mysterious to us.
link |
00:26:55.100
And so, and the scientific challenge
link |
00:26:57.000
of getting at the actual explanation for that
link |
00:27:00.880
is so overwhelming.
link |
00:27:03.360
That's just, I don't know.
link |
00:27:05.600
Certain people have fixations on particular questions
link |
00:27:08.440
and that's always, that's just always been mine.
link |
00:27:11.680
Yeah, I would say the poetry of that is fascinating.
link |
00:27:14.020
And I'm really interested in natural language as well.
link |
00:27:16.740
And when you look at artificial intelligence community,
link |
00:27:19.440
it always saddens me how much
link |
00:27:23.880
when you try to create a benchmark
link |
00:27:25.720
for the community to gather around,
link |
00:27:28.200
how much of the magic of language is lost
link |
00:27:30.920
when you create that benchmark.
link |
00:27:33.240
That there's something, we talk about experience,
link |
00:27:35.920
the music of the language, the wit,
link |
00:27:38.600
the something that makes a rich experience,
link |
00:27:41.080
something that would be required to pass
link |
00:27:43.800
the spirit of the Turing test is lost in these benchmarks.
link |
00:27:47.660
And I wonder how to get it back in
link |
00:27:50.240
because it's very difficult.
link |
00:27:51.920
The moment you try to do like real good rigorous science,
link |
00:27:55.160
you lose some of that magic.
link |
00:27:56.960
When you try to study cognition
link |
00:28:00.160
in a rigorous scientific way,
link |
00:28:01.560
it feels like you're losing some of the magic.
link |
00:28:03.800
Seeing cognition in a mechanistic way,
link |
00:28:07.520
the way AI folks do at this stage in our history.
link |
00:28:10.060
Well, I agree with you, but at the same time,
link |
00:28:13.040
one thing that I found really exciting
link |
00:28:18.040
about that first wave of deep learning models in cognition
link |
00:28:22.960
was the fact that the people who were building these models
link |
00:28:29.640
were focused on the richness and complexity
link |
00:28:32.960
of human cognition.
link |
00:28:34.800
So an early debate in cognitive science,
link |
00:28:40.080
which I sort of witnessed as a grad student
link |
00:28:41.820
was about something that sounds very dry,
link |
00:28:44.200
which is the formation of the past tense.
link |
00:28:47.180
But there were these two camps.
link |
00:28:49.200
One said, well, the mind encodes certain rules
link |
00:28:54.400
and it also has a list of exceptions
link |
00:28:57.900
because of course, the rule is add ED,
link |
00:29:00.380
but that's not always what you do.
link |
00:29:01.820
So you have to have a list of exceptions.
link |
00:29:05.000
And then there were the connectionists
link |
00:29:06.960
who evolved into the deep learning people who said,
link |
00:29:10.700
well, if you look carefully at the data,
link |
00:29:13.820
if you actually look at corpora, like language corpora,
link |
00:29:18.280
it turns out to be very rich
link |
00:29:20.080
because yes, there are most verbs
link |
00:29:25.080
that you just tack on ED, and then there are exceptions,
link |
00:29:28.640
but there are rules that the exceptions aren't just random.
link |
00:29:36.040
There are certain clues to which verbs
link |
00:29:39.560
should be exceptional.
link |
00:29:41.040
And then there are exceptions to the exceptions.
link |
00:29:44.120
And there was a word that was kind of deployed
link |
00:29:47.760
in order to capture this, which was quasi regular.
link |
00:29:51.760
In other words, there are rules, but it's messy.
link |
00:29:54.740
And there's structure even among the exceptions.
link |
00:29:58.760
And it would be, yeah, you could try to write down,
link |
00:30:01.280
we could try to write down the structure
link |
00:30:03.820
in some sort of closed form,
link |
00:30:04.840
but really the right way to understand
link |
00:30:07.560
how the brain is handling all this,
link |
00:30:09.080
and by the way, producing all of this,
link |
00:30:11.440
is to build a deep neural network
link |
00:30:14.000
and train it on this data
link |
00:30:15.200
and see how it ends up representing all of this richness.
link |
00:30:18.520
So the way that deep learning
link |
00:30:21.420
was deployed in cognitive psychology
link |
00:30:23.720
was that was the spirit of it.
link |
00:30:25.960
It was about that richness.
link |
00:30:29.560
And that's something that I always found very compelling,
link |
00:30:31.960
still do.
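To make that quasi regular idea concrete, here is a minimal sketch in the spirit of those early connectionist models, not the actual Rumelhart-McClelland architecture or encoding: the toy verb list, the one-hot coding, and the network sizes below are all illustrative assumptions. The point is that one set of weights can absorb the "add -ed" rule, the exceptions, and the structure among the exceptions at the same time.

```python
# A toy, PDP-flavored sketch (illustrative assumptions throughout;
# not the original encoding): map verb stems to past tenses with one
# small network so that the regular rule, the exceptions, and the
# quasi-regular clusters (sing/sang, ring/rang) all live in the same
# set of weights.
import torch
import torch.nn as nn

pairs = [("walk", "walked"), ("jump", "jumped"), ("talk", "talked"),
         ("sing", "sang"), ("ring", "rang"), ("go", "went")]

chars = sorted({c for s, t in pairs for c in s + t} | {"#"})  # "#" pads
idx = {c: i for i, c in enumerate(chars)}
MAXLEN = max(len(t) for _, t in pairs)

def encode(word):
    """One-hot encode a word, padded with '#' to MAXLEN, then flatten."""
    word = word.ljust(MAXLEN, "#")
    out = torch.zeros(MAXLEN, len(chars))
    for i, c in enumerate(word):
        out[i, idx[c]] = 1.0
    return out.flatten()

X = torch.stack([encode(s) for s, _ in pairs])
Y = torch.stack([encode(t) for _, t in pairs])

# One hidden layer, in the style of the early models.
model = nn.Sequential(nn.Linear(X.shape[1], 64), nn.Tanh(),
                      nn.Linear(64, Y.shape[1]))
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(2000):
    loss = nn.functional.mse_loss(model(X), Y)
    opt.zero_grad(); loss.backward(); opt.step()

def decode(vec):
    """Read off the most active character at each position."""
    best = vec.view(MAXLEN, len(chars)).argmax(dim=1)
    return "".join(chars[int(i)] for i in best).rstrip("#")

print(decode(model(encode("sing")).detach()))  # "sang" once trained
```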
link |
00:30:33.160
Is there something especially interesting
link |
00:30:36.200
and profound to you
link |
00:30:37.520
in terms of our current deep learning neural network,
link |
00:30:40.480
artificial neural network approaches,
link |
00:30:42.640
and whatever we do understand
link |
00:30:46.300
about the biological neural networks in our brain?
link |
00:30:49.000
Is there, there's quite a few differences.
link |
00:30:52.440
Are some of them to you,
link |
00:30:54.680
either interesting or perhaps profound
link |
00:30:58.040
in terms of the gap we might want to try to close
link |
00:31:03.040
in trying to create a human level intelligence?
link |
00:31:07.560
What I would say here is something
link |
00:31:08.840
that a lot of people are saying,
link |
00:31:10.720
which is that one seeming limitation
link |
00:31:16.580
of the systems that we're building now
link |
00:31:18.960
is that they lack the kind of flexibility,
link |
00:31:22.900
the readiness to sort of turn on a dime
link |
00:31:25.960
when the context calls for it
link |
00:31:28.200
that is so characteristic of human behavior.
link |
00:31:32.200
So is that connected to you to the,
link |
00:31:34.920
like which aspect of the neural networks in our brain
link |
00:31:37.720
is that connected to?
link |
00:31:39.160
Is that closer to the cognitive science level of,
link |
00:31:45.080
now again, see like my natural inclination
link |
00:31:47.320
is to separate into three disciplines of neuroscience,
link |
00:31:51.640
cognitive science and psychology.
link |
00:31:54.280
And you've already kind of shut that down
link |
00:31:56.380
by saying you're kind of see them as separate,
link |
00:31:58.360
but just to look at those layers,
link |
00:32:01.500
I guess, is there something about the lowest layer
link |
00:32:05.320
of the way the neurons interact
link |
00:32:09.160
that is profound to you in terms of this difference
link |
00:32:13.320
to the artificial neural networks,
link |
00:32:15.480
or is all the key differences
link |
00:32:17.220
at a higher level of abstraction?
link |
00:32:20.720
One thing I often think about is that,
link |
00:32:24.440
if you take an introductory computer science course
link |
00:32:27.140
and they are introducing you to the notion
link |
00:32:29.600
of Turing machines,
link |
00:32:31.480
one way of articulating
link |
00:32:36.000
what the significance of a Turing machine is,
link |
00:32:39.320
is that it's a machine emulator.
link |
00:32:42.760
It can emulate any other machine.
link |
00:32:47.540
And that to me,
link |
00:32:52.960
that way of looking at a Turing machine
link |
00:32:56.200
really sticks with me.
link |
00:32:57.640
I think of humans as maybe sharing
link |
00:33:01.960
in some of that character.
link |
00:33:05.000
We're capacity limited,
link |
00:33:06.160
we're not Turing machines obviously,
link |
00:33:07.540
but we have the ability to adapt behaviors
link |
00:33:11.040
that are very much unlike anything we've done before,
link |
00:33:15.420
but there's some basic mechanism
link |
00:33:17.720
that's implemented in our brain
link |
00:33:18.960
that allows us to run software.
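As a toy illustration of the machine emulator idea, and not anything discussed in the conversation, here is a single fixed interpreter loop that can run any machine handed to it as data; the binary incrementer below is an arbitrary example of "software" the interpreter can run.

```python
# A toy illustration: one fixed interpreter that can emulate any
# machine described as data, which is the sense in which a Turing
# machine is a "machine emulator."
def run_machine(rules, tape, state="start", head=0, max_steps=1000):
    """rules maps (state, symbol) -> (next_state, write, move)."""
    cells = dict(enumerate(tape))
    for _ in range(max_steps):
        if state == "halt":
            break
        symbol = cells.get(head, "_")
        state, write, move = rules[(state, symbol)]
        cells[head] = write
        head += move
    return "".join(cells[i] for i in sorted(cells)).strip("_")

# One particular "program": scan right, then binary-increment leftward.
increment = {("start", "0"): ("start", "0", +1),
             ("start", "1"): ("start", "1", +1),
             ("start", "_"): ("carry", "_", -1),
             ("carry", "1"): ("carry", "0", -1),
             ("carry", "0"): ("halt", "1", -1),
             ("carry", "_"): ("halt", "1", -1)}

print(run_machine(increment, "1011"))  # 1011 + 1 = 1100
```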
link |
00:33:22.400
But just on that point, you mentioned Turing machine,
link |
00:33:24.600
but nevertheless, it's fundamentally
link |
00:33:26.840
our brains are just computational devices in your view.
link |
00:33:29.720
Is that what you're getting at?
link |
00:33:31.160
It was a little bit unclear to this line you drew.
link |
00:33:35.680
Is there any magic in there
link |
00:33:37.800
or is it just basic computation?
link |
00:33:40.720
I'm happy to think of it as just basic computation,
link |
00:33:43.320
but mind you, I won't be satisfied
link |
00:33:46.120
until somebody explains to me
link |
00:33:48.280
what the basic computations are
link |
00:33:49.840
that are leading to the full richness of human cognition.
link |
00:33:54.760
It's not gonna be enough for me
link |
00:33:56.680
to understand what the computations are
link |
00:33:58.880
that allow people to do arithmetic or play chess.
link |
00:34:02.160
I want the whole thing.
link |
00:34:06.360
And a small tangent,
link |
00:34:07.780
because you kind of mentioned coronavirus,
link |
00:34:10.480
there's group behavior.
link |
00:34:12.400
Oh, sure.
link |
00:34:13.480
Is there something interesting
link |
00:34:14.960
to your search of understanding the human mind
link |
00:34:18.720
where behavior of large groups
link |
00:34:21.520
or just behavior of groups is interesting,
link |
00:34:24.240
seeing that as a collective mind,
link |
00:34:25.640
as a collective intelligence,
link |
00:34:27.120
perhaps seeing the groups of people
link |
00:34:28.880
as a single intelligent organism,
link |
00:34:31.080
especially looking at the reinforcement learning work
link |
00:34:34.200
you've done recently.
link |
00:34:35.600
Well, yeah, I can't.
link |
00:34:36.920
I mean, I have the honor of working
link |
00:34:41.760
with a lot of incredibly smart people
link |
00:34:43.640
and I wouldn't wanna take any credit
link |
00:34:45.480
for leading the way on the multiagent work
link |
00:34:48.820
that's come out of my group or DeepMind lately,
link |
00:34:51.360
but I do find it fascinating.
link |
00:34:53.840
And I mean, I think it can't be debated.
link |
00:35:00.760
You know, human behavior arises within communities.
link |
00:35:06.000
That just seems to me self evident.
link |
00:35:08.960
But to me, it is self evident,
link |
00:35:11.400
but that seems to be a profound aspect
link |
00:35:14.720
of something that created us.
link |
00:35:16.040
It's like, if you look at 2001: A Space Odyssey
link |
00:35:19.160
when the monkeys touched the...
link |
00:35:21.360
Yeah.
link |
00:35:22.200
That's the magical moment I think Yuval Harari argues
link |
00:35:25.320
that the ability of our large numbers of humans
link |
00:35:29.400
to hold an idea, to converge towards an idea together,
link |
00:35:31.880
like you said, shaking hands versus bumping elbows,
link |
00:35:34.360
somehow converge without being in a room altogether,
link |
00:35:40.880
just kind of this like distributed convergence
link |
00:35:43.380
towards an idea over a particular period of time
link |
00:35:46.720
seems to be fundamental to just every aspect
link |
00:35:51.520
of our cognition, of our intelligence,
link |
00:35:53.400
because humans, I will talk about reward,
link |
00:35:56.720
but it seems like we don't really have
link |
00:35:58.720
a clear objective function under which we operate,
link |
00:36:01.320
but we all kind of converge towards one somehow.
link |
00:36:04.160
And that to me has always been a mystery
link |
00:36:07.600
that I think is somehow productive
link |
00:36:09.840
for also understanding AI systems.
link |
00:36:13.620
But I guess that's the next step.
link |
00:36:16.520
The first step is try to understand the mind.
link |
00:36:18.780
Well, I don't know.
link |
00:36:19.700
I mean, I think there's something to the argument
link |
00:36:22.520
that that kind of like strictly bottom up approach
link |
00:36:27.520
is wrongheaded.
link |
00:36:29.920
In other words, there are basic phenomena,
link |
00:36:34.880
basic aspects of human intelligence
link |
00:36:36.860
that can only be understood in the context of groups.
link |
00:36:43.280
I'm perfectly open to that.
link |
00:36:44.680
I've never been particularly convinced by the notion
link |
00:36:48.680
that we should consider intelligence
link |
00:36:52.360
to inhere at the level of communities.
link |
00:36:55.600
I don't know why, I'm sort of stuck on the notion
link |
00:36:58.720
that the basic unit that we want to understand
link |
00:37:01.380
is individual humans.
link |
00:37:02.720
And if we have to understand that
link |
00:37:05.880
in the context of other humans, fine.
link |
00:37:08.560
But for me, intelligence is just,
link |
00:37:11.320
I stubbornly define it as something
link |
00:37:14.640
that is an aspect of an individual human.
link |
00:37:18.800
That's just my, I don't know if that's a matter of taste.
link |
00:37:20.200
I'm with you, but that could be the reductionist dream
link |
00:37:22.880
of a scientist because you can understand a single human.
link |
00:37:26.400
It also is very possible that intelligence can only arise
link |
00:37:30.760
when there's multiple intelligences.
link |
00:37:33.040
When there's multiple sort of, it's a sad thing,
link |
00:37:37.480
if that's true, because it's very difficult to study.
link |
00:37:39.880
But if it's just one human,
link |
00:37:42.440
that one human would not be Homo sapiens,
link |
00:37:44.880
would not become that intelligent.
link |
00:37:46.520
That's a possibility.
link |
00:37:48.500
I'm with you.
link |
00:37:50.040
One thing I will say along these lines
link |
00:37:52.800
is that I think a serious effort
link |
00:38:01.280
to understand human intelligence
link |
00:38:05.600
and maybe to build humanlike intelligence
link |
00:38:09.680
needs to pay just as much attention
link |
00:38:11.840
to the structure of the environment
link |
00:38:14.000
as to the structure of the cognizing system,
link |
00:38:20.040
whether it's a brain or an AI system.
link |
00:38:23.260
That's one thing I took away actually
link |
00:38:24.640
from my early studies with the pioneers
link |
00:38:27.920
of neural network research,
link |
00:38:29.900
people like Jay McClelland and John Cohen.
link |
00:38:34.080
The structure of cognition is really,
link |
00:38:38.600
it's only partly a function of the architecture of the brain
link |
00:38:44.480
and the learning algorithms that it implements.
link |
00:38:46.980
What really shapes it is the interaction of those things
link |
00:38:51.520
with the structure of the world
link |
00:38:54.460
in which those things are embedded.
link |
00:38:56.680
And that's especially important for,
link |
00:38:58.280
that's made most clear in reinforcement learning
link |
00:39:00.880
where the simulated environment is,
link |
00:39:03.720
you can only learn as much as you can simulate.
link |
00:39:05.800
And that's what DeepMind made very clear
link |
00:39:09.360
with the other aspect of the environment,
link |
00:39:11.080
which is the self play mechanism of the other agent,
link |
00:39:15.600
of the competitive behavior,
link |
00:39:16.840
in which the other agent becomes the environment, essentially.
link |
00:39:20.000
And that's, I mean, one of the most exciting ideas in AI
link |
00:39:24.080
is the self play mechanism that's able to learn successfully.
link |
00:39:27.960
So there you go.
link |
00:39:28.800
There's a thing where competition is essential
link |
00:39:31.600
for learning, at least in that context.
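Here is a minimal self play sketch, assuming a toy game rather than DeepMind's actual training setups: two policy gradient learners play rock-paper-scissors against each other, so each agent's environment is the other, improving agent. All hyperparameters are arbitrary.

```python
# A minimal self-play sketch (a toy game, not DeepMind's setup): two
# REINFORCE learners face each other, so the "environment" each one
# experiences gets harder as its opponent learns.
import numpy as np

rng = np.random.default_rng(0)
PAYOFF = np.array([[0, -1, 1],
                   [1, 0, -1],
                   [-1, 1, 0]])  # row player's reward vs. column player

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

logits = [np.zeros(3), np.zeros(3)]  # one policy per agent
lr = 0.05
for _ in range(20000):
    probs = [softmax(l) for l in logits]
    acts = [rng.choice(3, p=p) for p in probs]
    rewards = [PAYOFF[acts[0], acts[1]], PAYOFF[acts[1], acts[0]]]
    for i in range(2):  # policy-gradient step for each agent
        grad = -probs[i]
        grad[acts[i]] += 1.0  # grad of log pi(a) for a softmax policy
        logits[i] += lr * rewards[i] * grad

# Each policy ends up hovering around the uniform mixed strategy, the
# game's equilibrium, because any exploitable bias gets punished.
print([np.round(softmax(l), 2) for l in logits])
```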
link |
00:39:35.040
So if we can step back into another sort of beautiful world,
link |
00:39:37.960
which is the actual mechanics,
link |
00:39:42.040
the dirty mess of it of the human brain,
link |
00:39:44.680
is there something for people who might not know?
link |
00:39:49.440
Is there something you can comment on
link |
00:39:51.120
or describe the key parts of the brain
link |
00:39:53.960
that are important for intelligence or just in general,
link |
00:39:56.840
what are the different parts of the brain
link |
00:39:58.620
that you're curious about that you've studied
link |
00:40:01.120
and that are just good to know about
link |
00:40:03.880
when you're thinking about cognition?
link |
00:40:06.240
Well, my area of expertise, if I have one,
link |
00:40:11.200
is prefrontal cortex.
link |
00:40:14.200
So, you know. What's that?
link |
00:40:16.560
Where do we?
link |
00:40:18.200
It depends on who you ask.
link |
00:40:21.520
The technical definition is anatomical.
link |
00:40:25.640
There are parts of your brain
link |
00:40:30.680
that are responsible for motor behavior
link |
00:40:32.480
and they're very easy to identify.
link |
00:40:35.740
And the region of your cerebral cortex,
link |
00:40:40.760
the sort of outer crust of your brain
link |
00:40:43.960
that lies in front of those
link |
00:40:46.440
is defined as the prefrontal cortex.
link |
00:40:49.360
And when you say anatomical, sorry to interrupt,
link |
00:40:51.960
so that's referring to sort of the geographic region
link |
00:40:57.160
as opposed to some kind of functional definition.
link |
00:41:00.160
Exactly, so this is kind of the coward's way out.
link |
00:41:04.400
I'm telling you what the prefrontal cortex is
link |
00:41:06.000
just in terms of what part of the real estate it occupies.
link |
00:41:09.640
It's the thing in the front of the brain.
link |
00:41:10.720
Yeah, exactly.
link |
00:41:11.680
And in fact, the early history
link |
00:41:14.960
of neuroscientific investigation
link |
00:41:20.840
of what this front part of the brain does
link |
00:41:23.480
is sort of funny to read
link |
00:41:25.760
because it was really World War I
link |
00:41:32.280
that started people down this road
link |
00:41:34.580
of trying to figure out what different parts of the brain,
link |
00:41:37.280
the human brain do in the sense
link |
00:41:39.440
that there were a lot of people with brain damage
link |
00:41:42.560
who came back from the war with brain damage.
link |
00:41:44.800
And that provided, as tragic as that was,
link |
00:41:47.740
it provided an opportunity for scientists
link |
00:41:49.900
to try to identify the functions of different brain regions.
link |
00:41:53.440
And that was actually incredibly productive,
link |
00:41:56.200
but one of the frustrations that neuropsychologists faced
link |
00:41:59.480
was they couldn't really identify exactly
link |
00:42:02.160
what the deficit was that arose from damage
link |
00:42:05.040
to these most kind of frontal parts of the brain.
link |
00:42:08.440
It was just a very difficult thing to pin down.
link |
00:42:13.680
There were a couple of neuropsychologists
link |
00:42:16.080
who identified through a large amount
link |
00:42:20.600
of clinical experience and close observation,
link |
00:42:23.000
they started to put their finger on a syndrome
link |
00:42:26.240
that was associated with frontal damage.
link |
00:42:27.680
Actually, one of them was a Russian neuropsychologist
link |
00:42:30.480
named Luria, who students of cognitive psychology still read.
link |
00:42:36.160
And what he started to figure out was that
link |
00:42:41.360
the frontal cortex was somehow involved in flexibility,
link |
00:42:48.060
in guiding behaviors that required someone
link |
00:42:52.320
to override a habit, or to do something unusual,
link |
00:42:57.600
or to change what they were doing in a very flexible way
link |
00:43:01.040
from one moment to another.
link |
00:43:02.560
So focused on like new experiences.
link |
00:43:05.080
And so the way your brain processes
link |
00:43:08.800
and acts in new experiences.
link |
00:43:10.960
Yeah, what later helped bring this function
link |
00:43:14.760
into better focus was a distinction
link |
00:43:17.240
between controlled and automatic behavior,
link |
00:43:19.880
or in other literatures, this is referred to
link |
00:43:23.680
as habitual behavior versus goal directed behavior.
link |
00:43:28.280
So it's very, very clear that the human brain
link |
00:43:33.440
has pathways that are dedicated to habits,
link |
00:43:36.600
to things that you do all the time,
link |
00:43:39.360
and they need to be automatized
link |
00:43:42.440
so that they don't require you to concentrate too much.
link |
00:43:45.140
So that leaves your cognitive capacity
link |
00:43:47.840
free to do other things.
link |
00:43:49.800
Just think about the difference
link |
00:43:51.640
between driving when you're learning to drive
link |
00:43:55.960
versus driving after you're a fairly expert.
link |
00:43:59.160
There are brain pathways that slowly absorb
link |
00:44:03.560
those frequently performed behaviors
link |
00:44:07.840
so that they can be habits, so that they can be automatic.
link |
00:44:12.360
That's kind of like the purest form of learning.
link |
00:44:14.900
I guess it's happening there, which is why,
link |
00:44:18.360
I mean, this is kind of jumping ahead,
link |
00:44:20.000
which is why that perhaps is the most useful for us
link |
00:44:22.480
to focusing on and trying to see
link |
00:44:24.080
how artificial intelligence systems can learn.
link |
00:44:27.340
Is that the way you think?
link |
00:44:28.180
It's interesting.
link |
00:44:29.000
I do think about this distinction
link |
00:44:30.040
between controlled and automatic,
link |
00:44:31.440
or goal directed and habitual behavior a lot
link |
00:44:34.600
in thinking about where we are in AI research.
link |
00:44:42.960
But just to finish the kind of dissertation here,
link |
00:44:46.480
the role of the prefrontal cortex
link |
00:44:51.380
is generally understood these days
link |
00:44:54.600
sort of in contradistinction to that habitual domain.
link |
00:45:00.440
In other words, the prefrontal cortex
link |
00:45:02.320
is what helps you override those habits.
link |
00:45:05.840
It's what allows you to say,
link |
00:45:07.440
well, what I usually do in this situation is X,
link |
00:45:10.800
but given the context, I probably should do Y.
link |
00:45:14.160
I mean, the elbow bump is a great example, right?
link |
00:45:18.080
Reaching out and shaking hands
link |
00:45:19.300
is probably a habitual behavior,
link |
00:45:22.520
and it's the prefrontal cortex that allows us
link |
00:45:26.000
to bear in mind that there's something unusual
link |
00:45:28.760
going on right now, and in this situation,
link |
00:45:31.360
I need to not do the usual thing.
link |
00:45:34.720
The kind of behaviors that Luria reported,
link |
00:45:38.560
and he built tests for detecting these kinds of things,
link |
00:45:42.040
were exactly like this.
link |
00:45:43.460
So in other words, when I stick out my hand,
link |
00:45:47.540
I want you instead to present your elbow.
link |
00:45:49.760
A patient with frontal damage
link |
00:45:51.080
would have a great deal of trouble with that.
link |
00:45:53.520
Somebody proffering their hand would elicit a handshake.
link |
00:45:58.800
The prefrontal cortex is what allows us to say,
link |
00:46:00.920
hold on, hold on, that's the usual thing,
link |
00:46:03.840
but I have the ability to bear in mind
link |
00:46:07.120
even very unusual contexts and to reason about
link |
00:46:10.520
what behavior is appropriate there.
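One common way to make the controlled versus automatic distinction concrete in computational terms, a framing borrowed from the wider literature rather than from Botvinick's words here, is to treat habits as a cached policy and prefrontal control as model based evaluation that can override the cache when the context is unusual. The sketch below, with entirely made up names and values, illustrates that split.

```python
# A schematic sketch of the habitual vs. goal-directed framing:
# habits as a cached, model-free policy; "prefrontal" control as
# model-based evaluation that overrides the cache in unusual contexts.
# All names and values are invented for illustration.
habit_policy = {"greeting": "shake_hands"}  # automatized, cheap, fast

world_model = {
    "actions": {"greeting": ["shake_hands", "bump_elbows"]},
    "value": {("shake_hands", "normal"): 1.0,
              ("shake_hands", "pandemic"): -1.0,
              ("bump_elbows", "normal"): 0.2,
              ("bump_elbows", "pandemic"): 1.0},
}

def controlled_choice(situation, context):
    """Model-based: evaluate each candidate action in this context."""
    options = world_model["actions"][situation]
    return max(options, key=lambda a: world_model["value"][(a, context)])

def act(situation, context, unusual):
    # Control engages only when the context is flagged as unusual;
    # otherwise the habitual response fires unexamined.
    if unusual:
        return controlled_choice(situation, context)
    return habit_policy[situation]

print(act("greeting", "normal", unusual=False))   # shake_hands
print(act("greeting", "pandemic", unusual=True))  # bump_elbows
```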
link |
00:46:13.240
Just to get a sense, are we humans special
link |
00:46:17.560
in the presence of the prefrontal cortex?
link |
00:46:20.680
Do mice have a prefrontal cortex?
link |
00:46:22.640
Do other mammals that we can study?
link |
00:46:25.900
If no, then how do they integrate new experiences?
link |
00:46:30.040
Yeah, that's a really tricky question
link |
00:46:33.760
and a very timely question
link |
00:46:35.840
because we have revolutionary new technologies
link |
00:46:44.040
for monitoring, measuring,
link |
00:46:48.280
and also causally influencing neural behavior
link |
00:46:52.040
in mice and fruit flies.
link |
00:46:57.000
And these techniques are not fully available
link |
00:47:00.640
even for studying brain function in monkeys,
link |
00:47:06.080
let alone humans.
link |
00:47:08.160
And so it's a very sort of, for me at least,
link |
00:47:12.920
a very urgent question whether the kinds of things
link |
00:47:16.160
that we wanna understand about human intelligence
link |
00:47:18.000
can be pursued in these other organisms.
link |
00:47:22.000
And to put it briefly, there's disagreement.
link |
00:47:26.500
People who study fruit flies will often tell you,
link |
00:47:32.960
hey, fruit flies are smarter than you think.
link |
00:47:35.520
And they'll point to experiments where fruit flies
link |
00:47:37.600
were able to learn new behaviors,
link |
00:47:40.320
were able to generalize from one stimulus to another
link |
00:47:44.180
in a way that suggests that they have abstractions
link |
00:47:47.500
that guide their generalization.
link |
00:47:51.880
I've had many conversations in which
link |
00:47:53.840
I will start by
link |
00:47:58.160
recounting some observation about mouse behavior
link |
00:48:05.200
where it seemed like mice were taking an awfully long time
link |
00:48:09.060
to learn a task that for a human would be profoundly trivial.
link |
00:48:13.660
And I will conclude from that,
link |
00:48:16.460
that mice really don't have the cognitive flexibility
link |
00:48:18.800
that we want to explain.
link |
00:48:20.100
And then a mouse researcher will say to me,
link |
00:48:21.760
well, hold on, that experiment may not have worked
link |
00:48:26.360
because you asked a mouse to deal with stimuli
link |
00:48:31.280
and behaviors that were very unnatural for the mouse.
link |
00:48:34.300
If instead you kept the logic of the experiment the same,
link |
00:48:38.760
but presented the information in a way
link |
00:48:44.440
that aligns with what mice are used to dealing with
link |
00:48:46.880
in their natural habitats,
link |
00:48:48.480
you might find that a mouse actually has more intelligence
link |
00:48:51.080
than you think.
link |
00:48:52.440
And then they'll go on to show you videos
link |
00:48:54.920
of mice doing things in their natural habitat,
link |
00:48:57.440
which seem strikingly intelligent,
link |
00:49:00.000
dealing with physical problems.
link |
00:49:02.920
I have to drag this piece of food back to my lair,
link |
00:49:07.180
but there's something in my way
link |
00:49:08.560
and how do I get rid of that thing?
link |
00:49:10.400
So I think these are open questions
link |
00:49:13.160
to sum that up.
link |
00:49:15.400
And then taking a small step back related to that
link |
00:49:18.520
is you kind of mentioned we're taking a little shortcut
link |
00:49:21.440
by saying the prefrontal cortex is a geographic part,
link |
00:49:26.600
a distinct region of the brain.
link |
00:49:28.280
But what's your sense, in a bigger philosophical view, of the
link |
00:49:33.720
prefrontal cortex and the brain in general,
link |
00:49:36.260
do you have a sense that it's a set of subsystems
link |
00:49:38.840
in the way we've kind of implied
link |
00:49:41.180
that are pretty distinct,
link |
00:49:46.180
or to what degree is it a giant interconnected mess
link |
00:49:49.460
where everything kind of does everything
link |
00:49:51.380
and it's impossible to disentangle them?
link |
00:49:54.920
I think there's overwhelming evidence
link |
00:49:57.020
that there's functional differentiation,
link |
00:50:00.060
that it's clearly not the case
link |
00:50:03.460
that all parts of the brain are doing the same thing.
link |
00:50:07.100
This follows immediately from the kinds of studies
link |
00:50:11.100
of brain damage that we were chatting about before.
link |
00:50:14.620
It's obvious from what you see
link |
00:50:18.060
if you stick an electrode in the brain
link |
00:50:19.620
and measure what's going on at the level of neural activity.
link |
00:50:25.960
Having said that, there are two other things to add,
link |
00:50:30.680
which kind of, I don't know,
link |
00:50:32.740
maybe tug in the other direction.
link |
00:50:34.340
One is that, when you look carefully
link |
00:50:39.740
at functional differentiation in the brain,
link |
00:50:42.220
what you usually end up concluding,
link |
00:50:44.900
at least this is my observation of the literature,
link |
00:50:48.140
is that the differences between regions are graded
link |
00:50:52.780
rather than being discrete.
link |
00:50:55.180
So it doesn't seem like it's easy
link |
00:50:57.460
to divide the brain up into true modules
link |
00:51:03.300
that have clear boundaries and that have
link |
00:51:07.460
you know, clear channels of communication between them.
link |
00:51:16.020
And this applies to the prefrontal cortex?
link |
00:51:18.020
Yeah, oh yeah.
link |
00:51:18.860
The prefrontal cortex is made up
link |
00:51:20.200
of a bunch of different subregions,
link |
00:51:23.140
the functions of which are not clearly defined
link |
00:51:27.380
and the borders of which seem to be quite vague.
link |
00:51:32.300
And then there's another thing that's popping up
link |
00:51:34.420
in very recent research, which, you know,
link |
00:51:40.280
involves application of these new techniques,
link |
00:51:44.940
and there are a number of studies that suggest that
link |
00:51:48.820
parts of the brain that we would have previously thought
link |
00:51:51.540
were quite focused in their function
link |
00:51:57.740
are actually carrying signals
link |
00:51:59.100
that we wouldn't have thought would be there.
link |
00:52:01.340
For example, looking in the primary visual cortex,
link |
00:52:04.500
which is classically thought of as basically
link |
00:52:07.900
the first cortical way station
link |
00:52:09.380
for processing visual information.
link |
00:52:10.900
Basically what it should care about is, you know,
link |
00:52:12.980
where are the edges in this scene that I'm viewing?
link |
00:52:17.460
It turns out that if you have enough data,
link |
00:52:19.460
you can recover information from primary visual cortex
link |
00:52:22.220
about all sorts of things.
link |
00:52:23.220
Like, you know, what behavior the animal is engaged
link |
00:52:26.180
in right now and how much reward is on offer
link |
00:52:29.340
in the task that it's pursuing.
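To make the flavor of those decoding results concrete, here is a minimal sketch in Python. Everything in it is simulated and the parameters are arbitrary; it is not the analysis from any particular study. The point is just that a weak, distributed signal (here, reward) can be read out of a population whose nominal job is something else (here, orientation), given enough trials.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Simulated "V1" population: 200 trials x 100 neurons.
# Each neuron mostly encodes an edge orientation (its nominal job),
# but a weak reward signal leaks into every trial's activity.
n_trials, n_neurons = 200, 100
orientation = rng.uniform(0, np.pi, n_trials)
reward = rng.integers(0, 2, n_trials)          # low vs. high reward on offer

tuning = rng.normal(size=n_neurons)            # orientation weights
leak = 0.3 * rng.normal(size=n_neurons)        # weak reward "leak"
activity = (np.cos(orientation)[:, None] * tuning
            + reward[:, None] * leak
            + rng.normal(scale=1.0, size=(n_trials, n_neurons)))

# With enough trials, a linear decoder finds the reward signal even
# though no single neuron is "about" reward.
score = cross_val_score(LogisticRegression(max_iter=1000),
                        activity, reward, cv=5).mean()
print(f"decoding reward from simulated V1: {score:.2f} accuracy")
```

The decoder lands well above chance, which is the sense in which a region with a well-defined coarse function can still carry signals from very different domains.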
link |
00:52:31.340
So it's clear that even regions whose function
link |
00:52:36.740
is pretty well defined at a coarse grain
link |
00:52:40.540
are nonetheless carrying some information
link |
00:52:42.860
from very different domains.
link |
00:52:47.060
So, you know, the history of neuroscience
link |
00:52:49.780
is sort of this oscillation between the two views
link |
00:52:52.660
that you articulated, you know, the kind of modular view
link |
00:52:55.460
and then the big, you know, mush view.
link |
00:52:57.740
And, you know, I think, I guess we're gonna end up
link |
00:53:01.580
somewhere in the middle.
link |
00:53:02.800
Which is unfortunate for our understanding
link |
00:53:05.580
because there's something about our, you know,
link |
00:53:08.880
conceptual system that finds it easy to think about
link |
00:53:11.380
a modularized system and easy to think about
link |
00:53:13.680
a completely undifferentiated system.
link |
00:53:15.500
But something that kind of lies in between is confusing.
link |
00:53:19.980
But we're gonna have to get used to it, I think.
link |
00:53:21.860
Unless we can understand deeply the lower level mechanism
link |
00:53:24.660
of neuronal communication.
link |
00:53:25.860
Yeah, yeah.
link |
00:53:26.760
But on that topic, you kind of mentioned information.
link |
00:53:29.660
Just to get a sense, I imagine it's something
link |
00:53:31.860
that there's still mystery and disagreement on
link |
00:53:34.620
is how does the brain carry information and signal?
link |
00:53:38.060
Like what in your sense is the basic mechanism
link |
00:53:43.380
of communication in the brain?
link |
00:53:46.420
Well, I guess I'm old fashioned in that I consider
link |
00:53:52.020
the networks that we use in deep learning research
link |
00:53:54.340
to be a reasonable approximation to, you know,
link |
00:53:59.080
the mechanisms that carry information in the brain.
link |
00:54:02.500
So the usual way of articulating that is to say,
link |
00:54:06.180
what really matters is a rate code.
link |
00:54:08.540
What matters is how quickly is an individual neuron spiking?
link |
00:54:14.580
You know, what's the frequency at which it's spiking?
link |
00:54:16.380
Is that right?
link |
00:54:17.200
So the timing of the spike.
link |
00:54:18.040
Yeah, is it firing fast or slow?
link |
00:54:20.340
Let's, you know, let's put a number on that.
link |
00:54:22.740
And that number is enough to capture
link |
00:54:24.380
what neurons are doing.
link |
00:54:26.140
There's, you know, there's still uncertainty
link |
00:54:30.620
about whether that's an adequate description
link |
00:54:34.500
of how information is transmitted within the brain.
link |
00:54:39.880
There, you know, there are studies that suggest
link |
00:54:42.820
that the precise timing of spikes matters.
link |
00:54:46.060
There are studies that suggest that there are computations
link |
00:54:50.660
that go on within the dendritic tree, within a neuron,
link |
00:54:54.520
that are quite rich and structured
link |
00:54:57.100
and that really don't equate to anything that we're doing
link |
00:54:59.980
in our artificial neural networks.
link |
00:55:02.820
Having said that, I feel like we can get,
link |
00:55:05.360
I feel like we're getting somewhere
link |
00:55:08.260
by sticking to this high level of abstraction.
link |
00:55:11.620
Just the rate, and by the way,
link |
00:55:13.380
we're talking about the electrical signal.
link |
00:55:16.220
I remember reading some vague paper somewhere recently
link |
00:55:20.060
where the mechanical signal, like the vibrations
link |
00:55:23.420
or something of the neurons, also communicates information.
link |
00:55:28.820
I haven't seen that, but.
link |
00:55:30.260
There's somebody who was arguing
link |
00:55:32.100
that the electrical signal, this is in a Nature paper,
link |
00:55:36.840
something like that, where the electrical signal
link |
00:55:38.780
is actually a side effect of the mechanical signal.
link |
00:55:43.740
But I don't think that changes the story.
link |
00:55:46.100
But it's almost an interesting idea
link |
00:55:49.060
that there could be a deeper, it's always like in physics
link |
00:55:52.420
with quantum mechanics, there's always a deeper story
link |
00:55:55.740
that could be underlying the whole thing.
link |
00:55:57.500
But you think it's basically the rate of spiking
link |
00:56:00.540
that gets us, that's like the lowest hanging fruit
link |
00:56:02.820
that can get us really far.
link |
00:56:04.060
This is a classical view.
link |
00:56:06.580
I mean, this is not, the only way in which this stance
link |
00:56:10.700
would be controversial is in the sense
link |
00:56:13.580
that there are members of the neuroscience community
link |
00:56:17.100
who are interested in alternatives.
link |
00:56:18.820
But this is really a very mainstream view.
link |
00:56:21.400
The way that neurons communicate
link |
00:56:22.940
is that neurotransmitters arrive,
link |
00:56:30.180
they wash up on a neuron, the neuron has receptors
link |
00:56:34.500
for those transmitters, the meeting of the transmitter
link |
00:56:39.040
with these receptors changes the voltage of the neuron.
link |
00:56:42.340
And if enough voltage change occurs, then a spike occurs,
link |
00:56:46.860
one of these like discrete events.
link |
00:56:48.660
And it's that spike that is conducted down the axon
link |
00:56:52.300
and leads to neurotransmitter release.
link |
00:56:54.580
This is just like neuroscience 101.
link |
00:56:56.860
This is like the way the brain is supposed to work.
link |
00:56:59.300
Now, what we do when we build artificial neural networks
link |
00:57:03.660
of the kind that are now popular in the AI community
link |
00:57:08.060
is that we don't worry about those individual spikes.
link |
00:57:11.780
We just worry about the frequency
link |
00:57:14.220
at which those spikes are being generated.
link |
00:57:16.980
And people talk about that as the activity of a neuron.
link |
00:57:22.340
And so the activity of units in a deep learning system
link |
00:57:27.180
is broadly analogous to the spike rate of a neuron.
link |
00:57:32.900
There are people who believe that there are other forms
link |
00:57:38.020
of communication in the brain.
link |
00:57:39.180
In fact, I've been involved in some research recently
link |
00:57:41.260
that suggests that the voltage fluctuations
link |
00:57:46.260
that occur in populations of neurons
link |
00:57:49.260
that are sort of below the level of spike production
link |
00:57:54.860
may be important for communication.
link |
00:57:57.220
But I'm still pretty old school in the sense
link |
00:58:00.220
that I think that the things that we're building
link |
00:58:02.700
in AI research constitute reasonable models
link |
00:58:06.980
of how a brain would work.
link |
00:58:10.300
Let me ask just for fun a crazy question, because I can.
link |
00:58:14.220
Do you think it's possible we're completely wrong
link |
00:58:17.020
about the way this basic mechanism
link |
00:58:20.060
of neuronal communication, that the information
link |
00:58:23.700
is stored in some very different kind of way in the brain?
link |
00:58:26.340
Oh, heck yes.
link |
00:58:27.580
I mean, look, I wouldn't be a scientist
link |
00:58:29.900
if I didn't think there was any chance we were wrong.
link |
00:58:32.500
But I mean, if you look at the history
link |
00:58:36.420
of deep learning research as it's been applied
link |
00:58:39.900
to neuroscience, of course the vast majority
link |
00:58:42.620
of deep learning research these days isn't about neuroscience.
link |
00:58:45.380
But if you go back to the 1980s,
link |
00:58:49.060
there's sort of an unbroken chain of research
link |
00:58:52.740
in which a particular strategy is taken,
link |
00:58:54.940
which is, hey, let's train a deep learning system.
link |
00:59:00.180
Let's train a multi layer neural network
link |
00:59:04.060
on this task that we trained our rat on,
link |
00:59:09.260
or our monkey on, or this human being on.
link |
00:59:12.300
And then let's look at what the units
link |
00:59:15.700
deep in the system are doing.
link |
00:59:17.700
And let's ask whether what they're doing
link |
00:59:20.780
resembles what we know about what neurons
link |
00:59:23.260
deep in the brain are doing.
link |
00:59:24.620
And over and over and over and over,
link |
00:59:28.540
that strategy works in the sense that
link |
00:59:32.020
the learning algorithms that we have access to,
link |
00:59:34.340
which typically center on back propagation,
link |
00:59:37.740
they give rise to patterns of activity,
link |
00:59:42.060
patterns of response,
link |
00:59:45.220
patterns of neuronal behavior in these artificial models
link |
00:59:48.740
that look hauntingly similar to what you see in the brain.
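One common way that resemblance gets quantified is representational similarity analysis: compare the geometry of the model's hidden-layer responses with the geometry of recorded neural responses across the same stimuli. The sketch below uses simulated data throughout; it only illustrates the shape of the comparison, not any specific study's result.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)

# Pretend we showed the same 50 stimuli to a trained network (recording a
# hidden layer) and to an animal (recording 60 neurons). Both codes are
# simulated here as noisy projections of a shared 10-dimensional structure.
n_stimuli = 50
latent = rng.normal(size=(n_stimuli, 10))
model_units = latent @ rng.normal(size=(10, 128))
neurons = (latent @ rng.normal(size=(10, 60))
           + 0.5 * rng.normal(size=(n_stimuli, 60)))

# Compare the *geometry* of the two codes (which stimuli they treat as
# similar) rather than matching individual units to individual neurons.
model_rdm = pdist(model_units, metric="correlation")
brain_rdm = pdist(neurons, metric="correlation")
rho, _ = spearmanr(model_rdm, brain_rdm)
print(f"model-brain representational similarity: rho = {rho:.2f}")
```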
link |
00:59:53.660
And is that a coincidence?
link |
00:59:57.380
At a certain point, it starts looking like such coincidence
link |
01:00:00.780
is unlikely not to be deeply meaningful, yeah.
link |
01:00:03.340
Yeah, the circumstantial evidence is overwhelming.
link |
01:00:07.140
But it could be.
link |
01:00:07.980
But you're always open to totally flipping the table.
link |
01:00:10.460
Hey, of course.
link |
01:00:11.620
So you have coauthored several recent papers
link |
01:00:15.140
that sort of weave beautifully between the world
link |
01:00:17.860
of neuroscience and artificial intelligence.
link |
01:00:20.660
And maybe if we could, can we just try to dance around
link |
01:00:26.380
and talk about some of them?
link |
01:00:27.500
Maybe try to pick out interesting ideas
link |
01:00:29.740
that jump to your mind from memory.
link |
01:00:32.300
So maybe looking at, we were talking about
link |
01:00:34.300
the prefrontal cortex, the 2018, I believe, paper
link |
01:00:38.220
called the Prefrontal Cortex
link |
01:00:40.060
as a Meta Reinforcement Learning System.
link |
01:00:42.140
What, is there a key idea
link |
01:00:44.340
that you can speak to from that paper?
link |
01:00:47.660
Yeah, I mean, the key idea is about meta learning.
link |
01:00:53.860
What is meta learning?
link |
01:00:54.860
Meta learning is, by definition,
link |
01:01:00.940
a situation in which you have a learning algorithm
link |
01:01:06.100
and the learning algorithm operates in such a way
link |
01:01:09.780
that it gives rise to another learning algorithm.
link |
01:01:14.060
In the earliest applications of this idea,
link |
01:01:17.140
you had one learning algorithm sort of adjusting
link |
01:01:20.340
the parameters on another learning algorithm.
link |
01:01:23.060
But the case that we're interested in this paper
link |
01:01:25.100
is one where you start with just one learning algorithm
link |
01:01:29.140
and then another learning algorithm kind of emerges
link |
01:01:33.020
out of thin air.
link |
01:01:35.180
I can say more about what I mean by that.
link |
01:01:36.700
I don't mean to be obscurantist,
link |
01:01:39.780
but that's the idea of meta learning.
link |
01:01:44.140
It relates to the old idea in psychology
link |
01:01:46.020
of learning to learn.
link |
01:01:49.380
Situations where you have experiences
link |
01:01:54.300
that make you better at learning something new.
link |
01:01:57.980
A familiar example would be learning a foreign language.
link |
01:02:01.380
The first time you learn a foreign language,
link |
01:02:02.860
it may be quite laborious and disorienting
link |
01:02:06.420
and novel, but let's say you've learned
link |
01:02:10.300
two foreign languages.
link |
01:02:12.220
The third foreign language, obviously,
link |
01:02:14.140
is gonna be much easier to pick up.
link |
01:02:15.940
And why?
link |
01:02:16.780
Because you've learned how to learn.
link |
01:02:18.220
You know how this goes.
link |
01:02:20.220
You know, okay, I'm gonna have to learn how to conjugate.
link |
01:02:22.140
I'm gonna have to...
link |
01:02:23.940
That's a simple form of meta learning
link |
01:02:26.340
in the sense that there's some slow learning mechanism
link |
01:02:30.260
that's helping you kind of update
link |
01:02:33.020
your fast learning mechanism.
link |
01:02:34.300
Does that make sense?
link |
01:02:35.660
So from our understanding in the psychology world,
link |
01:02:40.540
from neuroscience, of
link |
01:02:43.180
how meta learning might work in the human brain,
link |
01:02:47.180
what lessons can we draw from that
link |
01:02:49.980
that we can bring into the artificial intelligence world?
link |
01:02:53.060
Well, yeah, so the origin of that paper
link |
01:02:55.980
was in AI work that we were doing in my group.
link |
01:03:00.180
We were looking at what happens
link |
01:03:03.700
when you train a recurrent neural network
link |
01:03:06.260
using standard reinforcement learning algorithms.
link |
01:03:10.180
But you train that network, not just in one task,
link |
01:03:12.660
but you train it in a bunch of interrelated tasks.
link |
01:03:15.140
And then you ask what happens when you give it
link |
01:03:18.700
yet another task in that sort of line of interrelated tasks.
link |
01:03:23.380
And what we started to realize is that
link |
01:03:29.380
a form of meta learning spontaneously happens
link |
01:03:31.860
in recurrent neural networks.
link |
01:03:33.780
And the simplest way to explain it is to say
link |
01:03:39.540
a recurrent neural network has a kind of memory
link |
01:03:43.500
in its activation patterns.
link |
01:03:45.340
It's recurrent by definition in the sense
link |
01:03:47.540
that you have units that connect to other units,
link |
01:03:50.180
that connect to other units.
link |
01:03:51.060
So you have sort of loops of connectivity,
link |
01:03:53.660
which allows activity to stick around
link |
01:03:55.740
and be updated over time.
link |
01:03:57.380
In psychology, in neuroscience,
link |
01:03:59.020
we call this working memory.
link |
01:04:00.100
It's like actively holding something in mind.
link |
01:04:04.260
And so that memory gives
link |
01:04:09.260
the recurrent neural network a dynamics, right?
link |
01:04:13.100
The way that the activity pattern evolves over time
link |
01:04:17.700
is inherent to the connectivity
link |
01:04:19.980
of the recurrent neural network, okay?
link |
01:04:21.580
So that's idea number one.
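Idea number one can be stated in a few lines of code. In the sketch below (a toy numpy network, with the recurrent weight matrix chosen arbitrarily to be stable), a cue presented at the first time step persists in the activity pattern for many steps after the input is gone, which is the activation-based memory being described.

```python
import numpy as np

rng = np.random.default_rng(0)

# A toy recurrent network: with a loop of connectivity, an input presented
# once can stick around in the activity pattern long after it is gone.
n = 8
W = 0.95 * np.linalg.qr(rng.normal(size=(n, n)))[0]  # stable recurrent weights

h = np.zeros(n)
cue = rng.normal(size=n)
for t in range(20):
    x = cue if t == 0 else np.zeros(n)   # the cue appears only at t = 0
    h = np.tanh(W @ h + x)               # h_t = f(W h_{t-1} + x_t)
    print(f"t={t:2d}  activity norm = {np.linalg.norm(h):.3f}")
```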
link |
01:04:23.500
Now, the dynamics of that network are shaped
link |
01:04:26.020
by the connectivity, by the synaptic weights.
link |
01:04:29.660
And those synaptic weights are being shaped
link |
01:04:31.660
by this reinforcement learning algorithm
link |
01:04:33.860
that you're training the network with.
link |
01:04:37.700
So the punchline is if you train a recurrent neural network
link |
01:04:41.260
with a reinforcement learning algorithm
link |
01:04:43.140
that's adjusting its weights,
link |
01:04:44.180
and you do that for long enough,
link |
01:04:47.060
the activation dynamics will become very interesting, right?
link |
01:04:50.860
So imagine I give you a task
link |
01:04:53.180
where you have to press one button or another,
link |
01:04:56.060
left button or right button.
link |
01:04:57.580
And there's some probability
link |
01:05:00.820
that I'm gonna give you an M&M
link |
01:05:02.260
if you press the left button,
link |
01:05:04.220
and there's some probability I'll give you an M&M
link |
01:05:06.220
if you press the other button.
link |
01:05:07.620
And you have to figure out what those probabilities are
link |
01:05:09.340
just by trying things out.
link |
01:05:12.060
But as I said before,
link |
01:05:13.780
instead of just giving you one of these tasks,
link |
01:05:15.500
I give you a whole sequence.
link |
01:05:17.020
You know, I give you two buttons
link |
01:05:18.700
and you figure out which one's best.
link |
01:05:19.860
And I go, good job, here's a new box.
link |
01:05:22.180
Two new buttons, you have to figure out which one's best.
link |
01:05:24.100
Good job, here's a new box.
link |
01:05:25.420
And every box has its own probabilities
link |
01:05:27.340
and you have to figure it out.
link |
01:05:28.300
So if you train a recurrent neural network
link |
01:05:30.420
on that kind of sequence of tasks,
link |
01:05:33.700
what happens, it seemed almost magical to us
link |
01:05:37.380
when we first started kind of realizing what was going on.
link |
01:05:41.180
The slow learning algorithm that's adjusting
link |
01:05:43.620
the synaptic weights,
link |
01:05:46.980
those slow synaptic changes give rise to a network dynamics
link |
01:05:51.380
such that, you know,
link |
01:05:53.020
the dynamics themselves turn into a learning algorithm.
link |
01:05:56.860
So in other words, you can tell this is happening
link |
01:05:59.060
by just freezing the synaptic weights saying,
link |
01:06:01.020
okay, no more learning, you're done.
link |
01:06:03.460
Here's a new box, figure out which button is best.
link |
01:06:07.620
And the recurrent neural network will do this just fine.
link |
01:06:09.620
It just figures out which button is best.
link |
01:06:13.060
It kind of transitions from exploring the two buttons
link |
01:06:16.700
to just pressing the one that it likes best
link |
01:06:18.380
in a very rational way.
link |
01:06:20.700
How is that happening?
link |
01:06:21.660
It's happening because the activity dynamics
link |
01:06:24.700
of the network have been shaped by the slow learning process
link |
01:06:28.460
that's occurred over many, many boxes.
link |
01:06:30.660
And so what's happened is that this slow learning algorithm
link |
01:06:34.660
that's slowly adjusting the weights
link |
01:06:37.140
is changing the dynamics of the network,
link |
01:06:39.740
the activity dynamics into its own learning algorithm.
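A minimal sketch of that experiment, using PyTorch and plain REINFORCE as a stand-in for the actor-critic training actually used in the paper (hidden size, learning rate, and episode counts here are arbitrary): the slow loop adjusts the LSTM's weights across many sampled bandits, and at test time the weights are frozen, so any explore-then-exploit behavior on a new bandit has to come from the activation dynamics alone.

```python
import numpy as np
import torch
import torch.nn as nn

HIDDEN = 48

class MetaRLAgent(nn.Module):
    # An LSTM policy that sees only its previous action and reward.
    def __init__(self):
        super().__init__()
        self.core = nn.LSTMCell(3, HIDDEN)   # input: one-hot action (2) + reward (1)
        self.policy = nn.Linear(HIDDEN, 2)   # logits over the two buttons

    def forward(self, x, state):
        h, c = self.core(x, state)
        return self.policy(h), (h, c)

def run_episode(agent, probs, steps=100):
    # One "box": a fresh two-armed bandit with the given reward probabilities.
    state = (torch.zeros(1, HIDDEN), torch.zeros(1, HIDDEN))
    x = torch.zeros(1, 3)
    log_probs, rewards = [], []
    for _ in range(steps):
        logits, state = agent(x, state)
        dist = torch.distributions.Categorical(logits=logits)
        action = dist.sample()
        reward = float(np.random.rand() < probs[action.item()])
        log_probs.append(dist.log_prob(action))
        rewards.append(reward)
        x = torch.zeros(1, 3)            # feed back what was done and what it paid
        x[0, action.item()] = 1.0
        x[0, 2] = reward
    return torch.cat(log_probs), torch.tensor(rewards)

agent = MetaRLAgent()
opt = torch.optim.Adam(agent.parameters(), lr=1e-3)

# Slow loop: REINFORCE slowly adjusts the synaptic weights across many boxes.
for _ in range(2000):
    p = np.random.rand()
    log_probs, rewards = run_episode(agent, probs=[p, 1.0 - p])
    returns = torch.flip(torch.cumsum(torch.flip(rewards, [0]), 0), [0])
    loss = -(log_probs * (returns - returns.mean())).sum()
    opt.zero_grad(); loss.backward(); opt.step()

# Fast loop: freeze the weights. Exploring and then exploiting a brand-new
# bandit now has to come from the activation dynamics alone.
with torch.no_grad():
    _, rewards = run_episode(agent, probs=[0.9, 0.1])
    print("mean reward on an unseen bandit:", rewards.mean().item())
```

The only thing the agent is ever given is its own previous action and reward; the weight updates stop entirely at test time, so the "fast" learning visible on the unseen bandit lives entirely in the recurrent state.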
link |
01:06:43.460
And as we were kind of realizing that this is a thing,
link |
01:06:51.340
it just so happened that the group that was working on this
link |
01:06:53.740
included a bunch of neuroscientists
link |
01:06:56.020
and it started kind of ringing a bell for us,
link |
01:06:59.900
which is to say that we thought this sounds a lot
link |
01:07:02.860
like the distinction between synaptic learning
link |
01:07:06.180
and activity, synaptic memory
link |
01:07:08.460
and activity based memory in the brain.
link |
01:07:11.700
And it also reminded us of recurrent connectivity
link |
01:07:15.900
that's very characteristic of prefrontal function.
link |
01:07:19.620
So this is kind of why it's good to have people working
link |
01:07:22.820
on AI that know a little bit about neuroscience
link |
01:07:26.180
and vice versa, because we started thinking
link |
01:07:29.340
about whether we could apply this principle to neuroscience.
link |
01:07:32.340
And that's where the paper came from.
link |
01:07:33.660
So the kind of principle of the recurrence
link |
01:07:37.540
that you can see in the prefrontal cortex,
link |
01:07:39.540
then you start to realize that it's possible
link |
01:07:43.660
for something like an idea of a learning
link |
01:07:46.340
to learn emerging from this learning process
link |
01:07:50.860
as long as you keep varying the environment sufficiently.
link |
01:07:54.500
Exactly, so the kind of metaphorical transition
link |
01:07:59.300
we made to neuroscience was to think,
link |
01:08:00.740
okay, well, we know that the prefrontal cortex
link |
01:08:03.660
is highly recurrent.
link |
01:08:04.940
We know that it's an important locus for working memory
link |
01:08:08.500
for activation based memory.
link |
01:08:11.260
So maybe the prefrontal cortex
link |
01:08:13.660
supports reinforcement learning.
link |
01:08:15.620
In other words, what is reinforcement learning?
link |
01:08:19.260
You take an action, you see how much reward you got,
link |
01:08:21.620
you update your policy of behavior.
link |
01:08:24.580
Maybe the prefrontal cortex is doing that sort of thing
link |
01:08:26.860
strictly in its activation patterns.
link |
01:08:28.500
It's keeping around a memory in its activity patterns
link |
01:08:31.900
of what you did, how much reward you got,
link |
01:08:35.340
and it's using that activity based memory
link |
01:08:38.980
as a basis for updating behavior.
link |
01:08:41.100
But then the question is, well,
link |
01:08:42.180
how did the prefrontal cortex get so smart?
link |
01:08:44.540
In other words, where did these activity dynamics come from?
link |
01:08:48.020
How did that program that's implemented
link |
01:08:50.780
in the recurrent dynamics of the prefrontal cortex arise?
link |
01:08:54.460
And one answer that became evident in this work was,
link |
01:08:58.060
well, maybe the mechanisms that operate
link |
01:09:00.940
on the synaptic level, which we believe are mediated
link |
01:09:05.020
by dopamine, are responsible for shaping those dynamics.
link |
01:09:10.180
So this may be a silly question,
link |
01:09:12.420
but because this kind of several temporal sort of classes
link |
01:09:19.340
of learning are happening and the learning to learn
link |
01:09:23.020
emerges, can you keep building stacks of learning
link |
01:09:28.660
to learn to learn, learning to learn to learn
link |
01:09:30.940
to learn to learn because it keeps,
link |
01:09:32.900
I mean, basically building abstractions, more powerful abilities
link |
01:09:37.020
to generalize, to learn complex rules.
link |
01:09:41.140
Or is that overstretching this kind of mechanism?
link |
01:09:46.100
Well, one of the people in AI who started thinking
link |
01:09:51.260
about meta learning from very early on,
link |
01:09:54.700
Jürgen Schmidhuber sort of cheekily suggested,
link |
01:09:59.780
I think it may have been in his PhD thesis,
link |
01:10:03.900
that we should think about meta, meta, meta,
link |
01:10:06.900
meta, meta, meta learning.
link |
01:10:08.740
That's really what's gonna get us to true intelligence.
link |
01:10:13.140
Certainly there's a poetic aspect to it
link |
01:10:15.380
and it seems interesting and correct
link |
01:10:19.260
that that kind of levels of abstraction would be powerful,
link |
01:10:21.660
but is that something you see in the brain?
link |
01:10:23.940
This kind of, is it useful to think of learning
link |
01:10:27.780
in this meta, meta, meta way, or is it just meta learning?
link |
01:10:32.100
Well, one thing that really fascinated me
link |
01:10:35.300
about this mechanism that we were starting to look at,
link |
01:10:39.020
and other groups started talking
link |
01:10:41.100
about very similar things at the same time.
link |
01:10:44.740
And then a kind of explosion of interest
link |
01:10:47.020
in meta learning happened in the AI community
link |
01:10:48.980
shortly after that.
link |
01:10:50.580
I don't know if we had anything to do with that,
link |
01:10:52.060
but I was gratified to see that a lot of people
link |
01:10:55.620
started talking about meta learning.
link |
01:10:57.780
One of the things that I liked about the kind of flavor
link |
01:11:01.380
of meta learning that we were studying was that
link |
01:11:04.060
it didn't require anything special.
link |
01:11:05.940
It was just, if you took a system that had
link |
01:11:08.620
some form of memory, the function of which
link |
01:11:12.460
could be shaped by, pick your RL algorithm,
link |
01:11:16.860
then this would just happen, right?
link |
01:11:19.100
I mean, there are a lot of forms of,
link |
01:11:21.300
there are a lot of meta learning algorithms
link |
01:11:23.180
that have been proposed since then
link |
01:11:24.500
that are fascinating and effective
link |
01:11:26.580
in their domains of application.
link |
01:11:29.780
But they're engineered, they're things that somebody
link |
01:11:32.580
had to say, well, gee, if we wanted meta learning
link |
01:11:34.340
to happen, how would we do that?
link |
01:11:35.700
Here's an algorithm that would,
link |
01:11:37.060
but there's something about the kind of meta learning
link |
01:11:39.500
that we were studying that seemed to me special
link |
01:11:42.540
in the sense that it wasn't an algorithm.
link |
01:11:44.980
It was just something that automatically happened
link |
01:11:48.740
if you had a system that had memory
link |
01:11:51.060
and it was trained with a reinforcement learning algorithm.
link |
01:11:54.020
And in that sense, it can be as meta as it wants to be.
link |
01:11:59.020
There's no limit on how abstract the meta learning can get
link |
01:12:04.700
because it's not reliant on a human engineering
link |
01:12:07.980
a particular meta learning algorithm to get there.
link |
01:12:11.540
And that's, I also, I don't know,
link |
01:12:15.140
I guess I hope that that's relevant in the brain.
link |
01:12:17.820
I think there's a kind of beauty
link |
01:12:19.180
in the ability of this emergent.
link |
01:12:23.380
The emergent aspect of it, as opposed to engineered.
link |
01:12:26.460
Exactly, it's something that just, it just happens
link |
01:12:29.020
in a sense, in a sense, you can't avoid this happening.
link |
01:12:33.620
If you have a system that has memory
link |
01:12:35.820
and the function of that memory is shaped
link |
01:12:39.660
by reinforcement learning, and this system is trained
link |
01:12:42.740
in a series of interrelated tasks, this is gonna happen.
link |
01:12:46.900
You can't stop it.
link |
01:12:48.460
As long as you have certain properties,
link |
01:12:50.140
maybe like a recurrent structure to.
link |
01:12:52.540
You have to have memory.
link |
01:12:53.380
It actually doesn't have to be a recurrent neural network.
link |
01:12:55.220
A paper that I was honored to be involved
link |
01:12:58.740
with even earlier, used a kind of slot based memory.
link |
01:13:02.260
Do you remember the title?
link |
01:13:03.100
Just for people to understand.
link |
01:13:05.060
It was Memory Augmented Neural Networks.
link |
01:13:08.140
I think it was, I think the title was
link |
01:13:10.180
Meta Learning with Memory Augmented Neural Networks.
link |
01:13:14.660
And it was the same exact story.
link |
01:13:17.940
If you have a system with memory,
link |
01:13:21.100
here it was a different kind of memory,
link |
01:13:22.780
but the function of that memory is shaped
link |
01:13:26.860
by reinforcement learning.
link |
01:13:29.900
Here it was the reads and writes that occurred
link |
01:13:34.300
on this slot based memory.
link |
01:13:36.420
This will just happen.
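For readers who haven't met slot-based memory, here is a stripped-down sketch of the idea: an external matrix of slots, written explicitly and read by content-based addressing, that is, soft attention over slot similarity. This is a simplification for illustration, not the mechanism from the paper; in that line of work the reads and writes were themselves differentiable and shaped by training.

```python
import numpy as np

rng = np.random.default_rng(0)

# A minimal slot-based memory: a matrix of slots, read by content-based
# addressing (soft attention weighted by similarity to a query key).
n_slots, width = 16, 32
memory = np.zeros((n_slots, width))

def write(slot, vector):
    memory[slot] = vector                 # simplest possible write

def read(key):
    # cosine similarity between the key and every slot
    sims = memory @ key / (np.linalg.norm(memory, axis=1)
                           * np.linalg.norm(key) + 1e-8)
    weights = np.exp(5.0 * sims)
    weights /= weights.sum()              # softmax attention over slots
    return weights @ memory               # blended read-out

# Store two patterns, then retrieve one from a noisy cue.
a, b = rng.normal(size=width), rng.normal(size=width)
write(0, a); write(1, b)
noisy_cue = a + 0.3 * rng.normal(size=width)
recovered = read(noisy_cue)
print("closer to a than b:",
      np.linalg.norm(recovered - a) < np.linalg.norm(recovered - b))
```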
link |
01:13:39.940
But this brings us back to something I was saying earlier
link |
01:13:42.060
about the importance of the environment.
link |
01:13:46.340
This will happen if the system is being trained
link |
01:13:49.940
in a setting where there's like a sequence of tasks
link |
01:13:53.060
that all share some abstract structure.
link |
01:13:56.100
Sometimes we talk about task distributions.
link |
01:13:59.020
And that's something that's very obviously true
link |
01:14:04.180
of the world that humans inhabit.
link |
01:14:09.500
Like if you just kind of think about what you do every day,
link |
01:14:13.140
you never do exactly the same thing
link |
01:14:16.280
that you did the day before.
link |
01:14:17.640
But everything that you do sort of has a family resemblance.
link |
01:14:21.060
It shares a structure with something that you did before.
link |
01:14:23.500
And so the real world is sort of
link |
01:14:29.260
saturated with this kind of, this property.
link |
01:14:32.700
It's endless variety with endless redundancy.
link |
01:14:37.540
And that's the setting in which
link |
01:14:38.700
this kind of meta learning happens.
link |
01:14:40.540
And it does seem like we're just so good at finding,
link |
01:14:44.980
just like in this emergent phenomena you described,
link |
01:14:47.820
we're really good at finding that redundancy,
link |
01:14:50.020
finding those similarities, the family resemblance.
link |
01:14:53.480
Some people call it sort of, what is it?
link |
01:14:56.560
Melanie Mitchell was talking about analogies.
link |
01:14:59.180
So we're able to connect concepts together
link |
01:15:01.940
in this kind of way,
link |
01:15:03.860
in this same kind of automated emergent way,
link |
01:15:06.020
which there's so many echoes here
link |
01:15:08.620
of psychology and neuroscience.
link |
01:15:10.640
And obviously now with reinforcement learning
link |
01:15:15.300
with recurrent neural networks at the core.
link |
01:15:18.260
If we could talk a little bit about dopamine,
link |
01:15:20.180
you were a part of coauthoring
link |
01:15:23.780
a really exciting recent paper, very recent,
link |
01:15:26.420
in terms of release on dopamine
link |
01:15:28.900
and temporal difference learning.
link |
01:15:31.040
Can you describe the key ideas of that paper?
link |
01:15:34.820
Sure, yeah.
link |
01:15:35.660
I mean, one thing I want to pause to do
link |
01:15:37.740
is acknowledge my coauthors
link |
01:15:39.460
on actually both of the papers we're talking about.
link |
01:15:41.540
So this dopamine paper.
link |
01:15:42.660
I'll just, I'll certainly post all their names.
link |
01:15:45.700
Okay, wonderful.
link |
01:15:46.540
Yeah, because I'm sort of abashed
link |
01:15:49.300
to be the spokesperson for these papers
link |
01:15:51.000
when I had such amazing collaborators on both.
link |
01:15:55.180
So it's a comfort to me to know
link |
01:15:56.980
that you'll acknowledge them.
link |
01:15:58.580
Yeah, there's an incredible team there, but yeah.
link |
01:16:00.420
Oh yeah, it's such a, it's so much fun.
link |
01:16:03.080
And in the case of the dopamine paper,
link |
01:16:06.360
we also collaborated with Nao Uchida at Harvard,
link |
01:16:09.020
who, you know, obviously the paper simply
link |
01:16:11.180
wouldn't have happened without him.
link |
01:16:12.620
But so you were asking for like a thumbnail sketch of.
link |
01:16:17.540
Yeah, thumbnail sketch or key ideas or, you know,
link |
01:16:20.820
things, the insights that are, you know,
link |
01:16:22.500
continuing on our kind of discussion here
link |
01:16:24.780
between neuroscience and AI.
link |
01:16:26.900
Yeah, I mean, this was another,
link |
01:16:28.860
a lot of the work that we've done so far
link |
01:16:30.620
is taking ideas that have bubbled up in AI
link |
01:16:35.380
and, you know, asking the question of whether the brain
link |
01:16:39.660
might be doing something related,
link |
01:16:41.460
which I think on the surface sounds like something
link |
01:16:45.420
that's really mainly of use to neuroscience.
link |
01:16:49.380
We see it also as a way of validating
link |
01:16:53.600
what we're doing on the AI side.
link |
01:16:55.320
If we can gain some evidence that the brain
link |
01:16:57.940
is using some technique that we've been trying out
link |
01:17:01.760
in our AI work, that gives us confidence
link |
01:17:05.500
that, you know, it may be a good idea,
link |
01:17:07.780
that it'll, you know, scale to rich, complex tasks,
link |
01:17:11.560
that it'll interface well with other mechanisms.
link |
01:17:14.840
So you see it as a two way road.
link |
01:17:16.860
Yeah, for sure. Just because a particular paper
link |
01:17:18.520
is a little bit focused on one direction,
link |
01:17:21.140
from AI, from neural networks to neuroscience.
link |
01:17:25.620
Ultimately the discussion, the thinking,
link |
01:17:28.380
the productive longterm aspect of it
link |
01:17:30.840
is the two way road nature of the whole interaction.
link |
01:17:33.220
Yeah, I mean, we've talked about the notion
link |
01:17:36.260
of a virtuous circle between AI and neuroscience.
link |
01:17:39.300
And, you know, the way I see it,
link |
01:17:42.660
that's always been there since the two fields,
link |
01:17:47.460
you know, jointly existed.
link |
01:17:50.100
There have been some phases in that history
link |
01:17:52.140
when AI was sort of ahead.
link |
01:17:53.540
There are some phases when neuroscience was sort of ahead.
link |
01:17:56.340
I feel like given the burst of innovation
link |
01:18:00.660
that's happened recently on the AI side,
link |
01:18:03.780
AI is kind of ahead in the sense that
link |
01:18:06.320
there are all of these ideas that we, you know,
link |
01:18:10.620
for which it's exciting to consider
link |
01:18:12.660
that there might be neural analogs.
link |
01:18:16.100
And neuroscience, you know,
link |
01:18:19.620
in a sense has been focusing on approaches
link |
01:18:22.420
to studying behavior, you know,
link |
01:18:24.860
that are kind of derived from this earlier era
link |
01:18:27.540
of cognitive psychology.
link |
01:18:29.620
And, you know, so in some ways fail to connect
link |
01:18:33.540
with some of the issues that we're grappling with in AI.
link |
01:18:36.700
Like how do we deal with, you know,
link |
01:18:37.940
large, you know, complex environments.
link |
01:18:41.560
But, you know, I think it's inevitable
link |
01:18:45.220
that this circle will keep turning
link |
01:18:47.920
and there will be a moment
link |
01:18:49.540
in the not too distant future
link |
01:18:51.300
when neuroscience is pelting AI researchers
link |
01:18:54.640
with insights that may change the direction of our work.
link |
01:18:58.260
Just a quick human question.
link |
01:19:00.940
You have parts of your brain,
link |
01:19:05.460
this is very meta, that are able to both think
link |
01:19:08.260
about neuroscience and AI.
link |
01:19:10.300
You know, I don't often meet people like that.
link |
01:19:14.220
So do you think, let me ask a meta plasticity question.
link |
01:19:19.780
Do you think a human being can be both good at AI
link |
01:19:22.660
and neuroscience?
link |
01:19:23.580
It's like what, on the team at DeepMind,
link |
01:19:26.500
what kind of human can occupy these two realms?
link |
01:19:30.180
And is that something you see everybody should be doing,
link |
01:19:33.340
can be doing, or is that a very special few
link |
01:19:36.620
can kind of jump?
link |
01:19:37.460
Just like we talk about art history,
link |
01:19:39.180
I would think it's a special person
link |
01:19:41.020
that can major in art history
link |
01:19:43.620
and also consider being a surgeon.
link |
01:19:46.860
Otherwise known as a dilettante.
link |
01:19:48.380
A dilettante, yeah.
link |
01:19:50.140
Easily distracted.
link |
01:19:52.100
No, I think it does take a special kind of person
link |
01:19:58.620
to be truly world class at both AI and neuroscience.
link |
01:20:02.660
And I am not on that list.
link |
01:20:05.940
I happen to be someone whose interest in neuroscience
link |
01:20:10.300
and psychology involved using the kinds
link |
01:20:15.940
of modeling techniques that are now very central in AI.
link |
01:20:20.940
And that sort of, I guess, bought me a ticket
link |
01:20:24.140
to be involved in all of the amazing things
link |
01:20:26.500
that are going on in AI research right now.
link |
01:20:29.500
I do know a few people who I would consider
link |
01:20:32.660
pretty expert on both fronts,
link |
01:20:34.780
and I won't embarrass them by naming them,
link |
01:20:36.260
but there are exceptional people out there
link |
01:20:40.540
who are like this.
link |
01:20:41.380
The one thing that I find is a barrier
link |
01:20:45.900
to being truly world class on both fronts
link |
01:20:49.300
is just the complexity of the technology
link |
01:20:54.980
that's involved in both disciplines now.
link |
01:20:58.180
So the engineering expertise that it takes
link |
01:21:02.980
to do truly frontline, hands on AI research
link |
01:21:07.860
is really, really considerable.
link |
01:21:10.620
The learning curve of the tools,
link |
01:21:11.940
just like the specifics of just whether it's programming
link |
01:21:15.260
or the kind of tools necessary to collect the data,
link |
01:21:17.500
to manage the data, to distribute, to compute,
link |
01:21:19.780
all that kind of stuff.
link |
01:21:20.780
And on the neuroscience, I guess, side,
link |
01:21:22.380
there'll be all different sets of tools.
link |
01:21:24.580
Exactly, especially with the recent explosion
link |
01:21:26.820
in neuroscience methods.
link |
01:21:28.980
So having said all that,
link |
01:21:32.100
I think the best scenario for both neuroscience
link |
01:21:39.860
and AI is to have people interacting
link |
01:21:44.860
who live at every point on this spectrum
link |
01:21:48.140
from exclusively focused on neuroscience
link |
01:21:51.900
to exclusively focused on the engineering side of AI.
link |
01:21:55.540
But to have those people inhabiting a community
link |
01:22:01.060
where they're talking to people who live elsewhere
link |
01:22:03.740
on the spectrum.
link |
01:22:04.820
And I may be someone who's very close to the center
link |
01:22:08.660
in the sense that I have one foot in the neuroscience world
link |
01:22:12.180
and one foot in the AI world,
link |
01:22:14.020
and that central position, I will admit,
link |
01:22:17.220
prevents me, at least someone
link |
01:22:19.060
with my limited cognitive capacity,
link |
01:22:21.300
from having true technical expertise in either domain.
link |
01:22:26.820
But at the same time, I at least hope
link |
01:22:30.140
that it's worthwhile having people around
link |
01:22:32.340
who can kind of see the connections.
link |
01:22:34.980
Yeah, the community, the emergent intelligence
link |
01:22:39.100
of the community when it's nicely distributed is useful.
link |
01:22:43.300
Exactly, yeah.
link |
01:22:44.580
So hopefully that, I mean, I've seen that work,
link |
01:22:46.620
I've seen that work out well at DeepMind.
link |
01:22:48.420
There are people who, I mean, even if you just focus
link |
01:22:52.860
on the AI work that happens at DeepMind,
link |
01:22:55.820
it's been a good thing to have some people around
link |
01:22:59.540
doing that kind of work whose PhDs are in neuroscience
link |
01:23:03.260
or psychology.
link |
01:23:04.780
Every academic discipline has its kind of blind spots
link |
01:23:09.780
and kind of unfortunate obsessions and its metaphors
link |
01:23:16.820
and its reference points,
link |
01:23:18.260
and having some intellectual diversity is really healthy.
link |
01:23:24.020
People get each other unstuck, I think.
link |
01:23:28.420
I see it all the time at DeepMind.
link |
01:23:30.620
And I like to think that the people
link |
01:23:33.060
who bring some neuroscience background to the table
link |
01:23:35.940
are helping with that.
link |
01:23:37.460
So probably the deepest passion for me,
link |
01:23:41.420
what I would say, maybe we kind of spoke off mic
link |
01:23:44.140
a little bit about it, but that I think is a blind spot
link |
01:23:49.460
for at least robotics and AI folks
link |
01:23:51.380
is human robot interaction, human agent interaction.
link |
01:23:55.540
Maybe do you have thoughts about how we reduce the size
link |
01:24:01.860
of that blind spot?
link |
01:24:02.980
Do you also share the feeling that not enough folks
link |
01:24:07.460
are studying this aspect of interaction?
link |
01:24:10.260
Well, I'm actually pretty intensively interested
link |
01:24:14.540
in this issue now, and there are people in my group
link |
01:24:17.060
who've actually pivoted pretty hard over the last few years
link |
01:24:20.940
from doing more traditional cognitive psychology
link |
01:24:24.180
and cognitive neuroscience to doing experimental work
link |
01:24:28.060
on human agent interaction.
link |
01:24:30.220
And there are a couple of reasons that I'm
link |
01:24:33.700
pretty passionately interested in this.
link |
01:24:35.500
One is it's kind of the outcome of having thought
link |
01:24:42.460
for a few years now about what we're up to.
link |
01:24:46.900
Like what are we doing?
link |
01:24:49.340
Like what is this AI research for?
link |
01:24:53.420
So what does it mean to make the world a better place?
link |
01:24:57.020
I think I'm pretty sure that means making life better
link |
01:24:59.740
for humans.
link |
01:25:02.620
And so how do you make life better for humans?
link |
01:25:05.820
That's a proposition that when you look at it carefully
link |
01:25:10.540
and honestly is rather horrendously complicated,
link |
01:25:15.860
especially when the AI systems
link |
01:25:18.820
that you're building are learning systems.
link |
01:25:25.220
They're not, you're not programming something
link |
01:25:29.060
that you then introduce to the world
link |
01:25:31.420
and it just works as programmed,
link |
01:25:33.140
like Google Maps or something.
link |
01:25:36.500
We're building systems that learn from experience.
link |
01:25:39.700
So that typically leads to AI safety questions.
link |
01:25:43.500
How do we keep these things from getting out of control?
link |
01:25:45.420
How do we keep them from doing things that harm humans?
link |
01:25:49.060
And I mean, I hasten to say,
link |
01:25:51.820
I consider those hugely important issues.
link |
01:25:54.500
And there are large sectors of the research community
link |
01:25:58.900
at DeepMind and of course elsewhere
link |
01:26:00.780
who are dedicated to thinking hard all day,
link |
01:26:03.460
every day about that.
link |
01:26:04.980
But there's, I guess I would say a positive side to this too
link |
01:26:09.620
which is to say, well, what would it mean
link |
01:26:13.300
to make human life better?
link |
01:26:15.900
And how can we imagine learning systems doing that?
link |
01:26:21.180
And in talking to my colleagues about that,
link |
01:26:23.500
we reached the initial conclusion
link |
01:26:25.700
that it's not sufficient to philosophize about that.
link |
01:26:30.100
You actually have to take into account
link |
01:26:32.060
how humans actually work and what humans want
link |
01:26:37.860
and the difficulties of knowing what humans want
link |
01:26:41.740
and the difficulties that arise
link |
01:26:43.780
when humans want different things.
link |
01:26:47.380
And so human agent interaction has become,
link |
01:26:50.900
a quite intensive focus of my group lately.
link |
01:26:56.460
If for no other reason than that,
link |
01:26:59.020
in order to really address that issue in an adequate way,
link |
01:27:04.660
you have to, I mean, psychology becomes part of the picture.
link |
01:27:07.340
Yeah, and so there's a few elements there.
link |
01:27:10.380
So, like,
link |
01:27:12.900
if you focus on the robotics problem,
link |
01:27:14.700
let's say AGI without humans in the picture
link |
01:27:18.140
you're missing fundamentally the final step.
link |
01:27:22.300
When you do want to help human civilization,
link |
01:27:24.580
you eventually have to interact with humans.
link |
01:27:27.340
And when you create a learning system, just as you said,
link |
01:27:31.380
that will eventually have to interact with humans,
link |
01:27:34.340
the interaction itself
link |
01:27:37.900
has to become part of the learning process.
link |
01:27:40.780
So you can't just watch, well, my sense is,
link |
01:27:43.820
it sounds like your sense is you can't just watch humans
link |
01:27:46.580
to learn about humans.
link |
01:27:48.260
You have to also be part of the human world.
link |
01:27:50.220
You have to interact with humans.
link |
01:27:51.420
Yeah, exactly.
link |
01:27:52.260
And I mean, then questions arise that start imperceptibly,
link |
01:27:57.380
but inevitably to slip beyond the realm of engineering.
link |
01:28:02.380
So questions like, if you have an agent
link |
01:28:05.940
that can do something that you can't do,
link |
01:28:10.900
under what conditions do you want that agent to do it?
link |
01:28:13.780
So if I have a robot that can play Beethoven sonatas
link |
01:28:24.700
better than any human, in the sense that the sensitivity,
link |
01:28:30.740
the expression is just beyond what any human,
link |
01:28:33.940
do I want to listen to that?
link |
01:28:36.300
Do I want to go to a concert and hear a robot play?
link |
01:28:38.780
These aren't engineering questions.
link |
01:28:41.340
These are questions about human preference
link |
01:28:44.340
and human culture.
link |
01:28:45.980
Psychology bordering on philosophy.
link |
01:28:47.940
Yeah, and then you start asking,
link |
01:28:50.260
well, even if we knew the answer to that,
link |
01:28:54.660
is it our place as AI engineers
link |
01:28:57.060
to build that into these agents?
link |
01:28:59.180
Probably the agents should interact with humans
link |
01:29:03.500
beyond the population of AI engineers
link |
01:29:05.620
and figure out what those humans want.
link |
01:29:08.780
And then when you start,
link |
01:29:10.620
I referred to this a moment ago,
link |
01:29:11.780
but even that becomes complicated.
link |
01:29:14.340
Because, what if two humans want different things?
link |
01:29:19.100
And you have only one agent that's able to interact with them
link |
01:29:22.380
and try to satisfy their preferences.
link |
01:29:24.620
Then you're into the realm of economics
link |
01:29:30.340
and social choice theory and even politics.
link |
01:29:33.660
So there's a sense in which,
link |
01:29:35.540
if you kind of follow what we're doing
link |
01:29:37.980
to its logical conclusion,
link |
01:29:39.940
then it goes beyond questions of engineering and technology
link |
01:29:45.060
and starts to shade imperceptibly into questions
link |
01:29:48.420
about what kind of society do you want?
link |
01:29:51.660
And actually, once that dawned on me,
link |
01:29:55.740
I actually felt,
link |
01:29:58.620
I don't know what the right word is,
link |
01:29:59.860
quite refreshed in my involvement in AI research.
link |
01:30:03.020
It was almost like building this kind of stuff
link |
01:30:06.300
is gonna lead us back to asking really fundamental questions
link |
01:30:10.220
about what is this,
link |
01:30:13.860
what's the good life and who gets to decide
link |
01:30:16.700
and bringing in viewpoints from multiple sub communities
link |
01:30:23.780
to help us shape the way that we live.
link |
01:30:27.460
There's something, it started making me feel like
link |
01:30:30.820
doing AI research in a fully responsible way
link |
01:30:38.300
could potentially lead to a kind of like cultural renewal.
link |
01:30:42.820
Yeah, it's the way to understand human beings
link |
01:30:48.180
at the individual, at the societal level.
link |
01:30:50.340
It may become a way to answer all the silly human questions
link |
01:30:54.020
of the meaning of life and all those kinds of things.
link |
01:30:57.060
Even if it doesn't give us a way
link |
01:30:58.060
of answering those questions,
link |
01:30:59.220
it may force us back to thinking about them.
link |
01:31:03.660
And it might bring, it might restore a certain,
link |
01:31:06.940
I don't know, a certain depth to,
link |
01:31:10.460
or even dare I say spirituality to the way that,
link |
01:31:16.380
to the world, I don't know.
link |
01:31:18.060
Maybe that's too grandiose.
link |
01:31:19.380
Well, I'm with you.
link |
01:31:21.020
I think AI will be the philosophy of the 21st century,
link |
01:31:27.620
the thing that will open the door.
link |
01:31:29.020
I think a lot of AI researchers are afraid to open that door
link |
01:31:32.500
of exploring the beautiful richness
link |
01:31:35.660
of the human agent interaction, human AI interaction.
link |
01:31:39.540
I'm really happy that somebody like you
link |
01:31:42.380
has opened that door.
link |
01:31:43.700
And one thing I often think about is the usual schema
link |
01:31:49.500
for thinking about human agent interaction
link |
01:31:54.500
as this kind of dystopian, oh, our robot overlords.
link |
01:32:00.460
And again, I hasten to say AI safety is hugely important.
link |
01:32:03.500
And I'm not saying we shouldn't be thinking
link |
01:32:06.420
about those risks, totally on board for that.
link |
01:32:09.540
But there's, having said that,
link |
01:32:17.060
what often follows for me is the thought
link |
01:32:18.860
that there's another kind of narrative
link |
01:32:22.980
that might be relevant, which is,
link |
01:32:24.780
when we think of humans gaining more and more information
link |
01:32:31.020
about human life, the narrative there is usually
link |
01:32:36.380
that they gain more and more wisdom
link |
01:32:38.540
and they get closer to enlightenment
link |
01:32:40.700
and they become more benevolent.
link |
01:32:43.260
And the Buddha is like, that's a totally different narrative.
link |
01:32:47.300
And why isn't it the case that we imagine
link |
01:32:50.380
that the AI systems that we're creating
link |
01:32:52.460
are just gonna, like, they're gonna figure out
link |
01:32:53.980
more and more about the way the world works
link |
01:32:55.660
and the way that humans interact
link |
01:32:56.820
and they'll become beneficent.
link |
01:32:59.180
I'm not saying that will happen.
link |
01:33:00.500
I don't honestly expect that to happen
link |
01:33:05.420
without setting things up very carefully.
link |
01:33:08.820
But it's another way things could go, right?
link |
01:33:11.340
And yeah, and I would even push back on that.
link |
01:33:13.820
I personally believe that most trajectories,
link |
01:33:18.820
natural human trajectories will lead us towards progress.
link |
01:33:25.460
Whereas there is a kind of sense
link |
01:33:28.420
that most trajectories in AI development
link |
01:33:30.820
will lead us into trouble.
link |
01:33:32.540
To me, we over focus on the worst case.
link |
01:33:37.140
It's like in computer science,
link |
01:33:38.500
theoretical computer science has had this focus
link |
01:33:40.860
on worst case analysis.
link |
01:33:42.060
There's something appealing to our human mind
link |
01:33:45.180
at some lowest level to be good.
link |
01:33:47.660
I mean, we don't wanna be eaten by the tiger, I guess.
link |
01:33:50.220
So we wanna do the worst case analysis.
link |
01:33:52.300
But the reality is that shouldn't stop us
link |
01:33:55.660
from actually building out all the other trajectories
link |
01:33:58.620
which are potentially leading to all the positive worlds,
link |
01:34:01.900
all the enlightenment.
link |
01:34:04.540
There's a book, Enlightenment Now,
link |
01:34:05.700
by Steven Pinker and so on.
link |
01:34:06.980
This is looking generally at human progress.
link |
01:34:09.660
And there's so many ways that human progress
link |
01:34:12.300
can happen with AI.
link |
01:34:13.900
And I think you have to do that research.
link |
01:34:16.300
You have to do that work.
link |
01:34:17.380
You have to do the, not just the AI safety work
link |
01:34:20.700
of the one worst case analysis.
link |
01:34:22.500
How do we prevent that?
link |
01:34:23.500
But the actual tools and the glue
link |
01:34:27.540
and the mechanisms of human AI interaction
link |
01:34:31.340
that would lead to all the positive actions that can go.
link |
01:34:34.180
It's a super exciting area, right?
link |
01:34:36.540
Yeah, we should be spending,
link |
01:34:38.340
we should be spending a lot of our time saying
link |
01:34:40.820
what can go wrong.
link |
01:34:42.860
I think it's harder to see that there's work to be done
link |
01:34:47.860
to bring into focus the question of what it would look like
link |
01:34:51.540
for things to go right.
link |
01:34:54.420
That's not obvious.
link |
01:34:57.660
And we wouldn't be doing this if we didn't have the sense
link |
01:34:59.620
there was huge potential, right?
link |
01:35:01.980
We're not doing this for no reason.
link |
01:35:05.100
We have a sense that AGI would be a major boon to humanity.
link |
01:35:10.100
But I think it's worth starting now,
link |
01:35:13.700
even when our technology is quite primitive,
link |
01:35:15.620
asking exactly what would that mean?
link |
01:35:19.420
We can start now with applications
link |
01:35:21.060
that are already gonna make the world a better place,
link |
01:35:22.580
like solving protein folding.
link |
01:35:25.060
I think DeepMind has gotten heavily
link |
01:35:27.860
into science applications lately,
link |
01:35:30.060
which I think is a wonderful, wonderful move
link |
01:35:34.380
for us to be making.
link |
01:35:36.060
But when we think about AGI,
link |
01:35:37.260
when we think about building fully intelligent
link |
01:35:39.860
agents that are gonna be able to, in a sense,
link |
01:35:42.460
do whatever they want,
link |
01:35:45.540
we should start thinking about
link |
01:35:46.740
what do we want them to want, right?
link |
01:35:48.940
What kind of world do we wanna live in?
link |
01:35:52.300
That's not an easy question.
link |
01:35:54.300
And I think we just need to start working on it.
link |
01:35:56.700
And even on the path to,
link |
01:35:58.620
it doesn't have to be AGI,
link |
01:35:59.900
but just intelligent agents that interact with us
link |
01:36:02.300
and help us enrich our own existence on social networks,
link |
01:36:06.220
for example, on recommender systems of varying intelligence.
link |
01:36:08.820
And there's so much interesting interaction
link |
01:36:10.540
that's yet to be understood and studied.
link |
01:36:12.300
And how do you create,
link |
01:36:15.540
I mean, Twitter is struggling with this very idea,
link |
01:36:19.460
how do you create AI systems
link |
01:36:21.420
that increase the quality and the health of a conversation?
link |
01:36:24.380
For sure.
link |
01:36:25.220
That's a beautiful human psychology question.
link |
01:36:28.500
And how do you do that
link |
01:36:29.740
without deception being involved,
link |
01:36:34.740
without manipulation being involved,
link |
01:36:38.100
maximizing human autonomy?
link |
01:36:42.420
And how do you make these choices in a democratic way?
link |
01:36:45.820
How do we face the,
link |
01:36:50.180
again, I'm speaking for myself here.
link |
01:36:52.740
How do we face the fact that
link |
01:36:55.700
it's a small group of people
link |
01:36:57.740
who have the skillset to build these kinds of systems,
link |
01:37:01.340
but what it means to make the world a better place
link |
01:37:05.860
is something that we all have to be talking about.
link |
01:37:09.020
Yeah, the world that we're trying to make a better place
link |
01:37:14.020
includes a huge variety of different kinds of people.
link |
01:37:18.020
Yeah, how do we cope with that?
link |
01:37:19.420
This is a problem that has been discussed
link |
01:37:22.820
in gory, extensive detail in social choice theory.
link |
01:37:28.500
One thing I'm really interested in
link |
01:37:29.900
and one thing I'm really enjoying
link |
01:37:32.900
about the recent direction work has taken
link |
01:37:35.180
in some parts of my team is that,
link |
01:37:36.900
yeah, we're reading the AI literature,
link |
01:37:38.620
we're reading the neuroscience literature,
link |
01:37:39.940
but we've also started reading economics
link |
01:37:42.940
and, as I mentioned, social choice theory,
link |
01:37:44.820
even some political theory,
link |
01:37:45.940
because it turns out that it all becomes relevant.
link |
01:37:50.380
It all becomes relevant.
link |
01:37:53.540
But at the same time,
link |
01:37:55.660
we've been trying not to write philosophy papers,
link |
01:38:00.140
we've been trying not to write position papers.
link |
01:38:01.980
We're trying to figure out ways
link |
01:38:03.780
of doing actual empirical research
link |
01:38:05.740
that kind of take the first small steps
link |
01:38:07.780
to thinking about what it really means
link |
01:38:10.820
for humans with all of their complexity
link |
01:38:13.580
and contradiction and paradox
link |
01:38:18.540
to be brought into contact with these AI systems
link |
01:38:22.340
in a way that really makes the world a better place.
link |
01:38:25.540
Often, reinforcement learning frameworks
link |
01:38:27.540
actually kind of allow you to do that,
link |
01:38:30.860
machine learning, and so that's the exciting thing about AI
link |
01:38:33.580
is it allows you to reduce the unsolvable problem,
link |
01:38:37.260
philosophical problem, into something more concrete
link |
01:38:40.380
that you can get ahold of.
link |
01:38:41.700
Yeah, and it allows you to kind of define the problem
link |
01:38:43.900
in some way that allows for growth in the system
link |
01:38:49.980
that's sort of, you know,
link |
01:38:51.140
you're not responsible for the details, right?
link |
01:38:54.100
You say, this is generally what I want you to do,
link |
01:38:56.700
and then learning takes care of the rest.
link |
01:38:59.580
Of course, the safety issues arise in that context,
link |
01:39:04.100
but I think also some of these positive issues
link |
01:39:05.980
arise in that context.
link |
01:39:06.940
What would it mean for an AI system
link |
01:39:09.180
to really come to understand what humans want?
link |
01:39:14.780
And with all of the subtleties of that, right?
link |
01:39:18.940
You know, humans want help with certain things,
link |
01:39:24.660
but they don't want everything done for them, right?
link |
01:39:27.420
There is, part of the satisfaction
link |
01:39:29.660
that humans get from life is in accomplishing things.
link |
01:39:32.700
So if there were devices around that did everything for,
link |
01:39:34.660
you know, I often think of the movie WALL-E, right?
link |
01:39:37.500
That's like dystopian in a totally different way.
link |
01:39:39.380
It's like, the machines are doing everything for us.
link |
01:39:41.340
That's not what we wanted.
link |
01:39:43.780
You know, anyway, I find this, you know,
link |
01:39:46.700
this opens up a whole landscape of research
link |
01:39:50.500
that feels affirmative and exciting.
link |
01:39:52.740
To me, it's one of the most exciting, and it's wide open.
link |
01:39:56.020
We have to, because it's a cool paper,
link |
01:39:58.260
talk about dopamine.
link |
01:39:59.300
Oh yeah, okay, so I can.
link |
01:40:01.100
We were gonna, I was gonna give you a quick summary.
link |
01:40:04.980
Yeah, a quick summary of, what's the title of the paper?
link |
01:40:09.900
I think we called it a distributional code for value
link |
01:40:14.900
in dopamine-based reinforcement learning, yes.
link |
01:40:19.020
So that's another project that grew out of pure AI research.
link |
01:40:25.740
A number of people at DeepMind and a few other places
link |
01:40:29.620
had started working on a new version
link |
01:40:32.340
of reinforcement learning,
link |
01:40:35.740
which was defined by taking something
link |
01:40:38.940
in traditional reinforcement learning and just tweaking it.
link |
01:40:41.420
So the thing that they took
link |
01:40:42.740
from traditional reinforcement learning was a value signal.
link |
01:40:46.860
So at the center of reinforcement learning,
link |
01:40:49.540
at least most algorithms, is some representation
link |
01:40:52.580
of how well things are going,
link |
01:40:54.140
your expected cumulative future reward.
link |
01:40:57.660
And that's usually represented as a single number.
link |
01:41:01.220
So if you imagine a gambler in a casino
link |
01:41:04.260
and the gambler's thinking, well, I have this probability
link |
01:41:07.980
of winning such and such an amount of money,
link |
01:41:09.540
and I have this probability of losing such and such
link |
01:41:11.260
an amount of money, that situation would be represented
link |
01:41:14.860
as a single number, which is like the expected,
link |
01:41:17.260
the weighted average of all those outcomes.
link |
01:41:20.580
And this new form of reinforcement learning said,
link |
01:41:23.740
well, what if we generalize that
link |
01:41:26.460
to a distributional representation?
link |
01:41:28.140
So now we think of the gambler as literally thinking,
link |
01:41:30.820
well, there's this probability
link |
01:41:32.260
that I'll win this amount of money,
link |
01:41:33.620
and there's this probability
link |
01:41:34.580
that I'll lose that amount of money,
link |
01:41:35.700
and we don't reduce that to a single number.
link |
01:41:37.820
And it had been observed through experiments,
link |
01:41:40.580
through just trying this out,
link |
01:41:42.420
that that kind of distributional representation
link |
01:41:45.900
really accelerated reinforcement learning
link |
01:41:49.620
and led to better policies.
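To make that concrete, here is a minimal sketch in Python (my illustration, not code from the conversation; the outcomes and probabilities are invented) contrasting the usual single-number value estimate with a distributional one for the gambler example:

```python
# A gambler's situation: possible monetary outcomes and their
# probabilities (numbers invented for illustration).
outcomes = [100.0, -50.0, 0.0]
probs = [0.2, 0.3, 0.5]

# Standard RL: collapse the situation to one number, the expected
# cumulative reward (here just the weighted average of outcomes).
expected_value = sum(p * o for p, o in zip(probs, outcomes))
print(expected_value)  # 5.0

# Distributional RL: keep the whole distribution of outcomes instead
# of reducing it to a single number.
value_distribution = list(zip(outcomes, probs))
print(value_distribution)  # [(100.0, 0.2), (-50.0, 0.3), (0.0, 0.5)]
```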
link |
01:41:52.380
What's your intuition about,
link |
01:41:53.620
so we're talking about rewards.
link |
01:41:55.260
Yeah.
link |
01:41:56.100
So what's your intuition why that is, why does it do that?
link |
01:41:58.420
Well, it's kind of a surprising historical note,
link |
01:42:02.620
at least surprised me when I learned it,
link |
01:42:04.460
that this hadn't first been proven to work.
link |
01:42:07.260
This had been tried out in a kind of heuristic way.
link |
01:42:09.820
People thought, well, gee, what would happen if we tried?
link |
01:42:12.500
And then it had this, empirically,
link |
01:42:14.580
it had this striking effect.
link |
01:42:17.300
And it was only then that people started thinking,
link |
01:42:19.300
well, gee, wait, why?
link |
01:42:21.380
Wait, why?
link |
01:42:22.220
Why is this working?
link |
01:42:23.420
And that's led to a series of studies
link |
01:42:26.180
just trying to figure out why it works, which is ongoing.
link |
01:42:29.740
But one thing that's already clear from that research
link |
01:42:31.780
is that one reason that it helps
link |
01:42:34.340
is that it drives richer representation learning.
link |
01:42:39.420
So if you imagine two situations
link |
01:42:43.060
that have the same expected value,
link |
01:42:45.300
the same kind of weighted average value,
link |
01:42:48.980
standard deep reinforcement learning algorithms
link |
01:42:51.300
are going to take those two situations
link |
01:42:53.500
and kind of, in terms of the way
link |
01:42:55.020
they're represented internally,
link |
01:42:56.460
they're gonna squeeze them together
link |
01:42:58.180
because the thing that you're trying to represent,
link |
01:43:02.580
which is their expected value, is the same.
link |
01:43:04.180
So all the way through the system,
link |
01:43:06.260
things are gonna be mushed together.
link |
01:43:08.420
But what if those two situations
link |
01:43:11.060
actually have different value distributions?
link |
01:43:13.940
They have the same average value,
link |
01:43:16.900
but they have different distributions of value.
link |
01:43:19.900
In that situation, distributional learning
link |
01:43:22.300
will maintain the distinction between these two things.
link |
01:43:25.100
So to make a long story short,
link |
01:43:26.820
distributional learning can keep things separate
link |
01:43:30.020
in the internal representation
link |
01:43:32.180
that might otherwise be conflated or squished together.
link |
01:43:35.140
And maintaining those distinctions
link |
01:43:36.380
can be useful when the system is now faced
link |
01:43:40.180
with some other task where the distinction is important.
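A toy illustration of that point (mine, not from the paper): two situations with identical expected value but different return distributions are indistinguishable to a scalar value target, yet remain distinct under a distributional one:

```python
# Two situations, each a map from outcome to probability.
state_risky = {+1.0: 0.5, -1.0: 0.5}  # win or lose 1 with equal odds
state_safe = {0.0: 1.0}               # always exactly 0

def mean(dist):
    return sum(outcome * p for outcome, p in dist.items())

# A scalar target collapses both to the same number, so a network
# trained on it is pushed to represent them identically.
print(mean(state_risky), mean(state_safe))  # 0.0 0.0

# A distributional target keeps the distinction alive.
print(state_risky == state_safe)  # False
```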
link |
01:43:43.260
If we look at the optimistic
link |
01:43:44.540
and pessimistic dopamine neurons.
link |
01:43:46.580
So first of all, what is dopamine?
link |
01:43:50.900
Oh, God.
link |
01:43:51.740
Why is this at all useful
link |
01:43:58.220
to think about in the artificial intelligence sense?
link |
01:44:00.740
But what do we know about dopamine in the human brain?
link |
01:44:04.180
What is it?
link |
01:44:05.620
Why is it useful?
link |
01:44:06.460
Why is it interesting?
link |
01:44:07.460
What does it have to do with the prefrontal cortex
link |
01:44:09.380
and learning in general?
link |
01:44:10.260
Yeah, so, well, this is also a case
link |
01:44:15.540
where there's a huge amount of detail and debate.
link |
01:44:19.660
But one currently prevailing idea
link |
01:44:24.740
is that the function of this neurotransmitter dopamine
link |
01:44:29.060
resembles a particular component
link |
01:44:33.460
of standard reinforcement learning algorithms,
link |
01:44:36.860
which is called the reward prediction error.
link |
01:44:39.860
So I was talking a moment ago
link |
01:44:41.580
about these value representations.
link |
01:44:44.220
How do you learn them?
link |
01:44:45.180
How do you update them based on experience?
link |
01:44:46.900
Well, if you made some prediction about a future reward
link |
01:44:51.820
and then you get more reward than you were expecting,
link |
01:44:54.460
then probably retrospectively,
link |
01:44:56.020
you want to go back and increase the value representation
link |
01:45:00.740
that you attached to that earlier situation.
link |
01:45:03.820
If you got less reward than you were expecting,
link |
01:45:06.180
you should probably decrement that estimate.
link |
01:45:08.540
And that's the process of temporal difference.
link |
01:45:10.300
Exactly, this is the central mechanism
link |
01:45:12.020
of temporal difference learning,
link |
01:45:12.860
which is sort of the backbone
link |
01:45:17.660
of our armamentarium in RL.
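As a minimal illustration of that update (a Python sketch with invented state names and constants, not the paper's code), the reward prediction error is the gap between the reward you received, plus what you now expect, and what you originally predicted:

```python
def td_update(V, s, r, s_next, alpha=0.1, gamma=0.99):
    """One tabular TD(0) step; V maps states to value estimates."""
    # Reward prediction error: got more than expected -> positive,
    # got less than expected -> negative.
    rpe = r + gamma * V.get(s_next, 0.0) - V.get(s, 0.0)
    # Nudge the earlier value estimate in the direction of the error.
    V[s] = V.get(s, 0.0) + alpha * rpe
    return rpe

V = {}
rpe = td_update(V, s="cue", r=1.0, s_next="outcome")
print(rpe, V["cue"])  # 1.0 0.1
# Under the classic hypothesis, dopamine firing tracks this rpe signal.
```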
link |
01:45:20.420
And this connection between the reward prediction error
link |
01:45:25.020
and dopamine was made in the 1990s.
link |
01:45:31.940
And there's been a huge amount of research
link |
01:45:33.420
that seems to back it up.
link |
01:45:35.860
Dopamine may be doing other things,
link |
01:45:37.340
but this is clearly, at least roughly,
link |
01:45:39.860
one of the things that it's doing.
link |
01:45:42.460
But the usual idea was that dopamine
link |
01:45:45.100
was representing these reward prediction errors,
link |
01:45:48.060
again, in this kind of single number way,
link |
01:45:51.340
representing your surprise with a single number.
link |
01:45:56.700
And in distributional reinforcement learning,
link |
01:45:58.500
this kind of new elaboration of the standard approach,
link |
01:46:03.660
it's not only the value function
link |
01:46:06.060
that gets a distributional representation,
link |
01:46:08.460
it's also the reward prediction error.
link |
01:46:10.940
And so what happened was that Will Dabney,
link |
01:46:16.180
one of my collaborators who was one of the first people
link |
01:46:18.980
to work on distributional temporal difference learning,
link |
01:46:22.300
talked to a guy in my group, Zeb Kurth-Nelson,
link |
01:46:25.740
who's a computational neuroscientist,
link |
01:46:27.660
and said, gee, you know, is it possible
link |
01:46:29.580
that dopamine might be doing something
link |
01:46:31.740
like this distributional coding thing?
link |
01:46:33.420
And they started looking at what was in the literature,
link |
01:46:35.980
and then they brought me in,
link |
01:46:36.820
and we started talking to Nao Uchida,
link |
01:46:39.220
and we came up with some specific predictions
link |
01:46:41.300
about if the brain is using
link |
01:46:43.500
this kind of distributional coding,
link |
01:46:45.140
then in the tasks that Nao has studied,
link |
01:46:47.340
you should see this, this, this, and this,
link |
01:46:49.300
and that's where the paper came from.
link |
01:46:50.620
We kind of enumerated a set of predictions,
link |
01:46:53.540
all of which ended up being fairly clearly confirmed,
link |
01:46:57.260
and all of which leads to at least some initial indication
link |
01:47:00.740
that the brain might be doing something
link |
01:47:02.180
like this distributional coding,
link |
01:47:03.420
that dopamine might be representing surprise signals
link |
01:47:06.780
in a way that is not just collapsing everything
link |
01:47:09.980
to a single number, but instead is kind of respecting
link |
01:47:12.180
the variety of future outcomes, if that makes sense.
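One way to picture that (my own illustrative Python, loosely in the spirit of the paper's model rather than its published analysis): a population of value predictors that weight positive and negative prediction errors asymmetrically will fan out across the reward distribution, with pessimistic units settling near low outcomes and optimistic units near high ones:

```python
import random

taus = [0.1, 0.25, 0.5, 0.75, 0.9]  # pessimistic ... optimistic units
values = [0.0] * len(taus)
alpha = 0.05

def sample_reward():
    # Two-outcome reward with mean 5: a scalar learner sees only the 5.
    return 10.0 if random.random() < 0.5 else 0.0

for _ in range(20000):
    r = sample_reward()
    for i, tau in enumerate(taus):
        err = r - values[i]
        # Asymmetric scaling: optimistic units amplify positive errors,
        # pessimistic units amplify negative ones.
        rate = tau if err > 0 else (1.0 - tau)
        values[i] += alpha * rate * err

print([round(v, 1) for v in values])
# Roughly [1.0, 2.5, 5.0, 7.5, 9.0]: the units spread out between the
# low and high outcomes, sketching the reward distribution's shape.
```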
link |
01:47:16.620
So yeah, so that's showing, suggesting possibly
link |
01:47:19.580
that dopamine has a really interesting
link |
01:47:21.900
representation scheme in the human brain
link |
01:47:25.940
for its reward signal.
link |
01:47:27.660
Exactly. That's fascinating.
link |
01:47:29.660
That's another beautiful example of AI
link |
01:47:32.140
revealing something nice about neuroscience,
link |
01:47:34.460
potentially suggesting possibilities.
link |
01:47:36.260
Well, you never know.
link |
01:47:37.100
So the minute you publish a paper like that,
link |
01:47:39.260
the next thing you think is, I hope that replicates.
link |
01:47:42.620
Like, I hope we see that same thing in other data sets,
link |
01:47:44.940
but of course, several labs now
link |
01:47:47.380
are doing the followup experiments, so we'll know soon.
link |
01:47:50.180
But it has been a lot of fun for us
link |
01:47:52.580
to take these ideas from AI
link |
01:47:54.780
and kind of bring them into neuroscience
link |
01:47:56.820
and see how far we can get.
link |
01:47:58.980
So we kind of talked about it a little bit,
link |
01:48:01.300
but where do you see the field of neuroscience
link |
01:48:04.020
and artificial intelligence heading broadly?
link |
01:48:07.740
Like, what are the possible exciting areas
link |
01:48:12.580
that you can see breakthroughs in the next,
link |
01:48:15.300
let's get crazy, not just three or five years,
link |
01:48:17.980
but the next 10, 20, 30 years
link |
01:48:22.340
that would make you excited
link |
01:48:26.100
and perhaps you'd be part of?
link |
01:48:29.020
On the neuroscience side,
link |
01:48:32.980
there's a great deal of interest now
link |
01:48:34.420
in what's going on in AI.
link |
01:48:36.780
And at the same time,
link |
01:48:41.500
I feel like, so neuroscience,
link |
01:48:45.900
especially the part of neuroscience
link |
01:48:50.100
that's focused on circuits and systems,
link |
01:48:54.180
kind of like really mechanism focused,
link |
01:48:57.780
there's been this explosion in new technology.
link |
01:49:01.980
And up until recently,
link |
01:49:05.100
the experiments that have exploited this technology
link |
01:49:08.940
have not involved a lot of interesting behavior.
link |
01:49:13.340
And this is for a variety of reasons,
link |
01:49:16.300
one of which is in order to employ
link |
01:49:18.700
some of these technologies,
link |
01:49:19.860
you actually have to, if you're studying a mouse,
link |
01:49:22.260
you have to head fix the mouse.
link |
01:49:23.620
In other words, you have to like immobilize the mouse.
link |
01:49:26.260
And so it's been tricky to come up
link |
01:49:28.700
with ways of eliciting interesting behavior
link |
01:49:30.860
from a mouse that's restrained in this way,
link |
01:49:33.460
but people have begun to create
link |
01:49:35.660
very interesting solutions to this,
link |
01:49:39.460
like virtual reality environments
link |
01:49:41.300
where the animal can kind of move a track ball.
link |
01:49:43.860
And as people have kind of begun to explore
link |
01:49:48.780
what you can do with these technologies,
link |
01:49:50.260
I feel like more and more people are asking,
link |
01:49:52.820
well, let's try to bring behavior into the picture.
link |
01:49:55.740
Let's try to like reintroduce behavior,
link |
01:49:58.220
which was supposed to be what this whole thing was about.
link |
01:50:01.020
And I'm hoping that those two trends,
link |
01:50:05.700
the kind of growing interest in behavior
link |
01:50:09.180
and the widespread interest in what's going on in AI,
link |
01:50:14.180
will come together to kind of open a new chapter
link |
01:50:17.580
in neuroscience research where there's a kind of
link |
01:50:22.580
a rebirth of interest in the structure of behavior
link |
01:50:25.820
and its underlying substrates,
link |
01:50:27.540
but that that research is being informed
link |
01:50:31.340
by computational mechanisms
link |
01:50:33.700
that we're coming to understand in AI.
link |
01:50:36.740
If we can do that, then we might be taking a step closer
link |
01:50:39.580
to this utopian future that we were talking about earlier
link |
01:50:43.260
where there's really no distinction
link |
01:50:44.860
between psychology and neuroscience.
link |
01:50:46.940
Neuroscience is about studying the mechanisms
link |
01:50:50.900
that underlie whatever it is the brain is for,
link |
01:50:54.660
and what is the brain for?
link |
01:50:56.340
What is the brain for? It's for behavior.
link |
01:50:58.460
I feel like we could maybe take a step toward that now
link |
01:51:03.100
if people are motivated in the right way.
link |
01:51:06.780
You also asked about AI.
link |
01:51:08.780
So that was a neuroscience question.
link |
01:51:10.340
You said neuroscience, that's right.
link |
01:51:12.180
And especially places like DeepMind
link |
01:51:13.740
are interested in both branches.
link |
01:51:15.260
So what about the engineering of intelligence systems?
link |
01:51:20.820
I think one of the key challenges
link |
01:51:24.900
that a lot of people are seeing now in AI
link |
01:51:28.700
is to build systems that have the kind of flexibility
link |
01:52:34.300
that humans have, in two senses.
link |
01:51:38.580
One is that humans can be good at many things.
link |
01:51:41.860
They're not just expert at one thing.
link |
01:51:44.300
And they're also flexible in the sense
link |
01:51:45.620
that they can switch between things very easily
link |
01:51:49.660
and they can pick up new things very quickly
link |
01:51:52.060
because they very ably see what a new task has in common
link |
01:51:57.620
with other things that they've done.
link |
01:52:01.860
And that's something that our AI systems
link |
01:52:05.340
just blatantly do not have.
link |
01:52:09.100
There are some people who like to argue
link |
01:52:11.380
that deep learning and deep RL
link |
01:52:13.740
are simply wrong for getting that kind of flexibility.
link |
01:52:17.080
I don't share that belief,
link |
01:52:20.060
but the simple fact of the matter
link |
01:52:22.620
is we're not building things yet
link |
01:52:23.860
that do have that kind of flexibility.
link |
01:52:25.500
And I think the attention of a large part
link |
01:52:28.700
of the AI community is starting to pivot to that question.
link |
01:52:31.500
How do we get that?
link |
01:52:33.460
That's gonna lead to a focus on abstraction.
link |
01:52:38.060
It's gonna lead to a focus on
link |
01:52:40.460
what in psychology we call cognitive control,
link |
01:52:43.620
which is the ability to switch between tasks,
link |
01:52:45.900
the ability to quickly put together a program of behavior
link |
01:52:49.300
that you've never executed before,
link |
01:52:51.740
but you know makes sense for a particular set of demands.
link |
01:52:55.260
It's very closely related to what the prefrontal cortex does
link |
01:52:59.140
on the neuroscience side.
link |
01:53:01.060
So I think it's gonna be an interesting new chapter.
link |
01:53:05.380
So that's the reasoning side and cognition side,
link |
01:53:07.420
but let me ask the over romanticized question.
link |
01:53:10.540
Do you think we'll ever engineer an AGI system
link |
01:53:13.700
that we humans would be able to love
link |
01:53:17.140
and that would love us back?
link |
01:53:19.580
So have that level and depth of connection?
link |
01:53:26.220
I love that question.
link |
01:53:27.860
And it relates closely to things
link |
01:53:31.980
that I've been thinking about a lot lately,
link |
01:53:33.900
in the context of this human AI research.
link |
01:53:36.620
There's social psychology research
link |
01:53:41.140
in particular by Susan Fiske at Princeton,
link |
01:53:44.940
in the department where I used to work,
link |
01:53:48.420
where she dissects human attitudes toward other humans
link |
01:53:54.500
into a sort of two dimensional scheme.
link |
01:53:59.900
And one dimension is about ability.
link |
01:54:03.940
How able, how capable is this other person?
link |
01:54:10.100
But the other dimension is warmth.
link |
01:54:11.780
So you can imagine another person who's very skilled
link |
01:54:15.580
and capable, but is very cold.
link |
01:54:19.540
And you wouldn't really rate them highly;
link |
01:54:22.500
you might have some reservations about that other person.
link |
01:54:26.660
But there's also a kind of reservation
link |
01:54:28.980
that we might have about another person
link |
01:54:31.020
who elicits in us or displays a lot of human warmth,
link |
01:54:34.860
but is not good at getting things done.
link |
01:54:37.940
We reserve our greatest esteem really
link |
01:54:40.940
for people who are both highly capable
link |
01:54:43.820
and also quite warm.
link |
01:54:47.300
That's like the best of the best.
link |
01:54:49.820
This isn't a normative statement I'm making.
link |
01:54:53.300
This is just an empirical statement.
link |
01:54:55.780
This is what humans seem to do.
link |
01:54:57.180
These are the two dimensions
link |
01:54:59.740
along which people seem to size each other up.
link |
01:55:02.660
And in AI research,
link |
01:55:03.980
there's a lot of people who think that humans are
link |
01:55:06.580
very capable, and so
link |
01:55:08.700
we really focus on this capability thing.
link |
01:55:11.420
We want our agents to be able to do stuff.
link |
01:55:13.420
This thing can play go at a superhuman level.
link |
01:55:15.460
That's awesome.
link |
01:55:16.860
But that's only one dimension.
link |
01:55:18.700
What about the other dimension?
link |
01:55:20.060
What would it mean for an AI system to be warm?
link |
01:55:25.060
And I don't know, maybe there are easy solutions here.
link |
01:55:27.620
Like we can put a face on our AI systems.
link |
01:55:30.620
It's cute, it has big ears.
link |
01:55:32.020
I mean, that's probably part of it.
link |
01:55:33.820
But I think it also has to do with a pattern of behavior.
link |
01:55:36.540
A pattern of what would it mean for an AI system
link |
01:55:40.180
to display caring, compassionate behavior
link |
01:55:43.460
in a way that actually made us feel like it was for real?
link |
01:55:47.740
That we didn't feel like it was simulated.
link |
01:55:49.940
We didn't feel like we were being duped.
link |
01:55:53.100
To me, people talk about the Turing test
link |
01:55:55.740
or some descendant of it.
link |
01:55:57.860
I feel like that's the ultimate Turing test.
link |
01:56:01.140
Is there an AI system that can not only convince us
link |
01:56:05.460
that it knows how to reason
link |
01:56:07.180
and it knows how to interpret language,
link |
01:56:09.100
but that we're comfortable saying,
link |
01:56:12.700
yeah, that AI system's a good guy.
link |
01:56:15.980
On the warmth scale, whatever warmth is,
link |
01:56:18.700
we kind of intuitively understand it,
link |
01:56:20.860
but we also wanna be able to, yeah,
link |
01:56:25.060
we don't understand it explicitly enough yet
link |
01:56:29.180
to be able to engineer it.
link |
01:56:30.940
Exactly.
link |
01:56:31.780
And that's an open scientific question.
link |
01:56:33.620
You kind of alluded to it several times
link |
01:56:35.340
in the human AI interaction.
link |
01:56:37.220
That's a question that should be studied
link |
01:56:38.900
and probably one of the most important questions
link |
01:56:42.300
as we move to AGI.
link |
01:56:43.540
We humans are so good at it.
link |
01:56:46.020
Yeah.
link |
01:56:46.860
It's not just that we're born warm.
link |
01:56:50.140
I suppose some people are warmer than others
link |
01:56:53.060
given whatever genes they manage to inherit.
link |
01:56:55.700
But there are also learned skills involved.
link |
01:57:01.620
There are ways of communicating to other people
link |
01:57:04.740
that you care, that they matter to you,
link |
01:57:07.740
that you're enjoying interacting with them, right?
link |
01:57:11.100
And we learn these skills from one another.
link |
01:57:14.140
And it's not out of the question
link |
01:57:16.740
that we could build engineered systems.
link |
01:57:20.020
I think it's hopeless, as you say,
link |
01:57:21.460
that we could somehow hand design
link |
01:57:23.580
these sorts of behaviors.
link |
01:57:26.100
But it's not out of the question
link |
01:57:27.060
that we could build systems that kind of,
link |
01:57:30.060
we instill in them something that sets them out
link |
01:57:34.460
in the right direction,
link |
01:57:35.980
so that they end up learning what it is
link |
01:57:39.580
to interact with humans
link |
01:57:40.540
in a way that's gratifying to humans.
link |
01:57:44.180
I mean, honestly, if that's not where we're headed,
link |
01:57:49.220
I want out.
link |
01:57:50.340
I think it's exciting as a scientific problem,
link |
01:57:54.940
just as you described.
link |
01:57:56.820
I honestly don't see a better way to end it
link |
01:57:59.500
than talking about warmth and love.
link |
01:58:01.180
And Matt, I don't think I've ever had such a wonderful
link |
01:58:05.380
conversation where my questions were so bad
link |
01:58:07.540
and your answers were so beautiful.
link |
01:58:09.380
So I deeply appreciate it.
link |
01:58:10.740
I really enjoyed it.
link |
01:58:11.580
Thanks for talking to me.
link |
01:58:12.420
Well, it's been very fun.
link |
01:58:13.260
As you can probably tell,
link |
01:58:17.140
there's something I like about kind of thinking
link |
01:58:19.020
outside the box and like,
link |
01:58:21.060
so it's good having an opportunity to do that.
link |
01:58:22.940
Awesome.
link |
01:58:23.780
Thanks so much for doing it.
link |
01:58:25.620
Thanks for listening to this conversation
link |
01:58:27.180
with Matt Botvinick.
link |
01:58:28.420
And thank you to our sponsors,
link |
01:58:30.540
The Jordan Harbinger Show
link |
01:58:32.300
and Magic Spoon Low Carb Keto Cereal.
link |
01:58:36.140
Please consider supporting this podcast
link |
01:58:38.020
by going to jordanharbinger.com slash lex
link |
01:58:41.020
and also going to magicspoon.com slash lex
link |
01:58:44.940
and using code lex at checkout.
link |
01:58:48.220
Click the links, buy all the stuff.
link |
01:58:50.900
It's the best way to support this podcast
link |
01:58:52.860
and the journey I'm on in my research and the startup.
link |
01:58:57.260
If you enjoy this thing, subscribe on YouTube,
link |
01:58:59.580
review it with the five stars in Apple Podcasts,
link |
01:59:02.380
support it on Patreon, follow on Spotify
link |
01:59:05.380
or connect with me on Twitter at lexfridman.
link |
01:59:08.220
Again, spelled miraculously without the E,
link |
01:59:12.220
just F R I D M A N.
link |
01:59:15.060
And now let me leave you with some words
link |
01:59:17.100
from neurologist V.S. Ramachandran.
link |
01:59:20.820
How can a three pound mass of jelly
link |
01:59:23.340
that you can hold in your palm imagine angels,
link |
01:59:26.620
contemplate the meaning of infinity
link |
01:59:28.700
and even question its own place in the cosmos?
link |
01:59:31.740
Especially awe inspiring is the fact that any single brain,
link |
01:59:35.660
including yours, is made up of atoms
link |
01:59:38.580
that were forged in the hearts
link |
01:59:40.060
of countless far flung stars billions of years ago.
link |
01:59:45.500
These particles drifted for eons and light years
link |
01:59:48.340
until gravity and chance brought them together here, now.
link |
01:59:53.180
These atoms now form a conglomerate, your brain,
link |
01:59:57.540
that can not only ponder the very stars that gave it birth,
link |
02:00:00.860
but can also think about its own ability to think
link |
02:00:04.180
and wonder about its own ability to wonder.
link |
02:00:07.820
With the arrival of humans, it has been said,
link |
02:00:10.660
the universe has suddenly become conscious of itself.
link |
02:00:14.580
This truly is the greatest mystery of all.
link |
02:00:18.620
Thank you for listening and hope to see you next time.