Tomaso Poggio: Brains, Minds, and Machines | Lex Fridman Podcast #13

The following is a conversation with Tomaso Poggio. He's a professor at MIT and is a director of the Center for Brains, Minds, and Machines. Cited over 100,000 times, his work has had a profound impact on our understanding of the nature of intelligence, in both biological and artificial systems. He has been an advisor to many highly impactful researchers and entrepreneurs in AI, including Demis Hassabis of DeepMind, Amnon Shashua of Mobileye, and Christof Koch of the Allen Institute for Brain Science.

This conversation is part of the MIT course on artificial general intelligence and the Artificial Intelligence podcast. If you enjoy it, subscribe on YouTube, iTunes, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D.

And now, here's my conversation with Tomaso Poggio.
You've mentioned that in your childhood you developed a fascination with physics, especially the theory of relativity, and that Einstein was also a childhood hero to you. What aspect of Einstein's genius, the nature of his genius, do you think was essential for discovering the theory of relativity?
You know, Einstein was a hero to me, and I'm sure to many people, because he was able to make, of course, a major, major contribution to physics with, simplifying a bit, just a gedankenexperiment, a thought experiment: imagining communication with lights between a stationary observer and somebody on a train. And I thought, you know, the fact that just with the force of his thought, of his thinking, of his mind, he could get to something so deep in terms of physical reality, how time depends on space and speed, it was something absolutely fascinating. It was the power of intelligence, the power of the mind.
Do you think the ability to imagine, to visualize as he did, as a lot of great physicists do, do you think that's in all of us human beings? Or is there something special to that one particular human being?
I think, you know, all of us can learn and have, in principle, similar breakthroughs. There are lessons to be learned from Einstein. He was one of five PhD students at ETH, the Eidgenössische Technische Hochschule in Zurich. And he was the worst of the five, the only one who did not get an academic position when he graduated, when he finished his PhD. And he went to work, as everybody knows, for the patent office. And so it's not so much that he worked for the patent office, but the fact that obviously he was smart, but he was not a top student. He was obviously the anticonformist. He was not thinking in the traditional way that probably his teachers and the other students were. So there is a lot to be said about trying to do the opposite, or something quite different, from what other people are doing. That's certainly true for the stock market. Never buy if everybody's buying. And it's also true for science.
So you've also mentioned, staying on the theme of physics, that you were excited at a young age by the mysteries of the universe that physics could uncover. Such as, I saw mentioned, the possibility of time travel. So, the most out-of-the-box question I think I'll get to ask today: do you think time travel is possible?
Well, it would be nice if it were possible right now. In science, you never say no.

But your understanding of the nature of time?

It's very likely that it's not possible to travel in time. We may be able to travel forward in time if we can, for instance, freeze ourselves or go on some spacecraft traveling close to the speed of light. But in terms of actively traveling, for instance, back in time, I find it probably very unlikely.
So do you still hold the underlying dream of engineering intelligence that will build systems that are able to do such huge leaps, like discovering the kind of mechanism that would be required to travel through time? Do you still hold that dream, or echoes of it from your childhood?
I think there are certain problems that probably cannot be solved, depending on what you believe about physical reality. Maybe it's totally impossible to create energy from nothing or to travel back in time. But about making machines that can think as well as we do or better, or, more likely, especially in the short and medium term, help us think better, which is, in a sense, happening already with the computers we have, and it will happen more and more, that I certainly believe. And I don't see, in principle, why computers at some point could not become more intelligent than we are, although the word intelligence is a tricky one, and one we should discuss what I mean with that. Intelligence, consciousness, words like love, all these need to be disentangled.
So you've mentioned also that you believe the problem of intelligence is the greatest problem in science, greater than the origin of life and the origin of the universe. You've also, in the talks I've listened to, said that you're open to arguments against you. So what do you think is the most captivating aspect of this problem of understanding the nature of intelligence? Why does it captivate you as it does?
Well, originally, I think one of the motivations I had as, I guess, a teenager, when I was infatuated with the theory of relativity, was really that I found that there was the problem of time and space and general relativity, but there were so many other problems of the same level of difficulty and importance that, even if I were Einstein, it was difficult to hope to solve all of them. So what about solving a problem whose solution allowed me to solve all the problems? And this was: what if we could find the key to an intelligence 10 times better or faster than Einstein's?
So that's sort of seeing artificial intelligence as a tool to expand our capabilities. But is there just an inherent curiosity in you, in just understanding what it is in here that makes it all work?
Yes, absolutely, you're right. So I started saying this was the motivation when I was a teenager. But soon after, I think the problem of human intelligence became a real focus of my science and my research, because I think, for me, the most interesting problem is really asking who we are. It's asking not only a question about science, but even about the very tool we are using to do science, which is our brain. How does our brain work? Where does it come from? What are its limitations? Can we make it better? And that, in many ways, is the ultimate question that underlies this whole effort of science.
So you've made significant contributions in both the science of intelligence and the engineering of intelligence. In a hypothetical way, let me ask: how far do you think we can get in creating intelligent systems without understanding the biological side, without understanding how the human brain creates intelligence? Put another way, do you think we can build a strong AI system without really getting at the core, understanding the functional nature of the brain?
Well, this is a really difficult question. We did solve problems like flying without really using too much of our knowledge about how birds fly. It was important, I guess, to know that you could have things heavier than air being able to fly, like birds. But beyond that, probably we did not learn very much. Well, some: the Wright brothers did learn a lot from observations of birds in designing their aircraft. But you can argue we did not use much of biology in that particular case.
Now, in the case of intelligence, I think it's a bit of a bet right now. If you ask, OK, we all agree we'll get at some point, maybe soon, maybe later, to a machine that is indistinguishable from my secretary, say, in terms of what I can ask the machine to do. I think we'll get there. And now the question is, you can ask people: do you think we'll get there without any knowledge about the human brain? Or is the best way to get there to understand the human brain better? OK, this is, I think, an educated bet that different people with different backgrounds will decide in different ways.
The recent history of progress in AI in the last, I would say, five or ten years has been that the main breakthroughs, the main recent breakthroughs, really start from neuroscience. I can mention reinforcement learning as one. It's one of the algorithms at the core of AlphaGo, which is the system that beat a kind of official world champion of Go, Lee Sedol, two or three years ago in Seoul. And that started really with the work of Pavlov in 1900, Marvin Minsky in the 60s, and many other neuroscientists. And deep learning started, which is at the core, again, of AlphaGo and of systems like the autonomous driving systems for cars, like the systems that Mobileye, which is a company started by one of my ex-postdocs, Amnon Shashua, did. So that is at the core of those things. And deep learning, really, the initial ideas in terms of the architecture of these layered hierarchical networks, started with the work of Torsten Wiesel and David Hubel at Harvard, up the river, in the 60s. So recent history suggests that neuroscience played a big role in these breakthroughs. My personal bet is that there is a good chance they continue to play a big role. Maybe not in all the future breakthroughs, but in some of them.

At least in inspiration.

At least in inspiration, absolutely, yes.
So you've studied both artificial and biological neural networks, and you've described the mechanisms that underlie deep learning and reinforcement learning. But there are nevertheless significant differences between biological and artificial neural networks as they stand now. So between the two, what do you find is the most interesting, mysterious, maybe even beautiful difference, as it currently stands in our understanding?
I must confess that, until recently, I found the artificial networks too simplistic relative to real neural networks. But recently I've been starting to think that, yes, there is a very big simplification of what you find in the brain. But on the other hand, they are much closer in terms of architecture to the brain than other models that we had, that computer science used as models of thinking, which were mathematical logic, Lisp, Prolog, and those kinds of things. So in comparison to those, they're much closer to the brain. You have networks of neurons, which is what the brain is about. And the artificial neurons in the models, as I said, are a caricature of the biological neurons. But they're still neurons: single units communicating with other units, something that is absent in the traditional computer-type models of mathematics, reasoning, and so on.
So what aspects would you like to see in artificial neural networks added over time, as we try to figure out ways to improve them?
So one of the main differences and problems, in terms of deep learning today, and it's not only deep learning, compared with the brain, is the need for deep learning techniques to have a lot of labeled examples. For instance, for ImageNet you have a training set of 1 million images, each one labeled by some human in terms of which object is there. And it's clear that in biology, a baby may be able to see millions of images in the first years of life, but will not have millions of labels given to him or her by parents. So how do you solve that? I think there is this interesting challenge: today, deep learning and related techniques are all about big data, big data meaning a lot of examples labeled by humans. In deep learning, it's n going to infinity that's best, n meaning labeled data. But I think the biological world is more n going to 1. A child can learn from a very small number of labeled examples. Like, you tell a child, this is a car. You don't need to say, like in ImageNet: this is a car, this is a car, this is not a car, this is not a car. And of course, with AlphaGo, or at least the AlphaZero variants, the world of Go is so simplistic that you can actually learn by yourself through self-play; you can play against yourself.
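The n going to 1 idea can be made concrete with a toy sketch (entirely hypothetical, not from the conversation): a classifier that stores a single labeled example per class, here as made-up 2-D feature vectors, and labels new inputs by proximity to that one example.

```python
import math

# Toy "n going to 1" learner: one labeled example per class
# (hypothetical 2-D feature vectors), classify new points by
# distance to the single stored example.
examples = {"car": (4.0, 2.0), "not_car": (0.0, 0.0)}

def classify(point):
    # nearest-example rule: pick the label whose lone example is closest
    return min(examples, key=lambda label: math.dist(point, examples[label]))

print(classify((3.5, 1.5)))  # "car": closest to the single car example
```

This is the opposite regime from ImageNet-style training, where the same decision boundary would be estimated from a very large number of labeled points.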
In the real world, the visual system that you've studied extensively is a lot more complicated than the game of Go. On the comment about children, who are fascinatingly good at learning new stuff: how much of it do you think is hardware, and how much of it is software?
Yeah, that's a good, deep question. In a sense, it's the old question of nurture and nature: how much is in the genes, and how much is in the experience of an individual. Obviously, both play a role. And I believe that the way evolution puts in prior information, so to speak hardwired, is not really hardwired. But that's essentially a hypothesis. I think what's going on is that evolution, if you believe in Darwin, is almost necessarily very opportunistic. Think about our DNA and the DNA of Drosophila, the fly, the fruit fly. Our DNA does not have many more genes than Drosophila's. Now, we know that the fruit fly does not learn very much during its individual existence. It looks like one of these machines that is really mostly, not 100%, but 95%, hardcoded by the genes. But since we don't have many more genes than Drosophila, evolution could only encode a general learning machinery and then give very weak priors. Like, for instance, let me give a specific example, which is recent work by a member of our Center for Brains, Minds, and Machines. We know, because of work of other people in our group and other groups, that there are cells in a part of our brain, neurons, that are tuned to faces. They seem to be involved in face recognition. Now, this face area seems to be present in young children. And one question is: is it there from the beginning? Is it hardwired by evolution? Or is it somehow learned very quickly?
So what's your... by the way, for a lot of the questions I'm asking, the answer is we don't really know. But as a person who has contributed some profound ideas in these fields, you're a good person to guess at some of these. So, of course, there's a caveat before a lot of this. But what is your hunch? Is the face, the part of the brain that seems to be concentrated on face recognition, are you born with that? Or is it just designed to learn that quickly, like the face of the mother and so on?
My hunch, my bias, was the second one: learned very quickly. And it turns out that Marge Livingstone at Harvard has done some amazing experiments in which she raised baby monkeys, depriving them of faces during the first weeks of life. So they see technicians, but the technicians have masks. And when they looked at the area in the brain of these monkeys where you usually find faces, they found no face preference. So my guess is that what evolution does in this case is that there is an area which is plastic, which is kind of predetermined to be imprinted very easily. But the command from the genes is not a detailed circuitry for a face template. It could be, but this would probably require a lot of bits: you'd have to specify a lot of connections among a lot of neurons. Instead, the command from the genes is something like: imprint, memorize what you see most often in the first two weeks of life, especially in connection with food, and maybe nipples, well, sources of food. And so that area is very plastic at first and then solidifies.
It'd be interesting if a variant of that experiment would show whether a different kind of pattern associated with food, rather than a face pattern, could stick.
There are indications that during that experiment, what the monkeys saw quite often were the blue gloves of the technicians who were giving the baby monkeys their milk. And some of the cells in that area, instead of being face-sensitive, are hand-sensitive.
That's fascinating. Can you talk about what the different parts of the brain are, in your view, sort of loosely, and how they contribute to intelligence? Do you see the brain as a bunch of different modules that together, in the human brain, create intelligence? Or is it all one mush of the same kind of fundamental architecture?
Yeah, that's an important question. There was a phase in neuroscience, back in the 1950s or so, in which it was believed for a while that the brain was equipotential; this was the term. You could cut out a piece, and nothing special happened, apart from a little bit less performance. There was a researcher, Lashley, who did a lot of experiments of this type with mice and rats and concluded that every part of the brain was essentially equivalent to any other one. It turns out that that's really not true. There are very specific modules in the brain, as you said. People may lose the ability to speak if they have a stroke in a certain region, or may lose control of their legs with a stroke in another region. So they're very specific. The brain is also quite flexible and redundant, so often it can correct things and take over functions from one part of the brain to another. But really, there are specific modules. So the answer we know comes from this old work, which was basically based on lesions, either in animals or, very often, from a mine of very interesting data coming from the war, from the different types of brain injuries that soldiers had. And more recently, functional MRI, which allows you to check which parts of the brain are active when you are doing different tasks, can replace some of this. You can see that certain parts of the brain are involved, are active, in certain tasks.

Vision, language...

Yeah, that's right.
But sort of taking a step back to that part of the brain that specializes in faces, and how that might be learned: what's your intuition behind it? Is it possible, from a physicist's perspective, when you get lower and lower, that it's all the same stuff, and it just, when you're born, is plastic and quickly figures out: this part is going to be about vision, this is going to be about language, this is about common-sense reasoning? Do you have an intuition that that kind of learning is going on really quickly, or is it really kind of solidified in hardware?
That's a great question. So there are parts of the brain, like the cerebellum or the hippocampus, that are quite different from each other. They clearly have different anatomy, different connectivity. Then there is the cortex, which is the most developed part of the brain in humans. And in the cortex, you have different regions of the cortex that are responsible for vision, for audition, for motor control, for language. Now, one of the big puzzles of this is that, in the cortex, the cortex is the cortex is the cortex. It looks like it is the same in terms of hardware, in terms of types of neurons and connectivity, across these different modalities. So for the cortex, setting aside these other parts of the brain, like the spinal cord, hippocampus, cerebellum, and so on, for the cortex, I think your question about hardware and software and learning and so on is rather open. And I find it very interesting to think about an architecture, a computer architecture, that is good for vision and at the same time is good for language. They seem to be such different problem areas that you have to solve.
But the underlying mechanism might be the same. And that's really instructive for artificial neural networks. So, we've done a lot of great work in vision, in human vision, in computer vision. And you've mentioned that the problem of human vision is really as difficult as the problem of general intelligence. And maybe that connects to the cortex discussion. Can you describe the human visual cortex, and how humans begin to understand the world through raw sensory information? For folks who are not familiar, especially on the computer vision side, we don't often actually take a step back, except saying with a sentence or two that one is inspired by the other. What is it that we know about the human visual cortex?
That's interesting. We know quite a bit; at the same time, we don't know a lot. But the bits we know, in a sense, we know a lot of the details, and many we don't know. We know a lot at the top level, the answers to the top-level questions, but we don't know some basic ones, even in terms of general neuroscience, forgetting vision. For instance, why do we sleep? It's such a basic question, and we really don't have an answer to that.
So, taking a step back on that: sleep, for example, is fascinating. Do you think that's a neuroscience question? Or, if we talk about abstractions, what do you think is an interesting way to study intelligence, or the most effective level of abstraction? Is it chemical, is it biological, is it electrophysical, mathematical, psychological, as you've done a lot of excellent work on that side? At which level of abstraction do you think?
Well, in terms of levels of abstraction, I think we need all of them. It's like if you ask me, what does it mean to understand a computer? That's much simpler. But in a computer, I could say, well, I understand how to use PowerPoint. That's my level of understanding a computer. It gives me some power to produce slides, and beautiful slides. Now, you can ask somebody else, and he says, well, I know how the transistors work that are inside the computer. I can write the equations for transistors and diodes and circuits, logical circuits. And I can ask this guy: do you know how to operate PowerPoint?
So if we discovered computers walking amongst us, full of these transistors, that are also running Windows and have PowerPoint, then, digging in a little bit more: how useful is it to understand the transistor in order to be able to understand PowerPoint and these higher-level intelligent processes?
So I think in the case of computers, because they were made by engineers, by us, these different levels of understanding are rather separate, on purpose. They are separate modules, so that the engineer who designed the circuit for the chips does not need to know what is inside PowerPoint, and somebody can write the software translating from one to the other. So in that case, I don't think understanding the transistor helps you understand PowerPoint, or only very little. If you want to understand the computer, this question, I would say you have to understand it at different levels, if you really want to build one, right? But for the brain, I think these levels of understanding, the algorithms, which kind of computation, the equivalent of PowerPoint, and the circuits, the transistors, I think they are much more intertwined with each other. There is not a neat level of software separate from the hardware. And so that's why I think, in the case of the brain, the problem is more difficult, and more than for computers requires the interaction, the collaboration, between different types of expertise.
The brain is a big hierarchical mess. You can't just disentangle levels.

I think you can, but it's much more difficult. And it's not completely obvious. And, as I said, I think it's one of the... personally, I think it is the greatest problem in science, so I think it's fair that it's difficult.
That's a difficult one. That said, you do talk about compositionality and why it might be useful. And when you discuss why these neural networks, in an artificial or biological sense, learn anything, you talk about compositionality. There's a sense that nature can be disentangled, or, well, that all aspects of our cognition could be disentangled to some degree. So, first of all, how do you see compositionality, and why do you think it exists at all in nature?
I spoke about, I used the term compositionality, when we looked at deep neural networks, multilayer networks, trying to understand when and why they are more powerful than more classical one-layer networks, like linear classifiers or so-called kernel machines. And what we found is that, in terms of approximating or learning or representing a function, a mapping from an input to an output, like from an image to the label of the image, if this function has a particular structure, then deep networks are much more powerful than shallow networks at approximating the underlying function. And the particular structure is a structure of compositionality: the function is made up of functions of functions, so that when you are interpreting an image, classifying an image, you don't need to look at all pixels at once, but you can compute something from small groups of pixels, and then you can compute something on the output of this local computation, and so on. Which is similar to what you do when you read a sentence. You don't need to read the first and the last letters together; you can read syllables, combine them into words, combine the words into sentences. So this is this kind of structure.
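The structure described here, computing on small groups of inputs and then on the outputs of those local computations, can be sketched as a toy (my illustration; the particular local function is arbitrary, not anything from Poggio's papers):

```python
# A compositional function: each layer combines neighboring pairs with a
# "local" computation, so no stage ever looks at all inputs at once.
# Assumes the input length is a power of two.

def local(a, b):
    # an arbitrary computation on a small group of inputs
    return (a + b) ** 2

def compositional(x):
    while len(x) > 1:
        # one "layer": apply the local rule to adjacent pairs
        x = [local(x[i], x[i + 1]) for i in range(0, len(x), 2)]
    return x[0]

print(compositional([1, 2, 3, 4]))  # ((1+2)**2 + (3+4)**2)**2 = 3364
```

A deep network mirrors this hierarchy layer by layer, whereas a shallow model would have to represent the whole input-to-output map in a single stage.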
So that's part of a discussion of why deep neural networks may be more effective than shallow methods. And is your sense, for most things we can use neural networks for, that those problems are going to be compositional in nature, like language? How far can we get in this kind of way?
So here we are almost at philosophy.

Well, let's go there.

Yeah, let's go there. So a friend of mine, Max Tegmark, who is a physicist at MIT...

I've talked to him on this podcast.

Yeah, and he disagrees with you, right?

We agree on most things, but the conclusion is a bit different.
His conclusion is that for images, for instance, the compositional structure of this function that we have to learn, to solve these problems, comes from physics, comes from the fact that you have local interactions in physics: between atoms and other atoms, between particles of matter and other particles, between planets and other planets, between stars and other stars. But you could push this argument a bit further. Not this argument, actually; you could argue that maybe that's part of the truth, but maybe what happens is kind of the opposite: our brain is wired up as a deep network. So it can learn, understand, and solve problems that have this compositional structure, and it cannot solve problems that don't have this compositional structure. So the problems we are accustomed to, that we think about, that we test our algorithms on, have this compositional structure because that is how our brain is made up.

And that's, in a sense, an evolutionary perspective. So the ones that weren't dealing with the compositional nature of reality died off?
Yes. But it could also be, maybe, the reason why we have this local connectivity in the brain, like simple cells in cortex looking only at a small part of the image, each one of them, and then other cells looking at a small number of these simple cells, and so on. The reason for this may be purely that it was difficult to grow long-range connectivity. Suppose that, for biology, it's possible to grow short-range connectivity but not long-range, also because there is a limited number of long-range connections you can have. So you have this limitation from the biology, and this means you build something like a deep convolutional network. And this is great for solving certain classes of problems: these are the ones we find easy and important for our lives. And yes, they were enough for us to survive.
And you can start a successful business solving those problems, like Mobileye did. Driving is a compositional problem.
So on the learning task, we don't
link |
know much about how the brain learns
link |
in terms of optimization.
link |
So stochastic gradient descent
link |
is what artificial neural networks use, for the most part,
link |
to adjust the parameters in such a way that,
link |
based on the labeled data,
link |
they're able to solve the problem.
link |
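As a minimal sketch of the stochastic gradient descent update being described here — the one-parameter model, toy data, and learning rate are illustrative assumptions, not anything from the conversation:

```python
import random

# Hypothetical toy data for the model y = w * x, with true w = 3.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

w = 0.0   # initial parameter
lr = 0.1  # learning rate
for step in range(1000):
    x, y = random.choice(data)    # "stochastic": one random labeled example
    grad = 2 * (w * x - y) * x    # gradient of the squared error (w*x - y)**2
    w -= lr * grad                # step against the gradient

print(round(w, 2))  # converges to 3.0
```

Each step looks at a single randomly chosen labeled example, which is what makes the algorithm "stochastic" and so cheap per update.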
So what's your intuition about why it works at all?
link |
How hard of a problem it is to optimize
link |
a neural network, artificial neural network?
link |
Are there alternatives?
link |
Just in general, your intuition is
link |
behind this very simplistic algorithm
link |
that seems to do pretty good, surprisingly so.
link |
So I find neuroscience, the architecture of cortex,
link |
is really similar to the architecture of deep networks.
link |
So there is a nice correspondence there
link |
between the biology and this kind
link |
of local connectivity, hierarchical architecture.
link |
The stochastic gradient descent, as you said,
link |
is a very simple technique.
link |
It seems pretty unlikely that biology could do that
link |
from what we know right now about cortex and neurons.
link |
So it's a big open question whether there
link |
are other optimization learning algorithms that
link |
can replace stochastic gradient descent.
link |
And my guess is yes, but nobody has found yet a real answer.
link |
I mean, people are trying, still trying,
link |
and there are some interesting ideas.
link |
The fact that stochastic gradient descent
link |
is so successful, this has become clearer, not so mysterious.
link |
And the reason is an interesting fact.
link |
It's a change, in a sense, in how
link |
people think about statistics.
link |
And this is the following, is that typically when
link |
you had data and you had, say, a model with parameters,
link |
you are trying to fit the model to the data,
link |
to fit the parameter.
link |
Typically, the kind of crowd wisdom type idea
link |
was you should have at least twice the number of data
link |
than the number of parameters.
link |
Maybe 10 times is better.
link |
Now, the way you train neural networks these days
link |
is that they have 10 or 100 times more parameters
link |
than data, exactly the opposite.
link |
And it has been one of the puzzles about neural networks.
link |
How can you get something that really works
link |
when you have so much freedom?
link |
From that little data, it can generalize somehow.
link |
Do you think the stochastic nature of it
link |
is essential, the randomness?
link |
So I think we have some initial understanding of why this works.
link |
But one nice side effect of having
link |
this overparameterization, more parameters than data,
link |
is that when you look for the minima of a loss function,
link |
like stochastic gradient descent is doing,
link |
you find, I made some calculations based
link |
on an old basic theorem of algebra called Bezout's
link |
theorem, which gives you an estimate of the number
link |
of solutions of a system of polynomial equations.
link |
Anyway, the bottom line is that there are probably
link |
more minima for a typical deep networks
link |
than atoms in the universe.
link |
Just to say, there are a lot because
link |
of the overparameterization.
link |
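A hedged illustration of the Bezout-style counting mentioned here: the bound is the product of the equations' degrees, and it grows very fast. The degree-3, 30-equation system below is a made-up stand-in, not an actual network's critical-point equations:

```python
from math import prod

# Bezout's theorem bounds the number of isolated solutions of a system of
# n polynomial equations in n unknowns by the product of their degrees.
def bezout_bound(degrees):
    return prod(degrees)

print(bezout_bound([2, 2]))    # 4: two conics can intersect in 4 points
print(bezout_bound([3] * 30))  # 205891132094649: already ~2 x 10**14
```

For systems the size of a deep network's stationarity conditions, such products quickly exceed any physical count, which is the spirit of the "more minima than atoms" remark.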
Global minima? Zero-loss minima, good minima?
link |
Global minima, yes.
link |
Yeah, a lot of them.
link |
So you have a lot of solutions.
link |
So it's not so surprising that you can find them
link |
relatively easily.
link |
And this is because of the overparameterization.
link |
The overparameterization sprinkles that entire space
link |
with solutions that are pretty good.
link |
It's not so surprising, right?
link |
It's like if you have a system of linear equations
link |
and you have more unknowns than equations, then,
link |
as we know, you have an infinite number of solutions.
link |
And the question is to pick one.
link |
That's another story.
link |
But you have an infinite number of solutions.
link |
So there are a lot of values of your unknowns
link |
that satisfy the equations.
link |
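A tiny sketch of the underdetermined-system point, using the hypothetical equation x + y = 2 (one equation, two unknowns):

```python
# One equation, two unknowns: x + y = 2 -- more unknowns than equations.
# Every point (t, 2 - t) on the line is a solution: infinitely many.
def satisfies(x, y):
    return abs(x + y - 2.0) < 1e-12

solutions = [(t, 2.0 - t) for t in (-3.0, 0.0, 1.0, 5.0)]
print(all(satisfies(x, y) for x, y in solutions))  # True

# Picking "one" is a separate choice; e.g. the minimum-norm solution
# minimizes x**2 + y**2 along the line, which lands on (1, 1).
min_norm = min(((t, 2.0 - t) for t in [i / 100 for i in range(-300, 501)]),
               key=lambda p: p[0] ** 2 + p[1] ** 2)
print(min_norm)  # (1.0, 1.0)
```

The grid search here is just a transparent way to exhibit one particular solution; any solver that picks a solution is making the same kind of choice.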
But it's possible that a lot of those solutions aren't very good.
link |
What's surprising is that they're pretty good.
link |
So that's a good question.
link |
Why can you pick one that generalizes well?
link |
That's a separate question with separate answers.
link |
One theorem that people like to talk about that kind of
link |
inspires imagination of the power of neural networks
link |
is the universality, universal approximation theorem,
link |
that you can approximate any computable function
link |
with just a finite number of neurons
link |
in a single hidden layer.
link |
Do you find this theorem surprising?
link |
Do you find it useful, interesting, inspiring?
link |
No, this one, I never found it very surprising.
link |
It was known since the 80s, since I entered the field,
link |
because it's basically the same as Weierstrass theorem, which
link |
says that I can approximate any continuous function
link |
with a polynomial with a sufficient
link |
number of terms, monomials.
link |
So basically the same.
link |
And the proofs are very similar.
link |
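In that spirit, a small hedged example of approximating a continuous function with more and more monomial terms — here the Taylor polynomial of cosine stands in for the constructive approximation in the theorem:

```python
import math

# Approximate cos on [-1, 1] with a growing number of monomial terms
# (its Taylor polynomial): the more terms, the smaller the worst error.
def poly_cos(x, n_terms):
    return sum((-1) ** k * x ** (2 * k) / math.factorial(2 * k)
               for k in range(n_terms))

xs = [i / 50 - 1 for i in range(101)]  # grid on [-1, 1]
for n in (1, 2, 4):
    err = max(abs(poly_cos(x, n) - math.cos(x)) for x in xs)
    print(n, round(err, 6))  # worst-case error shrinks as terms are added
```

With four terms the worst-case error on this interval is already below 1e-4, which is the "sufficient number of terms" idea in miniature.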
So your intuition was there was never
link |
any doubt that neural networks in theory
link |
could be very strong approximators.
link |
The question, the interesting question,
link |
is that if this theorem says you can approximate, fine.
link |
But when you ask how many neurons, for instance,
link |
or in the case of polynomial, how many monomials,
link |
do I need to get a good approximation.
link |
Then it turns out that that depends
link |
on the dimensionality of your function,
link |
how many variables you have.
link |
But it depends on the dimensionality
link |
of your function in a bad way.
link |
It's, for instance, suppose you want
link |
an error which is no worse than 10% in your approximation.
link |
You come up with a network that approximates your function within that error.
link |
Then it turns out that the number of units you need
link |
are in the order of 10 to the dimensionality, d,
link |
how many variables.
link |
So if you have two variables, d is two,
link |
you need 100 units, and OK.
link |
But if you have, say, 200 by 200 pixel images,
link |
now this is 40,000, whatever.
link |
We again go to the size of the universe pretty quickly.
link |
Exactly, 10 to the 40,000 or something.
link |
And so this is called the curse of dimensionality,
link |
not quite appropriately.
link |
And the hope is with the extra layers,
link |
you can remove the curse.
link |
What we proved is that if you have deep layers,
link |
hierarchical architecture with the local connectivity
link |
of the type of convolutional deep learning,
link |
and if you're dealing with a function that
link |
has this kind of hierarchical architecture,
link |
then you avoid completely the curse.
link |
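A back-of-the-envelope sketch of the two scalings contrasted above; the 10% accuracy target and the binary-tree composition are illustrative assumptions (the precise statements are in the papers on why deep networks avoid the curse):

```python
# Approximating a generic d-variable function to accuracy eps needs on the
# order of (1/eps)**d units: the curse of dimensionality.
def generic_units(d, eps=0.1):
    return (1 / eps) ** d

# If the function is compositional -- e.g. a binary tree of d - 1
# two-variable constituent functions -- each piece needs only about
# (1/eps)**2 units, so the total grows linearly in d, not exponentially.
def compositional_units(d, eps=0.1):
    return (d - 1) * (1 / eps) ** 2

print(generic_units(2))        # 100.0
print(generic_units(8))        # 100000000.0
print(compositional_units(8))  # 700.0

# For a 200x200 image, d = 40,000: 10**40000 has 40,001 digits,
# dwarfing the ~10**80 atoms in the observable universe.
print(len(str(10 ** 40000)))   # 40001
```

The point is not the exact constants but the shape of the growth: exponential in d for generic functions, linear in d for hierarchically compositional ones.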
You've spoken a lot about supervised deep learning.
link |
What are your thoughts, hopes, views
link |
on the challenges of unsupervised learning
link |
with GANs, with Generative Adversarial Networks?
link |
Do you see those as distinct?
link |
The power of GANs, do you see those
link |
as distinct from supervised methods in neural networks,
link |
or are they really all in the same representation ballpark?
link |
GANs is one way to get estimation of probability
link |
densities, which is a somewhat new way that people have not used before.
link |
I don't know whether this will really play an important role
link |
Or it's interesting.
link |
I'm less enthusiastic about it than many people in the field.
link |
I have the feeling that many people in the field
link |
are really impressed by the ability
link |
of producing realistic looking images in this generative way.
link |
Which explains the popularity of the methods.
link |
But you're saying that while that's exciting and cool
link |
to look at, it may not be the most useful tool for that problem.
link |
So you describe it kind of beautifully.
link |
Current supervised methods need N going to infinity
link |
in terms of the number of labeled points.
link |
And we really have to figure out how to get N down to 1.
link |
And you're thinking GANs might help,
link |
but they might not be the right.
link |
I don't think for that problem, which I really think
link |
is important, I think they may help.
link |
They certainly have applications,
link |
for instance, in computer graphics.
link |
And I did work long ago, which was
link |
a little bit similar in terms of saying, OK, I have a network.
link |
And I present images.
link |
The input is images,
link |
and the output is, for instance, the pose of the image:
link |
a face, how much it is smiling, whether it is rotated 45 degrees or not.
link |
What about having a network that I train with the same data
link |
set, but now I invert input and output.
link |
Now the input is the pose or the expression, a number,
link |
And the output is the image.
link |
And we did pretty good, interesting results
link |
in terms of producing very realistic looking images.
link |
It was a less sophisticated mechanism than GANs,
link |
but the output was
link |
pretty much of the same quality.
link |
So I think for a computer graphics type application,
link |
yeah, definitely GANs can be quite useful.
link |
And not only for that, but for helping,
link |
for instance, on this problem of unsupervised example
link |
of reducing the number of labeled examples.
link |
I think people, it's like they think they can get out
link |
more than they put in.
link |
There's no free lunch, as you said.
link |
What do you think, what's your intuition?
link |
How can we slow the growth
link |
of N to infinity in supervised learning?
link |
So for example, Mobileye has very successfully,
link |
I mean, essentially annotated large amounts of data
link |
to be able to drive a car.
link |
Now one thought is, so we're trying
link |
to teach machines, school of AI.
link |
And we're trying to, so how can we become better teachers?
link |
Because again, one caricature of the history of computer
link |
science, you could say, begins with programmers, expensive.
link |
Continues with labelers, cheap.
link |
And the future will be schools, like we have for kids.
link |
Currently, the labeling methods are not
link |
selective about which examples we teach networks with.
link |
So I think the focus of making networks that learn much faster
link |
is often on the architecture side.
link |
But how can we pick better examples with which to learn?
link |
Do you have intuitions about that?
link |
Well, that's part of the problem.
link |
But the other one is, if we look at biology,
link |
a reasonable assumption, I think,
link |
is in the same spirit that I said,
link |
evolution is opportunistic and has weak priors.
link |
The way I think the intelligence of a child,
link |
the baby may develop is by bootstrapping weak priors
link |
For instance, you can assume that you
link |
have in most organisms, including human babies,
link |
built in some basic machinery to detect motion
link |
and relative motion.
link |
And in fact, we know all insects from fruit flies
link |
to other animals, they have this,
link |
even in the retinas, in the very peripheral part.
link |
It's very conserved across species, something
link |
that evolution discovered early.
link |
It may be the reason why babies tend
link |
to look in the first few days to moving objects
link |
and not to not moving objects.
link |
Now, moving objects means, OK, they're attracted by motion.
link |
But motion also means that motion
link |
gives automatic segmentation from the background.
link |
So because of motion boundaries, either the object
link |
is moving or the eye of the baby is tracking the moving object
link |
and the background is moving, right?
link |
Yeah, so just purely on the visual characteristics
link |
of the scene, that seems to be the most useful.
link |
Right, so it's like looking at an object without background.
link |
It's ideal for learning the object.
link |
Otherwise, it's really difficult because you
link |
have so much stuff.
link |
So suppose you do this at the beginning, first weeks.
link |
Then after that, you can recognize object.
link |
Now they are imprinted, number one,
link |
even in the background, even without motion.
link |
So that's, by the way, I just want
link |
to ask on the object recognition problem.
link |
So there is this being responsive to movement
link |
and doing edge detection, essentially.
link |
What's the gap between being effective at visually
link |
recognizing stuff, detecting where it is,
link |
and understanding the scene?
link |
Is this a huge gap in many layers, or is it close?
link |
No, I think that's a huge gap.
link |
I think present algorithm with all the success that we have
link |
and the fact that there are a lot of very useful,
link |
I think we are in a golden age for applications
link |
of low level vision and low level speech recognition
link |
and so on, Alexa and so on.
link |
There are many more things of similar level
link |
to be done, including medical diagnosis and so on.
link |
But we are far from what we call understanding
link |
of a scene, of language, of actions, of people.
link |
That is, despite the claims, that's, I think, very far.
link |
We're a little bit off.
link |
So in popular culture and among many researchers,
link |
some of which I've spoken with, the Stuart Russell
link |
and Elon Musk, in and out of the AI field,
link |
there's a concern about the existential threat of AI.
link |
And how do you think about this concern?
link |
And is it valuable to think about large scale, long term,
link |
unintended consequences of intelligent systems
link |
I always think it's better to worry first, early,
link |
I'm not against worrying at all.
link |
Personally, I think that it will take a long time
link |
before there is real reason to be worried.
link |
But as I said, I think it's good to put in place
link |
and think about possible safeguards.
link |
What I find a bit misleading are things
link |
like that have been said by people I know, like Elon Musk,
link |
and what is Bostrom in particular,
link |
and what is his first name?
link |
Nick Bostrom, right.
link |
And a couple of other people that, for instance, AI
link |
is more dangerous than nuclear weapons.
link |
I think that's really wrong.
link |
That can be misleading.
link |
Because in terms of priority, we should still
link |
be more worried about nuclear weapons
link |
and what people are doing about it and so on than AI.
link |
And you've spoken about Demis Hassabis
link |
and yourself saying that you think
link |
you'll be about 100 years out before we
link |
have a general intelligence system that's
link |
on par with a human being.
link |
Do you have any updates for those predictions?
link |
Well, I think he said.
link |
He said 20, I think.
link |
He said 20, right.
link |
This was a couple of years ago.
link |
I have not asked him again.
link |
Your own prediction, what's your prediction
link |
about when you'll be truly surprised?
link |
And what's the confidence interval on that?
link |
It's so difficult to predict the future and even
link |
the present sometimes.
link |
It's pretty hard to predict.
link |
But I would be, as I said, this is completely,
link |
I would be more like Rod Brooks.
link |
I think he's about 200 years.
link |
When we have this kind of AGI system,
link |
artificial general intelligence system,
link |
you're sitting in a room with her, him, it.
link |
Do you think the underlying design of such a system
link |
is something we'll be able to understand?
link |
It will be simple?
link |
Do you think it'll be explainable,
link |
understandable by us?
link |
Your intuition, again, we're in the realm of philosophy
link |
Well, probably no.
link |
But again, it depends what you really
link |
mean for understanding.
link |
So I think we don't understand how deep networks work.
link |
I think we are beginning to have a theory now.
link |
But in the case of deep networks,
link |
or even in the case of the simpler kernel machines
link |
or linear classifier, we really don't understand
link |
the individual units or so.
link |
But we understand what the computation and the limitations
link |
and the properties of it are.
link |
It's similar to many things.
link |
What does it mean to understand how a fusion bomb works?
link |
How many of us understand the basic principle?
link |
And some of us may understand deeper details.
link |
In that sense, understanding is, as a community,
link |
as a civilization, can we build another copy of it?
link |
And in that sense, do you think there
link |
will need to be some evolutionary component where
link |
it runs away from our understanding?
link |
Or do you think it could be engineered from the ground up,
link |
the same way you go from the transistor to PowerPoint?
link |
So many years ago, this was actually 40, 41 years ago,
link |
I wrote a paper with David Marr, who
link |
was one of the founding fathers of computer vision,
link |
computational vision.
link |
I wrote a paper about levels of understanding,
link |
which is related to the question we discussed earlier
link |
about understanding PowerPoint, understanding transistors,
link |
And in that kind of framework, we
link |
had the level of the hardware and the top level
link |
of the algorithms.
link |
We did not have learning.
link |
Recently, I updated adding levels.
link |
And one level I added to those three was learning.
link |
And you can imagine, you could have a good understanding
link |
of how you construct a learning machine, like we do.
link |
But being unable to describe in detail what the learning
link |
machines will discover, right?
link |
Now, that would be still a powerful understanding,
link |
if I can build a learning machine,
link |
even if I don't understand in detail every time it learns something.
link |
Just like our children, if they start
link |
listening to a certain type of music,
link |
I don't know, Miley Cyrus or something,
link |
you don't understand why they came
link |
to that particular preference.
link |
But you understand the learning process.
link |
That's very interesting.
link |
So on learning for systems to be part of our world,
link |
it has a certain, one of the challenging things
link |
that you've spoken about is learning ethics, learning
link |
And how hard do you think is the problem of, first of all,
link |
humans understanding our ethics?
link |
What is the origin of ethics at the neural, low level?
link |
What is it at the higher level?
link |
Is it something that's learnable from machines
link |
in your intuition?
link |
I think, yeah, ethics is learnable, very likely.
link |
I think it's one of these problems where
link |
I think understanding the neuroscience of ethics,
link |
people discuss there is an ethics of neuroscience.
link |
How a neuroscientist should or should not behave.
link |
You can think of a neurosurgeon and the ethics
link |
rules he or she has to follow.
link |
But I'm more interested on the neuroscience of ethics.
link |
You're blowing my mind right now.
link |
The neuroscience of ethics is very meta.
link |
Yeah, and I think that would be important to understand also
link |
for being able to design machines that
link |
are ethical machines in our sense of ethics.
link |
And you think there is something in neuroscience,
link |
there's patterns, tools in neuroscience
link |
that could help us shed some light on ethics?
link |
Or is it mostly in psychology or sociology,
link |
at a higher level?
link |
No, there is psychology.
link |
But there is also, in the meantime,
link |
there is evidence, fMRI, of specific areas of the brain
link |
that are involved in certain ethical judgment.
link |
And not only this, you can stimulate those areas
link |
with magnetic fields and change the ethical decisions.
link |
So that's work by a colleague of mine, Rebecca Saxe.
link |
And there is other researchers doing similar work.
link |
And I think this is the beginning.
link |
But ideally, at some point, we'll
link |
have an understanding of how this works.
link |
And why it evolved, right?
link |
The big why question.
link |
Yeah, it must have some purpose.
link |
Yeah, obviously it has some social purposes, probably.
link |
If neuroscience holds the key to at least illuminate
link |
some aspect of ethics, that means
link |
it could be a learnable problem.
link |
And as we're getting into harder and harder questions,
link |
let's go to the hard problem of consciousness.
link |
Is this an important problem for us
link |
to think about and solve on the engineering of intelligence
link |
side of your work, of our dream?
link |
So again, this is a deep problem,
link |
partly because it's very difficult to define
link |
And there is a debate among neuroscientists
link |
about whether consciousness and philosophers, of course,
link |
whether consciousness is something that requires
link |
flesh and blood, so to speak.
link |
Or could be that we could have silicon devices that
link |
are conscious, or up to statements
link |
like everything has some degree of consciousness
link |
and some more than others.
link |
This is like Giulio Tononi and phi.
link |
We just recently talked to Christoph Koch.
link |
Christoph was my first graduate student.
link |
Do you think it's important to illuminate
link |
aspects of consciousness in order
link |
to engineer intelligence systems?
link |
Do you think an intelligent system would ultimately
link |
have consciousness?
link |
Are they interlinked?
link |
Most of the people working in artificial intelligence,
link |
I think, would answer, we don't strictly
link |
need consciousness to have an intelligent system.
link |
That's sort of the easier question,
link |
because it's a very engineering answer to the question.
link |
Pass the Turing test, we don't need consciousness.
link |
But if you were to go, do you think
link |
it's possible that we need to have
link |
that kind of self awareness?
link |
So for instance, I personally think
link |
that when we test a machine or a person in a Turing test,
link |
in an extended Turing test, I think
link |
consciousness is part of what we require in that test,
link |
implicitly, to say that this is intelligent.
link |
Christoph disagrees.
link |
Despite many other romantic notions he holds,
link |
he disagrees with that one.
link |
Yes, that's right.
link |
Do you think, as a quick question,
link |
Ernest Becker's fear of death, do you
link |
think mortality and those kinds of things
link |
are important for consciousness and for intelligence?
link |
The finiteness of life, finiteness of existence,
link |
or is that just a side effect of evolution,
link |
evolutionary side effect that's useful for natural selection?
link |
Do you think this kind of thing that this interview is
link |
going to run out of time soon, our life
link |
will run out of time soon, do you
link |
think that's needed to make this conversation good and life good?
link |
I never thought about it.
link |
It's a very interesting question.
link |
I think Steve Jobs, in his commencement speech
link |
at Stanford, argued that having a finite life
link |
was important for stimulating achievements.
link |
So it was different.
link |
Yeah, live every day like it's your last, right?
link |
So rationally, I don't think strictly you need mortality
link |
for consciousness.
link |
They seem to go together in our biological system, right?
link |
You've mentioned before that students of yours are associated with
link |
AlphaGo and Mobileye, the big recent success stories in AI.
link |
And I think it's captivated the entire world of what AI can do.
link |
So what do you think will be the next breakthrough?
link |
And what's your intuition about the next breakthrough?
link |
Of course, I don't know where the next breakthrough is.
link |
I think that there is a good chance, as I said before,
link |
that the next breakthrough will also
link |
be inspired by neuroscience.
link |
But which one, I don't know.
link |
And there's, so MIT has this quest for intelligence.
link |
And there's a few moon shots, which in that spirit,
link |
which ones are you excited about?
link |
Which projects kind of?
link |
Well, of course, I'm excited about one
link |
of the moon shots, which is our Center for Brains, Minds,
link |
and Machines, which is the one which is fully funded by NSF.
link |
And it is about visual intelligence.
link |
And that one is particularly about understanding.
link |
Visual intelligence, so the visual cortex,
link |
and visual intelligence in the sense
link |
of how we look around ourselves and understand
link |
the world around ourselves, meaning what is going on,
link |
how we could go from here to there without hitting
link |
obstacles, whether there are other agents,
link |
people in the environment.
link |
These are all things that we perceive very quickly.
link |
And it's something actually quite close to being conscious,
link |
But there is this interesting experiment
link |
that was run at Google X, which is in a sense
link |
is just a virtual reality experiment,
link |
but in which they had a subject sitting, say,
link |
in a chair with goggles, like Oculus and so on, earphones.
link |
And they were seeing through the eyes of a robot
link |
nearby, two cameras, microphones for receiving sound.
link |
So their sensory system was there.
link |
And the impression of all the subject, very strong,
link |
they could not shake it off, was that they
link |
were where the robot was.
link |
They could look at themselves from the robot
link |
and still feel they were where the robot is.
link |
They were looking at their body.
link |
Their self had moved.
link |
So some aspect of scene understanding
link |
has to have ability to place yourself,
link |
have a self awareness about your position in the world
link |
and what the world is.
link |
So we may have to solve the hard problem of consciousness
link |
Along the way, yes.
link |
It's quite a moonshot.
link |
So you've been an advisor to some incredible minds,
link |
including Demis Hassabis, Christoph Koch, Amnon Shashua,
link |
All went on to become seminal figures
link |
in their respective fields.
link |
From your own success as a researcher
link |
and from perspective as a mentor of these researchers,
link |
having guided them in the way of advice,
link |
what does it take to be successful in science
link |
and engineering careers?
link |
Whether you're talking to somebody in their teens,
link |
20s, and 30s, what does that path look like?
link |
It's curiosity and having fun.
link |
And I think it's important also having
link |
fun with other curious minds.
link |
It's the people you surround with too,
link |
so fun and curiosity.
link |
Is there, you mentioned Steve Jobs,
link |
is there also an underlying ambition
link |
that's unique that you saw?
link |
Or does it really boil down
link |
to insatiable curiosity and fun?
link |
Well of course, it's being curious
link |
in an active and ambitious way, yes.
link |
But I think sometime in science,
link |
there are friends of mine who are like this.
link |
There are some of the scientists
link |
like to work by themselves
link |
and kind of communicate only when they complete their work
link |
or discover something.
link |
I think I always found the actual process
link |
of discovering something is more fun
link |
if it's together with other intelligent
link |
and curious and fun people.
link |
So if you see the fun in that process,
link |
the side effect of that process
link |
will be that you'll actually end up
link |
discovering some interesting things.
link |
So as you've led many incredible efforts here,
link |
what's the secret to being a good advisor,
link |
mentor, leader in a research setting?
link |
Is it a similar spirit?
link |
Or yeah, what advice could you give
link |
to people, young faculty and so on?
link |
It's partly repeating what I said
link |
about an environment that should be friendly
link |
and fun and ambitious.
link |
And I think I learned a lot
link |
from some of my advisors and friends
link |
and some who are physicists.
link |
And there was, for instance,
link |
this behavior that was encouraged
link |
of when somebody comes with a new idea in the group,
link |
unless it's really stupid,
link |
you are always enthusiastic.
link |
And you're enthusiastic for a few minutes.
link |
Then you start asking, critically, a few questions.
link |
But this is a process that is,
link |
I think it's very good.
link |
You have to be enthusiastic.
link |
Sometimes people are very critical from the beginning.
link |
Yes, you have to give it a chance
link |
for that seed to grow.
link |
That said, with some of your ideas,
link |
which are quite revolutionary,
link |
so, as I've witnessed, especially on the human vision side
link |
and neuroscience side,
link |
there could be some pretty heated arguments.
link |
Do you enjoy these?
link |
Is that a part of science and academic pursuits
link |
Is that something that happens in your group as well?
link |
I also spent some time in Germany.
link |
Again, there is this tradition
link |
in which people are more forthright,
link |
less kind than here.
link |
So in the U.S., when you write a bad letter,
link |
you still say, this guy's nice.
link |
Yeah, here in America, it's degrees of nice.
link |
It's all just degrees of nice, yeah.
link |
So as long as this does not become personal,
link |
and it's really like a football game
link |
with these rules, that's great.
link |
So if you somehow found yourself in a position
link |
to ask one question of an oracle,
link |
like a genie, maybe a god,
link |
and you're guaranteed to get a clear answer,
link |
what kind of question would you ask?
link |
What would be the question you would ask?
link |
In the spirit of our discussion,
link |
it could be, how could I become 10 times more intelligent?
link |
And so, but see, you only get a clear short answer.
link |
So do you think there's a clear short answer to that?
link |
And that's the answer you'll get.
link |
Okay, so you've mentioned Flowers of Algernon.
link |
As a story that inspires you in your childhood,
link |
as this story of a mouse and a
link |
human achieving genius level intelligence,
link |
and then understanding what was happening
link |
while slowly becoming not intelligent again,
link |
and this tragedy of gaining intelligence
link |
and losing intelligence,
link |
do you think in that spirit, in that story,
link |
do you think intelligence is a gift or a curse
link |
from the perspective of happiness and meaning of life?
link |
You try to create an intelligent system
link |
that understands the universe,
link |
but on an individual level, the meaning of life,
link |
do you think intelligence is a gift?
link |
It's a good question.
link |
As one of the people considered among
link |
the smartest people in the world,
link |
in some dimension, at the very least, what do you think?
link |
I don't know, it may be invariant to intelligence,
link |
that degree of happiness.
link |
It would be nice if it were.
link |
You could be smart and happy and clueless and happy.
link |
As always, on the discussion of the meaning of life,
link |
it's probably a good place to end.
link |
Tommaso, thank you so much for talking today.
link |
Thank you, this was great.