
Noam Chomsky: Language, Cognition, and Deep Learning | Lex Fridman Podcast #53



link |
00:00:00.000
The following is a conversation with Noam Chomsky.
link |
00:00:03.800
He's truly one of the great minds of our time
link |
00:00:06.760
and is one of the most cited scholars
link |
00:00:08.400
in the history of our civilization.
link |
00:00:10.840
He has spent over 60 years at MIT
link |
00:00:13.400
and recently also joined the University of Arizona
link |
00:00:16.280
where we met for this conversation.
link |
00:00:18.600
But it was at MIT about four and a half years ago
link |
00:00:21.760
when I first met Noam.
link |
00:00:23.400
In my first few days there,
link |
00:00:24.720
I remember getting into an elevator at the Stata Center,
link |
00:00:27.400
pressing the button for whatever floor,
link |
00:00:29.480
looking up and realizing it was just me
link |
00:00:32.080
and Noam Chomsky riding the elevator.
link |
00:00:35.480
Just me and one of the seminal figures of linguistics,
link |
00:00:38.400
cognitive science, philosophy,
link |
00:00:40.000
and political thought in the past century, if not ever.
link |
00:00:43.920
I tell that silly story because I think
link |
00:00:46.600
life is made up of funny little defining moments
link |
00:00:49.200
that you never forget for reasons
link |
00:00:51.520
that may be too poetic to try and explain.
link |
00:00:54.880
That was one of mine.
link |
00:00:57.320
Noam has been an inspiration to me
link |
00:00:59.320
and millions of others.
link |
00:01:00.920
It was truly an honor for me
link |
00:01:02.560
to sit down with him in Arizona.
link |
00:01:04.600
I traveled there just for this conversation.
link |
00:01:07.480
And in a rare heartbreaking moment,
link |
00:01:10.120
after everything was set up and tested,
link |
00:01:12.680
the camera was moved and accidentally
link |
00:01:14.480
the recording button was pressed,
link |
00:01:15.920
stopping the recording.
link |
00:01:18.520
So I have good audio of both of us,
link |
00:01:20.360
but no video of Noam.
link |
00:01:22.160
Just the video of me and my sleep-deprived
link |
00:01:25.120
but excited face that I get to keep
link |
00:01:27.960
as a reminder of my failures.
link |
00:01:30.480
Most people just listen to this audio version
link |
00:01:32.440
of the podcast as opposed to watching it on YouTube.
link |
00:01:35.760
But still, it's heartbreaking for me.
link |
00:01:39.000
I hope you understand and still enjoy
link |
00:01:40.960
this conversation as much as I did.
link |
00:01:43.120
The depth of intellect that Noam showed
link |
00:01:45.320
and his willingness to truly listen to me,
link |
00:01:48.320
a silly-looking Russian in a suit.
link |
00:01:51.160
It was humbling and something I'm deeply grateful for.
link |
00:01:55.560
As some of you know,
link |
00:01:56.840
this podcast is a side project for me,
link |
00:01:59.640
where my main journey and dream is to build AI systems
link |
00:02:03.600
that do some good for the world.
link |
00:02:05.480
This latter effort takes up most of my time,
link |
00:02:07.840
but for the moment has been mostly private.
link |
00:02:10.560
But the former, the podcast,
link |
00:02:12.840
is something I put my heart and soul into.
link |
00:02:15.400
And I hope you feel that, even when I screw things up.
link |
00:02:19.560
I recently started doing ads
link |
00:02:21.160
at the end of the introduction.
link |
00:02:22.880
I'll do one or two minutes after introducing the episode
link |
00:02:25.680
and never any ads in the middle
link |
00:02:27.440
that break the flow of the conversation.
link |
00:02:29.760
I hope that works for you
link |
00:02:31.200
and doesn't hurt the listening experience.
link |
00:02:34.000
This is the Artificial Intelligence Podcast.
link |
00:02:37.240
If you enjoy it, subscribe on YouTube,
link |
00:02:39.840
give it five stars on Apple Podcast,
link |
00:02:41.880
support it on Patreon,
link |
00:02:43.280
or simply connect with me on Twitter.
link |
00:02:45.440
at Lex Fridman, spelled F R I D M A N.
link |
00:02:49.480
This show is presented by Cash App,
link |
00:02:51.760
the number one finance app in the App Store.
link |
00:02:54.280
I personally use Cash App to send money to friends,
link |
00:02:56.840
but you can also use it to buy, sell,
link |
00:02:58.800
and deposit Bitcoin in just seconds.
link |
00:03:01.360
Cash App also has a new investing feature.
link |
00:03:04.200
You can buy fractions of a stock,
link |
00:03:05.800
say $1 worth, no matter what the stock price is.
link |
00:03:09.240
Brokerage services are provided by Cash App Investing,
link |
00:03:12.000
a subsidiary of Square, and member SIPC.
link |
00:03:15.600
I'm excited to be working with Cash App
link |
00:03:17.680
to support one of my favorite organizations called FIRST,
link |
00:03:20.920
best known for their FIRST Robotics and LEGO competitions.
link |
00:03:24.240
They educate and inspire hundreds of thousands of students
link |
00:03:27.600
in over 110 countries
link |
00:03:29.520
and have a perfect rating on Charity Navigator,
link |
00:03:31.720
which means the donated money is used
link |
00:03:34.240
to maximum effectiveness.
link |
00:03:36.440
When you get Cash App from the App Store,
link |
00:03:38.600
Google Play, and use code LexPodcast,
link |
00:03:42.720
you'll get $10 and Cash App will also donate $10 to FIRST,
link |
00:03:47.040
which again is an organization that I've personally seen
link |
00:03:49.680
inspire girls and boys to dream of engineering a better world.
link |
00:03:54.440
And now, here's my conversation with Noam Chomsky.
link |
00:03:59.720
I apologize for the absurd philosophical question,
link |
00:04:04.040
but if an alien species were to visit Earth,
link |
00:04:08.080
do you think we would be able to find a common language
link |
00:04:10.840
or protocol of communication with them?
link |
00:04:13.600
There are arguments to the effect that we could.
link |
00:04:18.240
In fact, one of them was Marvin Minsky's.
link |
00:04:22.400
Back about 20 or 30 years ago,
link |
00:04:24.680
he performed a brief experiment with a student of his,
link |
00:04:30.000
Dan Bobrow, who essentially ran the simplest possible
link |
00:04:35.000
Turing machines, just to see what would happen.
link |
00:04:39.520
And most of them crashed,
link |
00:04:42.520
either got into an infinite loop or stopped.
link |
00:04:47.720
The few that persisted essentially gave something
link |
00:04:53.680
like arithmetic, and his conclusion from that was that
link |
00:04:59.920
if some alien species developed higher intelligence,
link |
00:05:05.760
they would at least have arithmetic.
link |
00:05:07.440
They would at least have what the simplest computer would do.
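Minsky's observation lends itself to a toy reconstruction. The sketch below is illustrative only, not Bobrow's actual experiment: the 2-state, 2-symbol machine encoding, the explicit halt state, and the 100-step cutoff are all assumptions. It enumerates every such machine on a blank tape and tallies how many halt versus keep running, echoing the observation that most tiny machines stop or loop almost immediately.

```python
from itertools import product

STATES = ["A", "B"]          # two working states
SYMBOLS = [0, 1]             # binary tape alphabet
MOVES = [-1, 1]              # head moves left or right

def run(machine, max_steps=100):
    """Run one machine on a blank tape; return 'halt' or 'running'."""
    tape, head, state = {}, 0, "A"
    for _ in range(max_steps):
        write, move, state = machine[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == "H":     # explicit halt state
            return "halt"
    return "running"         # looping, or at least not done yet

# Every possible action for a (state, symbol) pair:
# 2 writes x 2 moves x 3 next states = 12 options.
actions = [(w, m, s) for w in SYMBOLS for m in MOVES for s in STATES + ["H"]]
keys = list(product(STATES, SYMBOLS))  # the 4 (state, symbol) lookup keys

counts = {"halt": 0, "running": 0}
for combo in product(actions, repeat=len(keys)):  # 12**4 = 20736 machines
    counts[run(dict(zip(keys, combo)))] += 1

print(counts)  # most machines either halt quickly or never halt
```

The few interesting survivors in Minsky's telling, the ones that "persisted" and produced counting-like behavior, would be found among the machines still running at the cutoff.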
link |
00:05:12.880
And in fact, he didn't know that at the time,
link |
00:05:16.000
but the core principles of natural language
link |
00:05:20.720
are based on operations which yield something like arithmetic
link |
00:05:26.200
in the limiting case and the minimal case.
link |
00:05:29.320
So it's conceivable that a mode of communication
link |
00:05:34.000
could be established based on the core properties
link |
00:05:38.520
of human language and the core properties of arithmetic,
link |
00:05:41.440
which maybe are universally shared.
link |
00:05:44.880
So it's conceivable.
link |
00:05:46.680
What is the structure of that language,
link |
00:05:50.800
of language as an internal system inside our mind
link |
00:05:55.160
versus an external system as it's expressed?
link |
00:05:58.920
It's not an alternative.
link |
00:06:00.880
It's two different concepts of language.
link |
00:06:02.920
Different.
link |
00:06:03.680
It's a simple fact that there's something about you,
link |
00:06:07.240
a trait of yours, part of the organism you,
link |
00:06:11.680
that determines that you're talking English
link |
00:06:14.600
and not Tagalog, let's say.
link |
00:06:17.000
So there is an inner system.
link |
00:06:19.480
It determines the sound and meaning
link |
00:06:22.960
of the infinite number of expressions of your language.
link |
00:06:27.120
It's localized.
link |
00:06:28.680
It's not in your foot, obviously.
link |
00:06:30.360
It's in your brain.
link |
00:06:31.760
If you look more closely, it's in specific configurations
link |
00:06:35.240
of your brain.
link |
00:06:36.480
And that's essentially like the internal structure
link |
00:06:40.560
of your laptop, whatever programs it has are in there.
link |
00:06:44.960
Now, one of the things you can do with language,
link |
00:06:47.800
it's a marginal thing, in fact, is use it to externalize
link |
00:06:53.360
what's in your head.
link |
00:06:54.800
Actually, most of your use of language
link |
00:06:56.640
is thought, internal thought.
link |
00:06:58.760
But you can do what you and I are now doing.
link |
00:07:00.920
We can externalize it.
link |
00:07:02.640
Well, the set of things that we're externalizing
link |
00:07:05.640
are an external system; they're noises in the atmosphere.
link |
00:07:11.120
And you can call that language in some other sense of the word.
link |
00:07:14.320
But it's not a set of alternatives.
link |
00:07:16.760
These are just different concepts.
link |
00:07:18.960
So how deep do the roots of language go in our brain?
link |
00:07:23.480
Our mind.
link |
00:07:24.400
Is it yet another feature like vision,
link |
00:07:26.760
or is it something more fundamental from which everything
link |
00:07:29.240
else springs in the human mind?
link |
00:07:31.480
Well, in a way, it's like vision.
link |
00:07:33.680
And there's something about our genetic endowment
link |
00:07:38.600
that determines that we have a mammalian rather
link |
00:07:41.960
than an insect visual system.
link |
00:07:44.720
And there's something in our genetic endowment
link |
00:07:48.080
that determines that we have a human language faculty.
link |
00:07:51.480
No other organism has anything remotely similar.
link |
00:07:55.200
So in that sense, it's internal.
link |
00:07:58.280
Now, there is a long tradition, which I think
link |
00:08:00.200
is valid going back centuries, to the early scientific
link |
00:08:04.800
revolution, at least, that holds that language
link |
00:08:09.320
is the core of human cognitive nature.
link |
00:08:13.640
It's the source.
link |
00:08:14.600
It's the mode for constructing thoughts
link |
00:08:18.080
and expressing them.
link |
00:08:19.640
That is what forms thought.
link |
00:08:22.760
And it's got fundamental creative capacities.
link |
00:08:27.200
It's free, independent, unbounded, and so on.
link |
00:08:31.280
And undoubtedly, I think the basis
link |
00:08:34.840
for our creative capacities and the other remarkable human
link |
00:08:42.920
capacities that lead to the unique achievements
link |
00:08:47.480
and not so great achievements of the species.
link |
00:08:51.280
The capacity to think and reason.
link |
00:08:53.600
Do you think that's deeply linked with language?
link |
00:08:56.200
Do you think the way the internal language system
link |
00:08:59.840
is essentially the mechanism by which we also reason
link |
00:09:03.000
internally?
link |
00:09:04.120
It is undoubtedly the mechanism by which we reason.
link |
00:09:06.920
There may also be, in fact there are undoubtedly,
link |
00:09:10.840
other faculties involved in reasoning.
link |
00:09:14.720
We have a kind of scientific faculty.
link |
00:09:17.520
Nobody knows what it is.
link |
00:09:18.800
But whatever it is that enables us
link |
00:09:20.880
to pursue certain lines of endeavor and inquiry
link |
00:09:25.440
and to decide what makes sense and doesn't make sense
link |
00:09:29.720
and to achieve a certain degree of understanding
link |
00:09:32.960
of the world, that uses language but goes beyond it.
link |
00:09:37.400
Just as using our capacity for arithmetic
link |
00:09:42.000
is not the same as having the capacity.
link |
00:09:44.880
The idea of capacity, our biology, evolution,
link |
00:09:49.360
you've talked about it defining essentially our capacity,
link |
00:09:52.520
our limit, and our scope.
link |
00:09:55.200
Can you try to define what limit and scope are?
link |
00:09:58.840
And the bigger question, do you think
link |
00:10:01.640
it's possible to find the limit of human cognition?
link |
00:10:07.560
Well, that's an interesting question.
link |
00:10:09.640
It's commonly believed, most scientists believe,
link |
00:10:13.080
that human intelligence can answer any question in principle.
link |
00:10:19.360
I think that's a very strange belief.
link |
00:10:21.800
If we're biological organisms, which are not angels,
link |
00:10:26.280
then our capacities ought to have scope and limits,
link |
00:10:33.240
which are interrelated.
link |
00:10:34.920
Can you define those two terms?
link |
00:10:36.640
Well, let's take a concrete example.
link |
00:10:40.840
Your genetic endowment determines
link |
00:10:44.040
that you can have a mammalian visual system, arms and legs
link |
00:10:47.960
and so on.
link |
00:10:50.040
And therefore become a rich, complex organism.
link |
00:10:53.480
But if you look at that same genetic endowment,
link |
00:10:56.280
it prevents you from developing in other directions.
link |
00:11:00.040
There's no kind of experience which
link |
00:11:02.040
would lead the embryo to develop an insect visual system
link |
00:11:08.800
or to develop wings instead of arms.
link |
00:11:12.040
So the very endowment that confers richness and complexity
link |
00:11:18.520
also sets bounds on what can be attained.
link |
00:11:23.600
Now, I assume that our cognitive capacities
link |
00:11:27.440
are part of the organic world.
link |
00:11:29.720
Therefore, they should have the same properties.
link |
00:11:32.280
If they had no built in capacity to develop
link |
00:11:36.600
a rich and complex structure, we would understand nothing.
link |
00:11:41.920
Just as if your genetic endowment did not
link |
00:11:47.080
compel you to develop arms and legs,
link |
00:11:50.320
you would just be some kind of a random amoeboid creature
link |
00:11:54.000
with no structure at all.
link |
00:11:56.080
So I think it's plausible to assume that there are limits.
link |
00:12:00.200
And I think we even have some evidence as to what they are.
link |
00:12:03.680
So for example, there's a classic moment
link |
00:12:06.640
in the history of science.
link |
00:12:09.080
At the time of Newton, there was from Galileo to Newton,
link |
00:12:13.880
modern science, developed on a fundamental assumption, which
link |
00:12:18.120
Newton also accepted, namely that the world,
link |
00:12:22.440
as the entire universe, is a mechanical object.
link |
00:12:26.320
And by mechanical, they meant something
link |
00:12:29.000
like the kinds of artifacts that were being developed
link |
00:12:31.600
by skilled artisans all over Europe, the gears, the levers,
link |
00:12:35.800
and so on.
link |
00:12:37.120
And their belief was, well, the world
link |
00:12:39.760
is just a more complex variant of this.
link |
00:12:42.960
Newton, to his astonishment and distress,
link |
00:12:48.400
proved that there are no machines,
link |
00:12:50.960
that there's interaction without contact.
link |
00:12:54.320
His contemporaries, like Leibniz and Huygens,
link |
00:12:57.680
just dismissed this as returning to the mysticism
link |
00:13:02.560
of the Neoscholastics.
link |
00:13:04.000
And Newton agreed.
link |
00:13:05.880
He said, it is totally absurd.
link |
00:13:08.280
No person of any scientific intelligence
link |
00:13:11.120
could ever accept this for a moment.
link |
00:13:13.760
In fact, he spent the rest of his life
link |
00:13:15.320
trying to get around it somehow, as did many other scientists.
link |
00:13:20.360
That was the very criterion of intelligibility,
link |
00:13:24.080
For, say, Galileo or Newton, a theory did not
link |
00:13:29.280
produce an intelligible world unless you
link |
00:13:31.640
could duplicate it in a machine.
link |
00:13:34.080
He said, you can't.
link |
00:13:35.120
There are no machines.
link |
00:13:36.400
And finally, after a long struggle, took a long time,
link |
00:13:41.240
scientists just accepted this as common sense.
link |
00:13:45.200
But that's a significant moment.
link |
00:13:47.360
That means they abandoned the search
link |
00:13:49.320
for an intelligible world.
link |
00:13:51.760
And the great philosophers of the time
link |
00:13:54.760
understood that very well.
link |
00:13:57.000
So for example, David Hume, in his encomium to Newton,
link |
00:14:02.280
wrote that he was the greatest thinker ever and so on.
link |
00:14:05.520
He said that he unveiled many of the secrets of nature.
link |
00:14:10.480
But by showing the imperfections of the mechanical philosophy,
link |
00:14:15.840
mechanical science, he left us with,
link |
00:14:19.240
he showed that there are mysteries which ever will remain.
link |
00:14:23.520
And science just changed its goals.
link |
00:14:26.720
It abandoned the mystery.
link |
00:14:28.520
He said, can't solve it.
link |
00:14:29.760
We'll put it aside.
link |
00:14:31.400
We only look for intelligible theories.
link |
00:14:34.720
Newton's theories were intelligible.
link |
00:14:36.680
It's just what they described wasn't.
link |
00:14:39.080
Well, Locke said the same thing.
link |
00:14:42.800
I think they're basically right.
link |
00:14:44.800
And if so, that should tell us something about the limits
link |
00:14:47.840
of human cognition.
link |
00:14:49.800
We cannot attain the goal of understanding
link |
00:14:54.560
the world, of finding an intelligible world.
link |
00:14:58.400
This mechanical philosophy, Galileo to Newton,
link |
00:15:03.640
a good case can be made that that's
link |
00:15:06.360
our instinctive conception of how things work.
link |
00:15:11.000
So if, say, infants are tested with things where, if this moves,
link |
00:15:17.160
and then this moves, they kind of invent something
link |
00:15:20.480
that must be invisible that's in between them
link |
00:15:23.000
that's making it move, and so on.
link |
00:15:24.920
Yeah, we like physical contact.
link |
00:15:26.520
Something about our brain seeks.
link |
00:15:28.920
Makes us want a world like that.
link |
00:15:31.560
Just like it wants a world that has regular geometric figures.
link |
00:15:36.640
So for example, Descartes pointed this out,
link |
00:15:38.920
that if you have an infant who's never seen a triangle before
link |
00:15:45.160
and you draw a triangle, the infant
link |
00:15:48.360
will see a distorted triangle.
link |
00:15:52.280
Not whatever crazy figure it actually is.
link |
00:15:56.360
Three lines not coming quite together.
link |
00:15:58.440
One of them a little bit curved and so on.
link |
00:16:00.320
We just impose a conception of the world
link |
00:16:04.520
in terms of geometric, perfect geometric objects.
link |
00:16:09.320
It's now been shown that this goes way beyond that.
link |
00:16:12.160
That if you show on a tachistoscope,
link |
00:16:15.440
let's say a couple of lights shining,
link |
00:16:18.560
you do it three or four times in a row,
link |
00:16:20.880
what people actually see is a rigid object in motion,
link |
00:16:25.280
not whatever is there.
link |
00:16:28.200
We all know that from a television set basically.
link |
00:16:31.840
So that gives us hints of potential limits to our cognition.
link |
00:16:35.920
I think it does, but it's a very contested view.
link |
00:16:39.400
If you do a poll among scientists, they'd say it's impossible.
link |
00:16:43.720
We can understand anything.
link |
00:16:46.320
Let me ask and give me a chance with this.
link |
00:16:48.680
So I just spent a day at a company called Neuralink.
link |
00:16:52.520
And what they do is try to design what's called a brain machine,
link |
00:16:57.800
brain computer interface.
link |
00:16:59.600
So they try to do thousands of readings in the brain,
link |
00:17:03.320
be able to read what the neurons are firing,
link |
00:17:05.600
and then stimulate back, so two way.
link |
00:17:08.560
Do you think their dream is to expand the capacity of the brain
link |
00:17:14.400
to attain information, sort of increase the bandwidth
link |
00:17:18.160
to which we can search Google kind of thing?
link |
00:17:22.480
Do you think our cognitive capacity
link |
00:17:24.960
might be expanded, our linguistic capacity,
link |
00:17:28.280
our ability to reason might be expanded
link |
00:17:30.360
by adding a machine into the picture?
link |
00:17:33.200
It can be expanded in a certain sense,
link |
00:17:35.640
but a sense that was known thousands of years ago.
link |
00:17:40.320
A book expands your cognitive capacity.
link |
00:17:44.080
So this could expand it too.
link |
00:17:46.080
But it's not a fundamental expansion.
link |
00:17:47.960
It's not that totally new things could be understood.
link |
00:17:51.000
Well, nothing that goes beyond our native cognitive capacities,
link |
00:17:56.480
just like you can't turn the visual system
link |
00:17:58.640
into an insect system.
link |
00:18:00.680
Well, I mean, the thought is perhaps you can't directly,
link |
00:18:06.840
but you can map.
link |
00:18:08.400
You could, but we already know that without this experiment.
link |
00:18:12.400
You could map what a bee sees and present it in a form
link |
00:18:16.720
so that we could follow it.
link |
00:18:17.960
In fact, every bee scientist does it.
link |
00:18:21.000
But you don't think there's something greater than bees
link |
00:18:25.400
that we can map and then all of a sudden discover something,
link |
00:18:29.720
be able to understand a quantum world, quantum mechanics,
link |
00:18:33.800
be able to start to make sense of it.
link |
00:18:36.040
Students at MIT study and understand quantum mechanics.
link |
00:18:41.680
But they always reduce it to the infant, the physical.
link |
00:18:45.160
I mean, they don't really understand it.
link |
00:18:46.880
Oh, there's a thing.
link |
00:18:48.240
That may be another area where there's just
link |
00:18:50.840
a limit to understanding.
link |
00:18:52.720
We understand the theories, but the world that it describes
link |
00:18:56.720
doesn't make any sense.
link |
00:18:58.440
So the experiment, Schrodinger's cat, for example,
link |
00:19:02.360
can understand the theory.
link |
00:19:03.680
But as Schrodinger pointed out, it's an unintelligible world.
link |
00:19:09.280
One of the reasons why Einstein was always
link |
00:19:12.320
very skeptical about quantum theory.
link |
00:19:15.760
He described himself as a classical realist, in his insistence on
link |
00:19:21.360
intelligibility.
link |
00:19:23.040
He has something in common with infants in that way.
link |
00:19:27.440
So back to linguistics.
link |
00:19:30.960
If you could humor me, what are the most beautiful
link |
00:19:34.000
or fascinating aspects of language or ideas
link |
00:19:36.680
in linguistics or cognitive science
link |
00:19:38.720
that you've seen in a lifetime of studying language
link |
00:19:42.040
and studying the human mind?
link |
00:19:44.160
Well, I think the deepest property of language
link |
00:19:50.160
and puzzling property that's been discovered
link |
00:19:52.840
is what is sometimes called structure dependence.
link |
00:19:57.560
We now understand it pretty well,
link |
00:19:59.600
but it was puzzling for a long time.
link |
00:20:01.960
I'll give you a concrete example.
link |
00:20:03.600
So suppose you say the guy who fixed the car carefully
link |
00:20:09.960
packed his tools, it's ambiguous, he could fix the car
link |
00:20:14.720
carefully or carefully pack his tools.
link |
00:20:17.920
Suppose you put carefully in front,
link |
00:20:21.040
carefully the guy who fixed the car packed his tools,
link |
00:20:25.840
then it's carefully packed, not carefully fixed.
link |
00:20:29.360
And in fact, you do that even if it makes no sense.
link |
00:20:32.280
So suppose you say carefully the guy who fixed the car is tall.
link |
00:20:39.320
You have to interpret it as carefully he's tall,
link |
00:20:41.840
even though that doesn't make any sense.
link |
00:20:44.280
And notice that that's a very puzzling fact,
link |
00:20:47.200
because you're relating carefully not
link |
00:20:50.360
to the linearly closest verb, but to the linearly more
link |
00:20:55.480
remote verb.
link |
00:20:57.480
Linear proximity, closeness, is an easy computation.
link |
00:21:02.320
But here you're doing much more of what
link |
00:21:04.000
looks like a more complex computation.
link |
00:21:06.800
You're doing something that's taking you essentially
link |
00:21:10.200
to the more remote thing.
link |
00:21:14.680
If you look at the actual structure of the sentence,
link |
00:21:17.880
where the phrases are and so on, turns out
link |
00:21:20.680
you're picking out the structurally closest thing,
link |
00:21:24.200
but the linearly more remote thing.
link |
00:21:27.960
But notice that what's linear is 100% of what you hear.
link |
00:21:32.520
You never hear structure.
link |
00:21:34.080
Can't.
link |
00:21:35.160
So what you're doing is, instantly,
link |
00:21:37.920
this is universal, all constructions, all languages.
link |
00:21:42.120
And what we're compelled to do is
link |
00:21:44.840
carry out what looks like the more complex computation
link |
00:21:48.680
on material that we never hear.
link |
00:21:52.240
And we ignore 100% of what we hear
link |
00:21:55.400
and the simplest computation.
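The structure-dependence point can be made concrete in a toy sketch. Everything below is invented for illustration: the sentence, the verb list, and the hand-coded depth annotation that stands in for a real parse.

```python
# Sentence: "carefully, the guy who fixed the car packed his tools"
# Assumed simplified structure:
#   [Adv carefully] [S [NP the guy [RC who fixed the car]] [VP packed his tools]]
words = ["carefully", "the", "guy", "who", "fixed", "the", "car", "packed", "his", "tools"]
verbs = {"fixed", "packed"}

# Linear rule: attach the fronted adverb to the first verb that follows it.
linear_choice = next(w for w in words[1:] if w in verbs)

# Structural rule: attach it to the least deeply embedded verb, i.e. the
# main-clause verb, not the one buried inside the relative clause.
depth = [0, 1, 1, 2, 2, 2, 2, 1, 1, 1]  # hand-coded embedding depth per word
structural_choice = min((w for w in verbs), key=lambda w: depth[words.index(w)])

print(linear_choice)      # fixed: linearly closest, the reading we never get
print(structural_choice)  # packed: structurally closest, the reading we do get
```

The linear rule needs only the word string, which is all a hearer ever receives; the structural rule needs the depth annotation, which is never in the signal, yet it is the one every speaker applies.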
link |
00:21:57.520
But by now, there's even a neural basis
link |
00:22:00.080
for this that's somewhat understood.
link |
00:22:02.880
And there's good theories by now that explain
link |
00:22:05.000
why it's true.
link |
00:22:06.800
That's a deep insight into the surprising nature
link |
00:22:11.000
of language with many consequences.
link |
00:22:14.120
Let me ask you about a field of machine learning,
link |
00:22:17.560
deep learning.
link |
00:22:18.960
There's been a lot of progress in
link |
00:22:22.080
neural network based machine learning in the recent decade.
link |
00:22:26.400
Of course, neural network research
link |
00:22:28.200
goes back many decades.
link |
00:22:30.800
What do you think are the limits of deep learning,
link |
00:22:35.600
of neural network based machine learning?
link |
00:22:38.480
Well, to give a real answer to that,
link |
00:22:41.160
you'd have to understand the exact processes that
link |
00:22:45.040
are taking place.
link |
00:22:46.000
And those are pretty opaque.
link |
00:22:47.960
So it's pretty hard to prove a theorem about what can be done
link |
00:22:52.160
and what can't be done.
link |
00:22:54.080
But I think it's reasonably clear.
link |
00:22:56.840
I mean, putting technicalities aside,
link |
00:22:59.200
what deep learning is doing is taking huge numbers of examples
link |
00:23:05.360
and finding some patterns.
link |
00:23:07.720
OK, that could be interesting in some areas it is.
link |
00:23:11.960
But we have to ask here a certain question.
link |
00:23:15.080
Is it engineering or is it science?
link |
00:23:18.200
Engineering in the sense of just trying
link |
00:23:20.560
to build something that's useful,
link |
00:23:22.840
or science in the sense that it's
link |
00:23:24.520
trying to understand something about elements of the world.
link |
00:23:28.760
So take a Google parser.
link |
00:23:31.800
We can ask that question.
link |
00:23:34.040
Is it useful?
link |
00:23:35.360
It's pretty useful.
link |
00:23:36.840
I use a Google translator.
link |
00:23:39.360
So on engineering grounds, it's kind of worth having,
link |
00:23:43.400
like a bulldozer.
link |
00:23:45.680
Does it tell you anything about human language?
link |
00:23:49.000
Zero.
link |
00:23:50.920
Nothing.
link |
00:23:51.560
And in fact, it's very striking.
link |
00:23:54.920
From the very beginning, it's just totally remote from science.
link |
00:24:00.360
So what is a Google parser doing?
link |
00:24:02.640
It's taking an enormous text, let's say,
link |
00:24:05.480
the Wall Street Journal corpus, and asking,
link |
00:24:09.000
how close can we come to getting the right description
link |
00:24:14.120
of every sentence in the corpus?
link |
00:24:16.440
Well, every sentence in the corpus
link |
00:24:18.520
is essentially an experiment.
link |
00:24:21.560
Each sentence that you produce is an experiment,
link |
00:24:24.760
which is, is this a grammatical sentence?
link |
00:24:27.800
The answer is usually yes.
link |
00:24:29.680
So most of the stuff in the corpus is grammatical sentences.
link |
00:24:33.240
But now ask yourself, is there any science
link |
00:24:36.880
which takes random experiments, which
link |
00:24:40.440
are carried out for no reason whatsoever,
link |
00:24:43.760
and tries to find out something from them?
link |
00:24:46.560
Like if you're, say, a chemistry PhD student,
link |
00:24:49.680
you want to get a thesis, can you say, well,
link |
00:24:52.000
I'm just going to mix a lot of things together, no purpose.
link |
00:24:57.440
And maybe I'll find something.
link |
00:24:59.720
You'd be laughed out of the department.
link |
00:25:02.480
Science tries to find critical experiments, ones
link |
00:25:06.560
that answer some theoretical question.
link |
00:25:09.160
Doesn't care about coverage of millions of experiments.
link |
00:25:13.000
So it just begins by being very remote from science,
link |
00:25:16.240
and it continues like that.
link |
00:25:18.280
So the usual question that's asked
link |
00:25:21.360
about, say, a Google parser, is how well does it do,
link |
00:25:25.240
or some parser, how well does it do on a corpus?
link |
00:25:28.360
But there's another question that's never asked.
link |
00:25:31.160
How well does it do on something that violates
link |
00:25:33.840
all the rules of language?
link |
00:25:36.120
So for example, take the structure dependence case
link |
00:25:38.720
that I mentioned.
link |
00:25:39.720
Suppose there was a language in which
link |
00:25:42.240
you used linear proximity as the mode of interpretation.
link |
00:25:49.080
The deep learning would work very easily on that.
link |
00:25:51.760
In fact, much more easily than on an actual language.
link |
00:25:54.840
Is that a success?
link |
00:25:55.960
No, that's a failure.
link |
00:25:57.640
From a scientific point of view, it's a failure.
link |
00:26:00.760
It shows that we're not discovering
link |
00:26:03.520
the nature of the system at all, because it does just as well,
link |
00:26:07.000
or even better on things that violate the structure
link |
00:26:09.840
of the system.
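That asymmetry can be caricatured in a few lines. This is not a real parser or neural network; the vocabulary and both "languages" below are invented for illustration. A pure surface rule, attach the fronted adverb to the linearly nearest verb, scores perfectly on the impossible linear language and fails completely on the structure-dependent one.

```python
import random

random.seed(0)  # reproducibility; the outcome does not depend on it

RC_VERBS = ["fixed", "painted", "washed"]    # relative-clause verbs
MAIN_VERBS = ["packed", "dropped", "sold"]   # main-clause verbs
VERBS = set(RC_VERBS) | set(MAIN_VERBS)

def make_example(structural=True):
    """Return (words, index of the verb the fronted adverb modifies).
    structural=True mimics real language (the adverb goes with the
    remote main verb); structural=False is the 'impossible' language
    that uses linear proximity instead."""
    rc, main = random.choice(RC_VERBS), random.choice(MAIN_VERBS)
    words = ["carefully", "the", "guy", "who", rc, "the", "car", main, "his", "tools"]
    return words, (words.index(main) if structural else words.index(rc))

def linear_learner(words):
    """Pure surface-pattern rule: pick the first verb after the adverb."""
    return next(i for i, w in enumerate(words) if w in VERBS)

def accuracy(structural, n=500):
    examples = [make_example(structural) for _ in range(n)]
    return sum(linear_learner(w) == t for w, t in examples) / n

print(accuracy(structural=False))  # 1.0 on the language violating structure dependence
print(accuracy(structural=True))   # 0.0 on the real, structure-dependent pattern
```

High coverage on the impossible language is exactly the failure being described: the rule matches data without capturing the system that actually generates it.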
link |
00:26:10.920
And it goes on from there.
link |
00:26:12.760
It's not an argument against doing it.
link |
00:26:14.800
It is useful to have devices like this.
link |
00:26:17.200
So yes, neural networks are kind of approximators that look...
link |
00:26:21.560
There's echoes of the behaviorist debates, right?
link |
00:26:24.360
Behaviorism.
link |
00:26:26.160
More than echoes.
link |
00:26:27.600
Many of the people in deep learning
link |
00:26:30.080
say they've vindicated Terry Sejnowski, for example,
link |
00:26:34.560
in his recent books.
link |
00:26:36.320
This vindicates Skinnerian behaviorism.
link |
00:26:39.520
It doesn't have anything to do with it.
link |
00:26:41.440
Yes, but I think there's something actually fundamentally
link |
00:26:45.720
different when the data set is huge.
link |
00:26:48.280
But your point is extremely well taken.
link |
00:26:51.160
But do you think we can learn, approximate,
link |
00:26:55.400
that interesting complex structure of language
link |
00:26:58.800
with neural networks that will somehow help us
link |
00:27:01.320
understand the science?
link |
00:27:03.640
It's possible.
link |
00:27:04.480
I mean, you find patterns that you hadn't noticed, let's say.
link |
00:27:08.720
Could be.
link |
00:27:09.760
In fact, it's very much like a kind of linguistics
link |
00:27:13.600
that's done, what's called corpus linguistics.
link |
00:27:18.080
Suppose you have some language where all the speakers
link |
00:27:22.560
have died out, but you have records.
link |
00:27:25.120
So you just look at the records and see
link |
00:27:28.560
what you can figure out from that.
link |
00:27:30.560
It's much better
link |
00:27:32.440
to have actual speakers where you can do critical experiments.
link |
00:27:36.040
But if they're all dead, you can't do them.
link |
00:27:38.480
So you have to try to see what you
link |
00:27:39.920
can find out from just looking at the data that's around.
link |
00:27:43.800
You can learn things.
link |
00:27:45.000
Actually, paleoanthropology is very much like that.
link |
00:27:48.400
You can't do a critical experiment
link |
00:27:50.560
on what happened two million years ago.
link |
00:27:53.480
So you're kind of forced just to take what data is around
link |
00:27:56.480
and see what you can figure out from it.
link |
00:27:59.200
OK, it's a serious study.
link |
00:28:01.400
So let me venture into another whole body of work
link |
00:28:05.560
and philosophical question.
link |
00:28:08.400
You've said that evil in society arises from institutions,
link |
00:28:13.040
not inherently from our nature.
link |
00:28:15.520
Do you think most human beings are good?
link |
00:28:17.760
They have good intent?
link |
00:28:19.560
Or do most have the capacity for intentional evil
link |
00:28:22.840
that depends on their upbringing,
link |
00:28:24.560
depends on their environment, on context?
link |
00:28:27.160
I wouldn't say that they don't arise from our nature.
link |
00:28:30.920
Anything we do arises from our nature.
link |
00:28:33.960
And the fact that we have certain institutions, not others,
link |
00:28:38.040
is one mode in which human nature has expressed itself.
link |
00:28:43.680
But as far as we know, human nature
link |
00:28:46.240
could yield many different kinds of institutions.
link |
00:28:50.200
The particular ones that have developed
link |
00:28:53.120
have to do with historical contingency, who conquered whom,
link |
00:28:58.040
and that sort of thing.
link |
00:29:00.200
They're not rooted in our nature in the sense
link |
00:29:04.320
that they're essential to our nature.
link |
00:29:06.720
So it's commonly argued that these days that something
link |
00:29:11.360
like market systems is just part of our nature.
link |
00:29:15.640
But we know from a huge amount of evidence
link |
00:29:18.600
that that's not true.
link |
00:29:19.440
There's all kinds of other structures.
link |
00:29:21.680
It's a particular fact of a moment of modern history.
link |
00:29:26.200
Others have argued that the roots of classical liberalism
link |
00:29:30.680
actually argue that what's sometimes called
link |
00:29:34.360
an instinct for freedom, an instinct
link |
00:29:37.440
to be free of domination by illegitimate authority
link |
00:29:42.120
is the core of our nature.
link |
00:29:43.600
That would be the opposite of this.
link |
00:29:45.600
And we don't know.
link |
00:29:47.480
We just know that human nature can accommodate both kinds.
link |
00:29:52.200
If you look back at your life, is there
link |
00:29:55.240
a moment in your intellectual life, or life in general,
link |
00:29:59.040
that jumps from memory that brought you happiness,
link |
00:30:02.040
that you would love to relive again?
link |
00:30:05.080
Sure.
link |
00:30:06.440
Falling in love, having children.
link |
00:30:10.160
What about, so you have put forward into the world
link |
00:30:13.840
a lot of incredible ideas in linguistics,
link |
00:30:17.640
in cognitive science. Are there ideas
link |
00:30:22.280
that just excited you when they first came to you,
link |
00:30:26.040
that you would love to relive those moments?
link |
00:30:28.880
Well, I mean, when you make a discovery about something
link |
00:30:33.000
that's exciting, like, say, even the observation
link |
00:30:38.960
of structure dependence, and on from that,
link |
00:30:42.680
the explanation for it.
link |
00:30:44.400
But the major things just seem like common sense.
link |
00:30:49.480
So if you go back to take your question
link |
00:30:53.160
about external and internal language,
link |
00:30:55.800
you go back to, say, the 1950s, language was almost entirely
link |
00:31:01.320
regarded as an external object, something outside the mind.
link |
00:31:06.280
It just seemed obvious that that can't be true.
link |
00:31:10.720
Like I said, there's something about you
link |
00:31:13.240
that determines you're talking English, not Swahili or something.
link |
00:31:18.600
But that's not really a discovery.
link |
00:31:20.280
That's just an observation that's transparent.
link |
00:31:24.240
You might say it's kind of like the 17th century,
link |
00:31:30.680
the beginnings of modern science.
link |
00:31:33.480
They came from being willing to be puzzled about things
link |
00:31:38.600
that seemed obvious.
link |
00:31:40.400
So it seems obvious that a heavy ball of lead will
link |
00:31:44.560
fall faster than a light ball of lead.
link |
00:31:47.560
But Galileo was not impressed by the fact
link |
00:31:50.880
that it seemed obvious.
link |
00:31:52.680
So he wanted to know if it's true.
link |
00:31:54.920
He carried out experiments, actually thought experiments,
link |
00:31:59.120
never actually carried them out, which
link |
00:32:01.440
showed that that can't be true.
link |
00:32:04.480
And out of things like that, observations of that kind,
link |
00:32:11.360
why does a ball fall to the ground instead of rising,
link |
00:32:15.760
let's say, seems obvious.
link |
00:32:18.480
Do you start thinking about it?
link |
00:32:20.040
Why does it? Why does steam rise, let's say?
link |
00:32:23.920
And I think the beginnings of modern linguistics, roughly
link |
00:32:27.600
in the 50s, are kind of like that,
link |
00:32:30.040
just being willing to be puzzled about phenomena that
link |
00:32:33.800
looked, from some point of view, obvious.
link |
00:32:38.040
For example, a kind of doctrine, almost official doctrine,
link |
00:32:42.680
of structural linguistics in the 50s
link |
00:32:46.040
was that languages can differ from one another
link |
00:32:50.480
in arbitrary ways.
link |
00:32:52.640
And each one has to be studied on its own
link |
00:32:56.440
without any presuppositions.
link |
00:32:58.880
In fact, there were similar views among biologists
link |
00:33:02.320
about the nature of organisms:
link |
00:33:05.840
they're so different when you look at them
link |
00:33:07.760
that an organism could be almost anything.
link |
00:33:10.960
Well, in both domains, it's been learned
link |
00:33:13.080
that that's very far from true.
link |
00:33:15.480
There are very narrow constraints on what
link |
00:33:17.600
could be an organism or what could be a language.
link |
00:33:21.560
But that's just the nature of inquiry.
link |
00:33:26.960
Science in general, yeah, inquiry.
link |
00:33:29.320
So one of the peculiar things about us human beings
link |
00:33:33.360
is our mortality.
link |
00:33:35.240
Ernest Becker explored it in general.
link |
00:33:38.160
Do you ponder the value of mortality?
link |
00:33:40.400
Do you think about your own mortality?
link |
00:33:43.360
I used to when I was about 12 years old.
link |
00:33:47.960
I wondered, I didn't care much about my own mortality,
link |
00:33:51.880
but I was worried about the fact that if my consciousness
link |
00:33:56.320
disappeared, would the entire universe disappear?
link |
00:34:00.240
That was frightening.
link |
00:34:01.520
Did you ever find an answer to that question?
link |
00:34:03.680
No, nobody's ever found an answer.
link |
00:34:05.840
But I stopped being bothered by it.
link |
00:34:07.800
It's kind of like Woody Allen in one of his films,
link |
00:34:10.360
you may recall, he starts, he goes to a shrink
link |
00:34:14.080
when he's a child and the shrink asks him,
link |
00:34:16.520
what's your problem?
link |
00:34:17.480
He says, I just learned that the universe is expanding.
link |
00:34:21.600
I can't handle that.
link |
00:34:24.320
And then another absurd question is,
link |
00:34:27.200
what do you think is the meaning of our existence here,
link |
00:34:32.560
our life on Earth, our brief little moment in time?
link |
00:34:35.760
It's something we answer by our own activities.
link |
00:34:40.560
There's no general answer.
link |
00:34:42.320
We determine what the meaning of it is.
link |
00:34:46.560
The actions determine the meaning.
link |
00:34:48.680
Meaning in the sense of significance,
link |
00:34:50.520
not meaning in the sense that a chair means this,
link |
00:34:55.360
but the significance of your life is something you create.
link |
00:35:01.040
Noam, thank you so much for talking today.
link |
00:35:02.520
It was a huge honor.
link |
00:35:04.120
Thank you so much.
link |
00:35:05.920
Thanks for listening to this conversation with Noam Chomsky
link |
00:35:08.680
and thank you to our presenting sponsor, Cash App.
link |
00:35:11.960
Download it, use code LEX Podcast.
link |
00:35:14.760
You'll get $10, and $10 will go to FIRST,
link |
00:35:17.960
a STEM education nonprofit that inspires hundreds
link |
00:35:20.880
of thousands of young minds to learn
link |
00:35:23.200
and to dream of engineering our future.
link |
00:35:26.000
If you enjoy this podcast, subscribe on YouTube,
link |
00:35:28.600
give us five stars on Apple Podcasts,
link |
00:35:30.600
support on Patreon, or connect with me on Twitter.
link |
00:35:34.240
Thank you for listening and hope to see you next time.