Noam Chomsky: Language, Cognition, and Deep Learning | Lex Fridman Podcast #53


The following is a conversation with Noam Chomsky. He's truly one of the great minds of our time and one of the most cited scholars in the history of our civilization. He has spent over 60 years at MIT and recently also joined the University of Arizona, where we met for this conversation. But it was at MIT, about four and a half years ago, that I first met Noam. In my first few days there, I remember getting into an elevator at the Stata Center, pressing the button for whatever floor, looking up, and realizing it was just me and Noam Chomsky riding the elevator: just me and one of the seminal figures of linguistics, cognitive science, philosophy, and political thought of the past century, if not ever. I tell that silly story because I think life is made up of funny little defining moments that you never forget, for reasons that may be too poetic to try and explain. That was one of mine. Noam has been an inspiration to me and millions of others. It was truly an honor for me to sit down with him in Arizona. I traveled there just for this conversation.

And in a rare, heartbreaking moment, after everything was set up and tested, the camera was moved and, accidentally, the recording button was pressed, stopping the recording. So I have good audio of both of us, but no video of Noam, just the video of me and my sleep-deprived but excited face, which I get to keep as a reminder of my failures. Most people just listen to the audio version of the podcast as opposed to watching it on YouTube, but still, it's heartbreaking for me. I hope you understand and still enjoy this conversation as much as I did. The depth of intellect that Noam showed, and his willingness to truly listen to me, a silly-looking Russian in a suit, was humbling and something I'm deeply grateful for.

As some of you know, this podcast is a side project for me; my main journey and dream is to build AI systems that do some good for the world. That latter effort takes up most of my time, but for the moment it has been mostly private. The former, the podcast, is something I put my heart and soul into, and I hope you feel that, even when I screw things up. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience. This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N.

This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations, called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which, again, is an organization that I've personally seen inspire girls and boys to dream of engineering a better world. And now, here's my conversation with Noam Chomsky.

I apologize for the absurd philosophical question, but if an alien species were to visit Earth, do you think we would be able to find a common language or protocol of communication with them?

There are arguments to the effect that we could. In fact, one of them was Marvin Minsky's. Back about 20 or 30 years ago, he performed a brief experiment with a student of his, Dan Bobrow. They essentially ran the simplest possible Turing machines, just freely, to see what would happen. And most of them crashed: they either got into an infinite loop or stopped. The few that persisted essentially gave something like arithmetic. And his conclusion from that was that if some alien species developed higher intelligence, they would at least have arithmetic; they would at least have what the simplest computer would do. And in fact, he didn't know it at the time, but the core principles of natural language are based on operations which yield something like arithmetic in the limiting case, in the minimal case. So it's conceivable that a mode of communication could be established based on the core properties of human language and the core properties of arithmetic, which maybe are universally shared. So it's conceivable.

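Minsky's observation can be loosely illustrated in code. The sketch below is not a reconstruction of the actual Minsky-Bobrow experiment; it simply enumerates every 2-state, 2-symbol Turing machine with a distinguished halting state (the encoding and names are my own), runs each on a blank tape for a bounded number of steps, and tallies how many halt versus keep running:

```python
from itertools import product

HALT = 2  # two working states (0 and 1) plus a distinguished halting state

def run(tm, max_steps=200):
    """Simulate one machine on a blank tape; classify it as halted or still running."""
    tape, head, state = {}, 0, 0
    for _ in range(max_steps):
        # Each rule says what to write, which way to move, and which state comes next.
        write, move, state = tm[(state, tape.get(head, 0))]
        tape[head] = write
        head += move
        if state == HALT:
            return "halt"
    return "running"  # looped, or simply didn't halt within the step budget

# Every (state, symbol) pair maps to (symbol to write, head move, next state).
actions = list(product((0, 1), (-1, 1), (0, 1, HALT)))
counts = {"halt": 0, "running": 0}
for rules in product(actions, repeat=4):  # 12^4 = 20,736 machines in total
    tm = {(s, c): rules[2 * s + c] for s in (0, 1) for c in (0, 1)}
    counts[run(tm)] += 1
print(counts)
```

Most machines in this space halt almost immediately or wander forever; the persistent minority build up regular patterns on the tape, which is the kind of behavior Chomsky describes Minsky as reading as proto-arithmetic.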
What is the structure of that language, of language as an internal system inside our mind versus an external system as it's expressed?

It's not an alternative; it's two different concepts of language.

Different.

It's a simple fact that there's something about you, a trait of yours, part of the organism, you, that determines that you're talking English and not Tagalog, let's say. So there is an inner system. It determines the sound and meaning of the infinite number of expressions of your language. It's localized. It's not in your foot, obviously; it's in your brain. If you look more closely, it's in specific configurations of your brain. And that's essentially like the internal structure of your laptop: whatever programs it has are in there. Now, one of the things you can do with language, a marginal thing, in fact, is use it to externalize what's in your head. Actually, most of your use of language is thought, internal thought. But you can do what you and I are now doing: we can externalize it. Well, the set of things that we're externalizing are an external system. They're noises in the atmosphere. And you can call that language in some other sense of the word, but it's not a set of alternatives. These are just different concepts.

So how deep do the roots of language go in our brain, our mind? Is it yet another feature, like vision, or is it something more fundamental, from which everything else in the human mind springs?

Well, in a way, it's like vision. There's something about our genetic endowment that determines that we have a mammalian rather than an insect visual system, and there's something in our genetic endowment that determines that we have a human language faculty. No other organism has anything remotely similar. So in that sense, it's internal. Now, there is a long tradition, which I think is valid, going back centuries to the early scientific revolution at least, that holds that language is sort of the core of human cognitive nature. It's the source, the mode for constructing thoughts and expressing them. That is what forms thought. And it's got fundamental creative capacities. It's free, independent, unbounded, and so on. And it's undoubtedly, I think, the basis for our creative capacities and the other remarkable human capacities that lead to the unique achievements, and not-so-great achievements, of the species.

The capacity to think and reason, do you think that's deeply linked with language? Do you think the internal language system is essentially the mechanism by which we also reason internally?

It is undoubtedly the mechanism by which we reason. There are undoubtedly other faculties involved in reasoning as well. We have a kind of scientific faculty, nobody knows what it is, but whatever it is, it enables us to pursue certain lines of endeavor and inquiry, to decide what makes sense and doesn't make sense, and to achieve a certain degree of understanding of the world. That uses language, but goes beyond it, just as using our capacity for arithmetic is not the same as having the capacity.

The idea of capacity: our biology, our evolution, you've talked about them as essentially defining our capacity, our limit, and our scope. Can you try to define what limit and scope are? And the bigger question: do you think it's possible to find the limit of human cognition?

Well, that's an interesting question. It's commonly believed, most scientists believe, that human intelligence can answer any question in principle. I think that's a very strange belief. If we're biological organisms, which are not angels, then our capacities ought to have scope and limits, which are interrelated.

Can you define those two terms?

Well, let's take a concrete example. Your genetic endowment determines that you can have a mammalian visual system, arms and legs, and so on, and therefore become a rich, complex organism. But if you look at that same genetic endowment, it prevents you from developing in other directions. There's no kind of experience that would lead the embryo to develop an insect visual system, or to develop wings instead of arms. So the very endowment that confers richness and complexity also sets bounds on what can be attained. Now, I assume that our cognitive capacities are part of the organic world; therefore, they should have the same properties. If they had no built-in capacity to develop a rich and complex structure, we would understand nothing, just as, if your genetic endowment did not compel you to develop arms and legs, you would just be some kind of random amoeboid creature with no structure at all. So I think it's plausible to assume that there are limits, and I think we even have some evidence as to what they are.

So, for example, there's a classic moment in the history of science, at the time of Newton. From Galileo to Newton, modern science developed on a fundamental assumption, which Newton also accepted: namely, that the world, the entire universe, is a mechanical object. And by mechanical, they meant something like the kinds of artifacts that were being developed by skilled artisans all over Europe: the gears, levers, and so on. And their belief was, well, the world is just a more complex variant of this. Newton, to his astonishment and distress, proved that there are no machines, that there's interaction without contact. His contemporaries, like Leibniz and Huygens, just dismissed this as returning to the mysticism of the neo-scholastics. And Newton agreed. He said it was totally absurd: no person of any scientific intelligence could ever accept it for a moment. In fact, he spent the rest of his life trying to get around it somehow, as did many other scientists. That was the very criterion of intelligibility for, say, Galileo or Newton: a theory did not produce an intelligible world unless you could duplicate it in a machine. He showed you can't; there are no machines at all. Finally, after a long struggle that took a long time, scientists just accepted this as common sense.

But that's a significant moment: it means they abandoned the search for an intelligible world. And the great philosophers of the time understood that very well. So, for example, David Hume, in his encomium to Newton, wrote that Newton was the greatest thinker ever, and so on, and said that he unveiled many of the secrets of nature, but that by showing the imperfections of the mechanical philosophy, of mechanical science, he showed that there are mysteries which ever will remain. And science just changed its goals. It abandoned the mysteries: if we can't solve it, we'll put it aside; we only look for intelligible theories. Newton's theories were intelligible; it's just that what they described wasn't. Locke said the same thing. I think they're basically right, and if so, that showed something about the limits of human cognition. We cannot attain the goal of understanding the world, of finding an intelligible world. This mechanical philosophy, Galileo to Newton, there's a good case to be made that it's our instinctive conception of how things work. So if, say, infants are tested with things where this moves and then that moves, they kind of invent something invisible in between them that must be making them move, and so on.

Yeah, we like physical contact. Something about our brain seeks...

Makes us want a world like that, just like it wants a world that has regular geometric figures. So, for example, Descartes pointed out that if you have an infant who's never seen a triangle before, and you draw a triangle, the infant will see a distorted triangle, not whatever crazy figure it actually is: three lines not coming quite together, one of them a little bit curved, and so on. We just impose a conception of the world in terms of perfect geometric objects. It's now been shown that this goes way beyond that. If you show on a tachistoscope, let's say, a couple of lights shining, and you do it three or four times in a row, what people actually see is a rigid object in motion, not whatever's there. We all know that from a television set, basically.

So that gives us hints of potential limits to our cognition.

I think it does, but it's a very contested view. If you do a poll among scientists, it's "impossible; we can understand anything."

Let me ask, and give me a chance with this. So I just spent a day at a company called Neuralink, and what they do is try to design what's called a brain-machine, or brain-computer, interface. They try to take thousands of readings in the brain, to be able to read what the neurons are firing and then stimulate back, so it's two-way. Their dream is to expand the capacity of the brain to attain information, sort of increase the bandwidth at which we can search Google, that kind of thing. Do you think our cognitive capacity might be expanded, our linguistic capacity, our ability to reason, might be expanded by adding a machine into the picture?

It can be expanded in a certain sense, but a sense that was known thousands of years ago. A book expands your cognitive capacity. Okay, so this could expand it too.

But it's not a fundamental expansion. It's not that totally new things could be understood.

Well, nothing that goes beyond our native cognitive capacities. Just as you can't turn the visual system into an insect system.

Well, I mean, the thought is, perhaps you can't directly, but you can map, sort of...

You couldn't, but we already know that without this experiment. You could map what a bee sees and present it in a form so that we could follow it. In fact, every bee scientist does that.

But you don't think there's something greater than bees that we can map, and then all of a sudden discover something, be able to understand the quantum world, quantum mechanics, be able to start to make sense of it?

Students at MIT study and understand quantum mechanics.

But they always reduce it to the infant, the physical. I mean, they don't really understand.

Oh, they don't? Well, that may be another area where there's just a limit to understanding. We understand the theories, but the world that they describe doesn't make any sense. So, you know, the experiment, Schrödinger's cat, for example: you can understand the theory, but as Schrödinger pointed out, it's an unintelligible world. One of the reasons why Einstein was always very skeptical about quantum theory was that he described himself as a classical realist, one who wants intelligibility. He has something in common with infants in that way.

So, back to linguistics. If you could humor me: what are the most beautiful or fascinating aspects of language, or ideas in linguistics or cognitive science, that you've seen in a lifetime of studying language and studying the human mind?

Well, I think the deepest and most puzzling property of language that's been discovered is what is sometimes called structure dependence. We now understand it pretty well, but it was puzzling for a long time. I'll give you a concrete example. Suppose you say, "The guy who fixed the car carefully packed his tools." It's ambiguous: he could fix the car carefully, or carefully pack his tools. Now suppose you put "carefully" in front: "Carefully, the guy who fixed the car packed his tools." Then it's carefully packed, not carefully fixed. And in fact, you do that even if it makes no sense. So suppose you say, "Carefully, the guy who fixed the car is tall." You have to interpret it as "carefully, he's tall," even though that doesn't make any sense. And notice that that's a very puzzling fact, because you're relating "carefully" not to the linearly closest verb but to the linearly more remote verb. Linear closeness is an easy computation, but here you're doing what looks like a more complex computation: you're doing something that's taking you essentially to the more remote thing. Now, if you look at the actual structure of the sentence, where the phrases are and so on, it turns out you're picking out the structurally closest thing, but the linearly more remote thing. But notice that what's linear is 100% of what you hear. You never hear structure; you can't. So what you're doing, and this is certainly universal, across all constructions and all languages, is what we're compelled to do: carry out what looks like the more complex computation on material that we never hear, and ignore 100% of what we hear and the simplest computation. By now there's even a neural basis for this that's somewhat understood, and there are good theories by now that explain why it's true. That's a deep insight into the surprising nature of language, with many consequences.

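The contrast between the two computations can be made concrete in a toy sketch. The parse below is hand-built and the helper names are my own; no real parser is involved. The linear rule attaches the fronted adverb to the nearest verb in the word string, while the structural rule attaches it to the main verb of the clause, skipping the verb buried inside the relative clause:

```python
# "Carefully, the guy who fixed the car packed his tools."
words = "carefully the guy who fixed the car packed his tools".split()
verbs = {"fixed", "packed"}

def linear_attachment(words):
    """The 'easy' computation: pick the linearly closest verb after the adverb."""
    return next(w for w in words if w in verbs)

def structural_attachment(parse):
    """What speakers actually do: pick the main verb of the clause,
    ignoring 'fixed' inside the relative clause 'who fixed the car'."""
    adverb, (subject, verb_phrase) = parse
    return verb_phrase[0]

# Hand-built constituent structure for the sentence above.
parse = ("carefully",
         (("the guy", ("who", "fixed", "the car")),  # subject with relative clause
          ("packed", "his tools")))                  # main verb phrase

print(linear_attachment(words))      # "fixed": what the surface heuristic picks
print(structural_attachment(parse))  # "packed": how speakers actually interpret it
```

The point of the sketch is Chomsky's: the rule speakers actually follow requires the hierarchical structure, which is never present in the audible word string.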
Let me ask you about a field of machine learning: deep learning. There's been a lot of progress in neural network based machine learning in the recent decade. Of course, neural network research goes back many decades. What do you think are the limits of deep learning, of neural network based machine learning?

Well, to give a real answer to that, you'd have to understand the exact processes that are taking place, and those are pretty opaque. So it's pretty hard to prove a theorem about what can be done and what can't be done. But I think it's reasonably clear. I mean, putting technicalities aside, what deep learning is doing is taking huge numbers of examples and finding some patterns. Okay, that could be interesting, and in some areas it is, but we have to ask a certain question here: is it engineering, or is it science? Engineering in the sense of just trying to build something that's useful, or science in the sense of trying to understand something about elements of the world. So take, say, a Google parser. We can ask that question. Is it useful? Yeah, it's pretty useful. I use Google Translate. So on engineering grounds, it's kind of worth having, like a bulldozer. Does it tell you anything about human language? Zero, nothing. And in fact, it's very striking: from the very beginning, it's just totally remote from science.

So, what is a Google parser doing? It's taking an enormous text, let's say the Wall Street Journal corpus, and asking how close it can come to getting the right description of every sentence in the corpus. Well, every sentence in the corpus is essentially an experiment. Each sentence that you produce is an experiment which asks, am I a grammatical sentence? The answer is usually yes, so most of the stuff in the corpus is grammatical sentences. But now ask yourself: is there any science which takes random experiments, carried out for no reason whatsoever, and tries to find out something from them? If you're, say, a chemistry PhD student and you want to get a thesis, can you say, "Well, I'm just going to mix a lot of things together, no purpose, and maybe I'll find something"? You'd be laughed out of the department. Science tries to find critical experiments, ones that answer some theoretical question. It doesn't care about coverage of millions of experiments. So it just begins by being very remote from science, and it continues like that.

00:25:18.220
So, the usual question that's asked about,
link |
00:25:21.620
say, a Google parser is how well does it do,
link |
00:25:25.180
or some parser, how well does it do on a corpus?
link |
00:25:28.300
But there's another question that's never asked.
link |
00:25:31.100
How well does it do on something
link |
00:25:32.900
that violates all the rules of language?
link |
00:25:36.060
So, for example, take the structure dependence case
link |
00:25:38.700
that I mentioned.
link |
00:25:39.660
Suppose there was a language
link |
00:25:41.580
in which you used linear proximity
link |
00:25:45.780
as the mode of interpretation.
link |
00:25:49.020
These deep learning would work very easily on that.
link |
00:25:51.700
In fact, much more easily on an actual language.
link |
00:25:54.740
Is that a success?
link |
00:25:55.900
No, that's a failure from a scientific point of view.
link |
00:25:59.020
It's a failure.
link |
00:26:00.340
It shows that we're not discovering
link |
00:26:03.460
the nature of the system at all,
link |
00:26:05.780
because it does just as well or even better
link |
00:26:07.700
on things that violate the structure of the system.
link |
00:26:10.820
And it goes on from there.
link |
00:26:12.660
It's not an argument against doing it.
link |
00:26:14.740
It is useful to have devices like this.
link |
00:26:17.140
So, yes, neural networks are kind of approximators that... look, there are echoes of the behaviorism debates, right? Behaviorism.

More than echoes. Many of the people in deep learning say they've been vindicated. Terry Sejnowski, for example, in his recent book, says this vindicates Skinnerian behaviorism. It doesn't have anything to do with it.

Yes, but I think there's something actually fundamentally different when the data set is huge. But your point is extremely well taken. Still, do you think we can learn to approximate that interesting, complex structure of language with neural networks in a way that will somehow help us understand the science?

It's possible.
link |
00:27:04.460
I mean, you find patterns that you hadn't noticed,
link |
00:27:07.220
let's say, could be.
link |
00:27:09.700
In fact, it's very much like a kind of linguistics
link |
00:27:13.580
that's done, what's called corpus linguistics.
link |
00:27:18.060
When you, suppose you have some language
link |
00:27:21.060
where all the speakers have died out,
link |
00:27:23.420
but you have records.
link |
00:27:25.100
So you just look at the records
link |
00:27:28.060
and see what you can figure out from that.
link |
00:27:30.580
It's much better than,
link |
00:27:31.900
it's much better to have actual speakers
link |
00:27:33.660
where you can do critical experiments.
link |
00:27:36.060
But if they're all dead, you can't do them.
link |
00:27:38.500
So you have to try to see what you can find out
link |
00:27:40.780
from just looking at the data that's around.
link |
00:27:43.860
You can learn things.
link |
00:27:45.020
Actually, paleoanthropology is very much like that.
link |
00:27:48.380
You can't do a critical experiment on
link |
00:27:51.220
what happened two million years ago.
link |
00:27:53.500
So you're kind of forced just to take what data's around
link |
00:27:56.540
and see what you can figure out from it.
link |
00:27:59.220
Okay, it's a serious study.
link |
00:28:01.420
So let me venture into another whole body of work
link |
00:28:05.580
and philosophical question.
link |
00:28:08.380
You've said that evil in society arises from institutions,
link |
00:28:13.060
not inherently from our nature.
link |
00:28:15.540
Do you think most human beings are good,
link |
00:28:17.780
they have good intent?
link |
00:28:19.580
Or do most have the capacity for intentional evil
link |
00:28:22.860
that depends on their upbringing,
link |
00:28:24.620
depends on their environment, on context?
link |
00:28:27.180
I wouldn't say that they don't arise from our nature.
link |
00:28:30.960
Anything we do arises from our nature.
link |
00:28:34.020
And the fact that we have certain institutions, not others,
link |
00:28:38.100
is one mode in which human nature has expressed itself.
link |
00:28:43.700
But as far as we know,
link |
00:28:45.420
human nature could yield many different kinds
link |
00:28:48.500
of institutions.
link |
00:28:50.220
The particular ones that have developed
link |
00:28:53.140
have to do with historical contingency,
link |
00:28:56.940
who conquered whom, and that sort of thing.
link |
00:29:00.560
They're not rooted in our nature
link |
00:29:03.840
in the sense that they're essential to our nature.
link |
00:29:06.740
So it's commonly argued that these days
link |
00:29:10.140
that something like market systems
link |
00:29:12.920
is just part of our nature.
link |
00:29:15.580
But we know from a huge amount of evidence
link |
00:29:18.660
that that's not true.
link |
00:29:19.500
There's all kinds of other structures.
link |
00:29:21.740
It's a particular fact of a moment of modern history.
link |
00:29:26.220
Others have argued that the roots of classical liberalism
link |
00:29:30.740
actually argue that what's sometimes called
link |
00:29:34.420
an instinct for freedom,
link |
00:29:36.460
the instinct to be free of domination
link |
00:29:39.920
by illegitimate authority is the core of our nature.
link |
00:29:43.660
That would be the opposite of this.
link |
00:29:45.620
And we don't know.
link |
00:29:47.500
We just know that human nature can accommodate both kinds.
link |
00:29:52.220
If you look back at your life,
link |
00:29:54.900
is there a moment in your intellectual life
link |
00:29:58.100
or life in general that jumps from memory
link |
00:30:00.180
that brought you happiness
link |
00:30:02.100
that you would love to relive again?
link |
00:30:05.100
Sure.
link |
00:30:06.460
Falling in love, having children.
link |
00:30:10.100
What about, so you have put forward into the world
link |
00:30:13.860
a lot of incredible ideas in linguistics,
link |
00:30:17.660
in cognitive science, in terms of ideas
link |
00:30:22.300
that just excited you when they first came to you,
link |
00:30:26.060
that you would love to relive those moments.
link |
00:30:28.940
Well, I mean, when you make a discovery
link |
00:30:32.140
about something that's exciting,
link |
00:30:34.100
like, say, even the observation of structure dependence
link |
00:30:40.500
and on from that, the explanation for it.
link |
00:30:44.440
But the major things just seem like common sense.
link |
00:30:49.460
So if you go back to take your question
link |
00:30:53.160
about external and internal language,
link |
00:30:55.820
you go back to, say, the 1950s,
link |
00:30:59.140
language was almost entirely regarded as an external object,
link |
00:31:03.900
something outside the mind.
link |
00:31:06.260
It just seemed obvious that that can't be true.
link |
00:31:10.700
Like I said, there's something about you
link |
00:31:13.220
that determines you're talking English,
link |
00:31:15.340
not Swahili or something.
link |
00:31:18.640
But that's not really a discovery.
link |
00:31:20.280
That's just an observation of what's transparent.
link |
00:31:24.100
You might say it's kind of like the 17th century,
link |
00:31:30.660
the beginnings of modern science, 17th century.
link |
00:31:33.460
They came from being willing to be puzzled
link |
00:31:37.980
about things that seemed obvious.
link |
00:31:40.400
So it seems obvious that a heavy ball of lead
link |
00:31:44.280
will fall faster than a light ball of lead.
link |
00:31:47.580
But Galileo was not impressed by the fact
link |
00:31:50.900
that it seemed obvious.
link |
00:31:52.680
So he wanted to know if it's true.
link |
00:31:54.900
He carried out experiments, actually thought experiments,
link |
00:31:59.140
never actually carried them out,
link |
00:32:01.180
which showed that it can't be true.
link |
00:32:04.460
And out of things like that, observations of that kind,
link |
00:32:11.600
why does a ball fall to the ground instead of rising,
link |
00:32:15.740
let's say, seems obvious, till you start thinking about it,
link |
00:32:20.060
because why does steam rise, let's say.
link |
00:32:23.900
And I think the beginnings of modern linguistics,
link |
00:32:27.260
roughly in the 50s, are kind of like that,
link |
00:32:30.060
just being willing to be puzzled about phenomena
link |
00:32:33.620
that looked, from some point of view, obvious.
link |
00:32:38.020
And for example, a kind of doctrine,
link |
00:32:41.340
almost official doctrine of structural linguistics
link |
00:32:44.960
in the 50s was that languages can differ
link |
00:32:49.620
from one another in arbitrary ways,
link |
00:32:52.660
and each one has to be studied on its own
link |
00:32:56.460
without any presuppositions.
link |
00:32:58.900
In fact, there were similar views among biologists
link |
00:33:02.380
about the nature of organisms, that each one's,
link |
00:33:05.880
they're so different when you look at them
link |
00:33:07.780
that they could be almost anything.
link |
00:33:10.980
Well, in both domains, it's been learned
link |
00:33:13.140
that that's very far from true.
link |
00:33:15.500
There are narrow constraints on what could be an organism
link |
00:33:18.860
or what could be a language.
link |
00:33:21.580
But these are, that's just the nature of inquiry.
link |
00:33:25.540
Inquiry. Science in general, yeah, inquiry.
link |
00:33:29.380
So one of the peculiar things about us human beings
link |
00:33:33.420
is our mortality.
link |
00:33:35.300
Ernest Becker explored it in general.
link |
00:33:38.220
Do you ponder the value of mortality?
link |
00:33:40.460
Do you think about your own mortality?
link |
00:33:43.460
I used to when I was about 12 years old.
link |
00:33:48.100
I wondered, I didn't care much about my own mortality,
link |
00:33:51.940
but I was worried about the fact that
link |
00:33:54.540
if my consciousness disappeared,
link |
00:33:57.700
would the entire universe disappear?
link |
00:34:00.300
That was frightening.
link |
00:34:01.580
Did you ever find an answer to that question?
link |
00:34:03.740
No, nobody's ever found an answer,
link |
00:34:05.900
but I stopped being bothered by it.
link |
00:34:07.900
It's kind of like Woody Allen in one of his films,
link |
00:34:10.380
you may recall, he starts, he goes to a shrink
link |
00:34:14.100
when he's a child and the shrink asks him,
link |
00:34:16.460
what's your problem?
link |
00:34:17.500
He says, I just learned that the universe is expanding.
link |
00:34:21.700
I can't handle that.
link |
00:34:22.740
And then another absurd question is,
link |
00:34:27.220
what do you think is the meaning of our existence here,
link |
00:34:32.620
our life on Earth, our brief little moment in time?
link |
00:34:35.980
That's something we answer by our own activities.
link |
00:34:40.620
There's no general answer.
link |
00:34:42.340
We determine what the meaning of it is.
link |
00:34:46.620
The actions determine the meaning.
link |
00:34:48.700
Meaning in the sense of significance,
link |
00:34:50.580
not meaning in the sense that chair means this.
link |
00:34:55.420
But the significance of your life is something you create.
link |
00:35:01.060
No, thank you so much for talking to me today.
link |
00:35:02.540
It was a huge honor.
link |
00:35:04.140
Thank you so much.
link |
00:35:05.940
Thanks for listening to this conversation with Noam Chomsky
link |
00:35:08.660
and thank you to our presenting sponsor, Cash App.
link |
00:35:11.980
Download it, use code LexPodcast, you'll get $10
link |
00:35:16.180
and $10 will go to FIRST, a STEM education nonprofit
link |
00:35:19.660
that inspires hundreds of thousands of young minds
link |
00:35:22.340
to learn and to dream of engineering our future.
link |
00:35:25.980
If you enjoy this podcast, subscribe on YouTube,
link |
00:35:28.620
give it five stars on Apple Podcast, support on Patreon,
link |
00:35:32.060
or connect with me on Twitter.
link |
00:35:34.260
Thank you for listening and hope to see you next time.