Steven Pinker: AI in the Age of Reason | Lex Fridman Podcast #3


link |
00:00:00.000
You've studied the human mind, cognition, language, vision, evolution, psychology,
link |
00:00:05.280
from child to adult, from the level of individual to the level of our entire civilization.
link |
00:00:11.680
So I feel like I can start with a simple multiple choice question.
link |
00:00:16.240
What is the meaning of life? Is it A. to attain knowledge as Plato said,
link |
00:00:22.400
B. to attain power as Nietzsche said, C. to escape death as Ernest Becker said,
link |
00:00:27.920
D. to propagate our genes as Darwin and others have said,
link |
00:00:33.200
E. there is no meaning as the nihilists have said,
link |
00:00:37.520
F. knowing the meaning of life is beyond our cognitive capabilities as Steven Pinker said,
link |
00:00:42.960
based on my interpretation 20 years ago, and G. none of the above.
link |
00:00:48.160
I'd say A. comes closest, but I would amend that to
link |
00:00:51.200
attaining not only knowledge but fulfillment more generally, that is, life, health, stimulation,
link |
00:01:00.640
access to the living cultural and social world.
link |
00:01:06.080
Now this is our meaning of life. It's not the meaning of life if you were to ask our genes.
link |
00:01:12.160
Their meaning is to propagate copies of themselves, but that is distinct from the
link |
00:01:17.680
meaning that the brain that they lead to sets for itself.
link |
00:01:22.400
So to you knowledge is a small subset or a large subset?
link |
00:01:27.840
It's a large subset, but it's not the entirety of human striving because we also want to
link |
00:01:35.200
interact with people. We want to experience beauty. We want to experience the richness
link |
00:01:39.600
of the natural world, but understanding what makes the universe tick is way up there.
link |
00:01:47.840
For some of us more than others, certainly for me that's one of the top five.
link |
00:01:54.560
So is that a fundamental aspect? Are you just describing your own preference or is this a
link |
00:02:00.080
fundamental aspect of human nature, to seek knowledge? In your latest book you talk about
link |
00:02:05.920
the power, the usefulness of rationality and reason and so on. Is that a fundamental
link |
00:02:11.760
nature of human beings or is it something we should just strive for?
link |
00:02:17.040
Both. We're capable of striving for it because it is one of the things that make us what we are,
link |
00:02:23.920
Homo sapiens, wise man. We are unusual among animals in the degree to which we acquire
link |
00:02:32.320
knowledge and use it to survive. We make tools. We strike agreements via language. We extract
link |
00:02:41.600
poisons. We predict the behavior of animals. We try to get at the workings of plants.
link |
00:02:47.760
And when I say we, I don't just mean we in the modern West, but we as a species everywhere,
link |
00:02:52.640
which is how we've managed to occupy every niche on the planet, how we've managed to drive other
link |
00:02:58.160
animals to extinction. And the refinement of reason in pursuit of human wellbeing, of health,
link |
00:03:06.480
happiness, social richness, cultural richness is our main challenge in the present. That is
link |
00:03:14.480
using our intellect, using our knowledge to figure out how the world works, how we work
link |
00:03:19.280
in order to make discoveries and strike agreements that make us all better off in the long run.
link |
00:03:25.200
Right. And you do that almost undeniably and in a data driven way in your recent book,
link |
00:03:31.840
but I'd like to focus on the artificial intelligence aspect of things and not just
link |
00:03:36.400
artificial intelligence, but natural intelligence too. So 20 years ago, in your book
link |
00:03:41.920
How the Mind Works, you conjectured (again, am I right to interpret things? You can correct me
link |
00:03:49.520
if I'm wrong) that human thought in the brain may be a result of a
link |
00:03:54.400
massive network of highly interconnected neurons. So from this interconnectivity emerges thought
link |
00:04:01.280
compared to artificial neural networks, which we use for machine learning today,
link |
00:04:06.160
is there something fundamentally more complex, mysterious, even magical about the biological
link |
00:04:12.640
neural networks versus the ones we've been starting to use over the past 60 years and
link |
00:04:19.440
have come to success in the past 10? There is something a little bit mysterious
link |
00:04:24.720
about the human neural networks, which is that each one of us who is a neural network knows that
link |
00:04:31.600
we ourselves are conscious. Conscious not in the sense of registering our surroundings or even
link |
00:04:36.960
registering our internal state, but in having subjective first person, present tense experience.
link |
00:04:42.720
That is when I see red, it's not just different from green, but there's a redness to it that I
link |
00:04:49.840
feel. Whether an artificial system would experience that or not, I don't know and I don't think I can
link |
00:04:54.960
know. That's why it's mysterious. If we had a perfectly lifelike robot that was behaviorally
link |
00:05:00.480
indistinguishable from a human, would we attribute consciousness to it or ought we to attribute
link |
00:05:06.800
consciousness to it? And that's something that it's very hard to know. But putting that aside,
link |
00:05:12.160
putting aside that largely philosophical question, the question is, is there some difference between
link |
00:05:19.040
the human neural network and the ones that we're building in artificial intelligence that will mean
link |
00:05:23.760
that we're, on the current trajectory, not going to reach the point where we've got a lifelike
link |
00:05:30.000
robot indistinguishable from a human because the way their so called neural networks are organized
link |
00:05:35.120
is different from the way ours are organized. I think there's overlap, but I think there are
link |
00:05:39.760
some big differences. Current neural networks, current so called deep learning systems, are in
link |
00:05:48.800
reality not all that deep. That is, they are very good at extracting high order statistical
link |
00:05:53.840
regularities, but most of the systems don't have a semantic level, a level of actual understanding
link |
00:06:00.640
of who did what to whom, why, where, how things work, what causes what else. Do you think that
link |
00:06:07.520
kind of thing can emerge as it does? So artificial neural networks are much smaller in the number of
link |
00:06:11.840
connections and so on than the current human biological networks, but do you think sort of
link |
00:06:18.320
to go to consciousness or to go to this higher level semantic reasoning about things, do you
link |
00:06:22.800
think that can emerge with just a larger network, with a more richly, weirdly interconnected network?
link |
00:06:29.760
Let's separate out consciousness, because it's not clear that consciousness is even a matter of complexity.
link |
00:06:33.280
A really weird one.
link |
00:06:34.320
Yeah, you could sensibly ask the question of whether shrimp are conscious, for example,
link |
00:06:38.400
they're not terribly complex, but maybe they feel pain. So let's just put that part of it aside.
link |
00:06:44.400
But I think sheer size of a neural network is not enough to give it structure and knowledge,
link |
00:06:52.480
but if it's suitably engineered, then why not? That is, we're neural networks, and natural selection
link |
00:06:59.520
did a kind of equivalent of engineering of our brains. So I don't think there's anything mysterious
link |
00:07:04.880
in the sense that no system made out of silicon could ever do what a human brain can do. I think
link |
00:07:11.760
it's possible in principle. Whether it'll ever happen depends not only on how clever we are
link |
00:07:17.360
in engineering these systems, but whether we even want to, whether that's even a sensible goal.
link |
00:07:22.240
That is, you can ask the question, is there any locomotion system that is as good as a human?
link |
00:07:29.920
Well, we kind of want to do better than a human ultimately in terms of legged locomotion.
link |
00:07:35.520
There's no reason that humans should be our benchmark. There are tools that might be better
link |
00:07:39.760
in some ways. It may be that we can't duplicate a natural system because at some point it's so much
link |
00:07:49.760
cheaper to use a natural system that we're not going to invest more brainpower and resources.
link |
00:07:54.560
So for example, we don't really have an exact substitute for wood. We still build houses out
link |
00:08:00.480
of wood. We still build furniture out of wood. We like the look. We like the feel. It has certain
link |
00:08:05.040
properties that synthetics don't. It's not that there's anything magical or mysterious about wood.
link |
00:08:11.120
It's just that the extra steps of duplicating everything about wood is something we just haven't
link |
00:08:17.600
bothered because we have wood. Likewise, say cotton. I'm wearing cotton clothing now. It feels
link |
00:08:21.600
much better than polyester. It's not that cotton has something magic in it. It's not that we couldn't
link |
00:08:29.760
ever synthesize something exactly like cotton, but at some point it's just not worth it. We've got
link |
00:08:35.920
cotton. Likewise, in the case of human intelligence, the goal of making an artificial system that is
link |
00:08:42.000
exactly like the human brain is a goal that probably no one is going to pursue to the bitter
link |
00:08:47.840
end, I suspect, because if you want tools that do things better than humans, you're not going to
link |
00:08:53.120
care whether it does something like humans. So for example, diagnosing cancer or predicting the
link |
00:08:58.240
weather, why set humans as your benchmark? But in general, I suspect you also believe
link |
00:09:05.840
that even if the human should not be the benchmark and we don't want to imitate humans in the
link |
00:09:10.480
system, there's a lot to be learned about how to create an artificial intelligence system by
link |
00:09:15.520
studying the human. Yeah, I think that's right. In the same way that to build flying machines,
link |
00:09:22.480
we want to understand the laws of aerodynamics, including as they apply to birds, but not mimic the birds,
link |
00:09:27.760
but it's the same laws. You have a view on AI, artificial intelligence, and safety
link |
00:09:35.440
that, from my perspective, is refreshingly rational or perhaps more importantly, has elements
link |
00:09:47.040
of positivity to it, which I think can be inspiring and empowering as opposed to paralyzing.
link |
00:09:53.600
For many people, including AI researchers, the eventual existential threat of AI is obvious,
link |
00:09:59.840
not only possible, but obvious. And for many others, including AI researchers, the threat
link |
00:10:05.840
is not obvious. So Elon Musk is famously in the highly concerned about AI camp, saying things like
link |
00:10:14.640
AI is far more dangerous than nuclear weapons, and that AI will likely destroy human civilization.
link |
00:10:21.200
So in February, you said that if Elon was really serious about the threat
link |
00:10:29.360
of AI, he would stop building self driving cars, which he's doing very successfully as part of Tesla.
link |
00:10:35.760
Then he said, wow, if even Pinker doesn't understand the difference between narrow AI,
link |
00:10:40.800
like a car and general AI, when the latter literally has a million times more compute power
link |
00:10:47.280
and an open ended utility function, humanity is in deep trouble. So first, what did you mean by
link |
00:10:54.160
the statement about Elon Musk should stop building self driving cars if he's deeply concerned?
link |
00:11:00.080
Not the last time that Elon Musk has fired off an intemperate tweet.
link |
00:11:04.320
Well, we live in a world where Twitter has power.
link |
00:11:07.680
Yes. Yeah, I think there are two kinds of existential threat that have been discussed
link |
00:11:16.640
in connection with artificial intelligence, and I think that they're both incoherent.
link |
00:11:20.480
One of them is a vague fear of AI takeover, that just as we subjugated animals and less technologically
link |
00:11:29.520
advanced peoples, so if we build something that's more advanced than us, it will inevitably turn us
link |
00:11:34.640
into pets or slaves or domesticated animal equivalents. I think this confuses intelligence
link |
00:11:42.320
with a will to power, that it so happens that in the intelligence system we are most familiar with,
link |
00:11:49.200
namely homo sapiens, we are products of natural selection, which is a competitive process,
link |
00:11:54.160
and so bundled together with our problem solving capacity are a number of nasty traits like
link |
00:12:00.320
dominance and exploitation and maximization of power and glory and resources and influence.
link |
00:12:08.720
There's no reason to think that sheer problem solving capability will set that as one of its
link |
00:12:13.120
goals. Its goals will be whatever we set its goals as, and as long as someone isn't building a
link |
00:12:18.720
megalomaniacal artificial intelligence, then there's no reason to think that it would naturally
link |
00:12:24.320
evolve in that direction. Now, you might say, well, what if we gave it the goal of maximizing
link |
00:12:28.960
its own power source? That's a pretty stupid goal to give an autonomous system. You don't give it
link |
00:12:34.880
that goal. I mean, that's just self evidently idiotic. So if you look at the history of the
link |
00:12:40.720
world, there's been a lot of opportunities where engineers could instill in a system
link |
00:12:45.120
destructive power and they chose not to, because that's the natural process of engineering.
link |
00:12:49.440
Well, except for weapons. I mean, if you're building a weapon, its goal is to destroy people,
link |
00:12:53.680
and so I think there are good reasons to not build certain kinds of weapons. I think building
link |
00:12:58.560
nuclear weapons was a massive mistake. You do. So maybe pause on that because that is one of
link |
00:13:06.480
the serious threats. Do you think that it was a mistake in a sense that it should have been
link |
00:13:12.800
stopped early on? Or do you think it's just an unfortunate event of invention that this was
link |
00:13:19.200
invented? I guess the question is, do you think it was possible to stop? It's hard to rewind the
link |
00:13:23.280
clock because of course it was invented in the context of World War II and the fear that the
link |
00:13:28.320
Nazis might develop one first. Then once it was initiated for that reason, it was hard to turn
link |
00:13:35.120
off, especially since winning the war against the Japanese and the Nazis was such an overwhelming
link |
00:13:42.080
goal of every responsible person that there's just nothing that people wouldn't have done then
link |
00:13:47.120
to ensure victory. It's quite possible if World War II hadn't happened that nuclear weapons
link |
00:13:52.560
wouldn't have been invented. We can't know, but I don't think it was by any means a necessity,
link |
00:13:57.440
any more than some of the other weapon systems that were envisioned but never implemented,
link |
00:14:02.720
like planes that would disperse poison gas over cities like crop dusters or systems to try to
link |
00:14:10.560
create earthquakes and tsunamis in enemy countries, to weaponize the weather,
link |
00:14:16.000
weaponize solar flares, all kinds of crazy schemes that we thought better of.
link |
00:14:21.120
I think analogies between nuclear weapons and artificial intelligence are fundamentally
link |
00:14:25.840
misguided because the whole point of nuclear weapons is to destroy things. The point of
link |
00:14:30.560
artificial intelligence is not to destroy things. So the analogy is misleading.
link |
00:14:36.080
So there are two kinds of artificial intelligence threats you mentioned. The first one, I guess, is the highly
link |
00:14:39.920
intelligent, power hungry one.
link |
00:14:42.080
Yeah, it's a system that we design ourselves where we give it the goals. Goals are external to
link |
00:14:46.800
the means to attain the goals. If we don't design an artificially intelligent system to
link |
00:14:55.200
maximize dominance, then it won't maximize dominance. It's just that we're so familiar
link |
00:15:00.800
with homo sapiens where these two traits come bundled together, particularly in men,
link |
00:15:06.320
that we are apt to confuse high intelligence with a will to power, but that's just an error.
link |
00:15:15.520
The other fear is that we'll be collateral damage, that we'll give artificial intelligence a goal
link |
00:15:21.440
like make paper clips and it will pursue that goal so brilliantly that before we can stop it,
link |
00:15:27.440
it turns us into paper clips. We'll give it the goal of curing cancer and it will turn us into
link |
00:15:32.800
guinea pigs for lethal experiments or give it the goal of world peace and its conception of world
link |
00:15:38.720
peace is no people, therefore no fighting and so it will kill us all. Now I think these are utterly
link |
00:15:43.680
fanciful. In fact, I think they're actually self defeating. They first of all assume that we're
link |
00:15:49.040
going to be so brilliant that we can design an artificial intelligence that can cure cancer,
link |
00:15:53.600
but so stupid that we don't specify what we mean by curing cancer in enough detail that it won't
link |
00:15:59.600
kill us in the process. And it assumes that the system will be so smart that it can cure cancer,
link |
00:16:06.240
but so idiotic that it can't figure out that what we mean by curing cancer is not killing everyone.
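To make the specification point concrete, here is a minimal toy sketch in Python, not anything discussed in the conversation, of an underspecified objective versus the same objective with the obvious constraint written down. The objective functions, states, and numbers are hypothetical illustrations.

```python
# Toy sketch of the point above: an optimizer pursues exactly the
# objective we write down, no more and no less. All states, names,
# and thresholds here are made-up illustrations.

def naive_objective(state):
    # "Cure cancer" specified only as: minimize tumor cells.
    return -state["tumor_cells"]

def specified_objective(state):
    # The same goal with the obvious constraint made explicit:
    # minimize tumor cells while keeping the patient alive.
    if state["healthy_cells"] < 1_000:  # patient would not survive
        return float("-inf")
    return -state["tumor_cells"]

# Two candidate "treatments" an optimizer might choose between.
candidates = [
    {"name": "kill everything", "tumor_cells": 0, "healthy_cells": 0},
    {"name": "targeted therapy", "tumor_cells": 50, "healthy_cells": 900_000},
]

for objective in (naive_objective, specified_objective):
    best = max(candidates, key=objective)
    print(objective.__name__, "->", best["name"])
# naive_objective -> kill everything
# specified_objective -> targeted therapy
```

Writing down the constraint is exactly the kind of detail that a designer smart enough to build such a system would not omit, which is the asymmetry being pointed out here.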
link |
00:16:12.880
I think that the collateral damage scenario, the value alignment problem is also based on
link |
00:16:18.320
a misconception. So one of the challenges, of course, is that we don't know how to build either system
link |
00:16:23.200
currently, or whether we're even close to knowing. Of course, those things can change overnight,
link |
00:16:27.440
but at this time, theorizing about it is very challenging in either direction. So that's
link |
00:16:33.840
probably at the core of the problem: without the ability to reason about the real engineering
link |
00:16:39.600
at hand, your imagination runs away with things. Exactly. But let me sort of ask,
link |
00:16:45.120
what do you think was the motivation, the thought process of Elon Musk? I build autonomous vehicles,
link |
00:16:52.320
I study autonomous vehicles, I study Tesla autopilot. I think it is one of the greatest
link |
00:16:57.680
current large scale applications of artificial intelligence in the world. It has potentially a
link |
00:17:04.400
very positive impact on society. So how does a person who's creating this very good quote unquote
link |
00:17:10.880
narrow AI system also seem to be so concerned about this other general AI? What do you think
link |
00:17:19.280
is the motivation there? What do you think is the thing? Well, you probably have to ask him,
link |
00:17:23.040
but he is notoriously flamboyant and impulsive, as we have just seen,
link |
00:17:31.520
to the detriment of his own goals for the health of the company. So I don't know what's going on
link |
00:17:37.360
in his mind. You probably have to ask him. But I don't think the distinction
link |
00:17:42.560
between special purpose AI and so called general AI is relevant. In the same way that special
link |
00:17:50.240
purpose AI is not going to do anything conceivable in order to attain a goal, all engineering systems
link |
00:17:57.760
are designed to trade off across multiple goals. When we build cars in the first place,
link |
00:18:02.800
we didn't forget to install brakes because the goal of a car is to go fast. It occurred to people,
link |
00:18:08.880
yes, you want it to go fast, but not always. So you would build in brakes too. Likewise,
link |
00:18:13.920
if a car is going to be autonomous and we program it to take the shortest route to the airport,
link |
00:18:20.320
it's not going to take the diagonal and mow down people and trees and fences because that's the
link |
00:18:24.560
shortest route. That's not what we mean by the shortest route when we program it. And that's just
link |
00:18:29.120
what an intelligent system is by definition: it takes into account multiple constraints.
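As an illustration of that definition, here is a minimal sketch in Python of a route planner whose cost function weighs more than distance; the grid, the penalty weight, and the plan_route helper are illustrative assumptions, not anyone's actual autopilot code.

```python
# Minimal sketch: a planner that, like any engineered system, trades
# off multiple objectives instead of blindly minimizing one. Cells
# marked 1 stand in for people, trees, and fences; the penalty makes
# the geometric shortcut lose to any detour that stays on the road.
import heapq

def plan_route(grid, start, goal, unsafe_penalty=1000.0):
    """Dijkstra search minimizing step count plus safety penalties."""
    rows, cols = len(grid), len(grid[0])
    best = {start: 0.0}
    frontier = [(0.0, start, [start])]
    while frontier:
        cost, node, path = heapq.heappop(frontier)
        if node == goal:
            return cost, path
        r, c = node
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if not (0 <= nr < rows and 0 <= nc < cols):
                continue
            step = 1.0 + (unsafe_penalty if grid[nr][nc] else 0.0)
            new_cost = cost + step
            if new_cost < best.get((nr, nc), float("inf")):
                best[(nr, nc)] = new_cost
                heapq.heappush(frontier, (new_cost, (nr, nc), path + [(nr, nc)]))
    return float("inf"), []

# The straight-line path cuts through blocked cells, but the combined
# objective routes around them.
grid = [[0, 1, 0],
        [0, 1, 0],
        [0, 0, 0]]
print(plan_route(grid, (0, 0), (0, 2)))
# (6.0, [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2), (1, 2), (0, 2)])
```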
link |
00:18:36.000
The same is true, in fact, even more true of so called general intelligence. That is,
link |
00:18:41.280
if it's genuinely intelligent, it's not going to pursue some goal single-mindedly, omitting every
link |
00:18:48.080
other consideration and collateral effect. That's not artificial general intelligence. That's
link |
00:18:54.400
artificial stupidity. I agree with you, by the way, on the promise of autonomous vehicles for
link |
00:19:01.120
improving human welfare. I think it's spectacular. And I'm surprised at how little press coverage
link |
00:19:06.160
notes that in the United States alone, something like 40,000 people die every year on the highways,
link |
00:19:11.680
vastly more than are killed by terrorists. And we spent a trillion dollars on a war to combat
link |
00:19:18.240
deaths by terrorism, about half a dozen a year. Whereas year in, year out, 40,000 people are
link |
00:19:24.160
massacred on the highways, which could be brought down to very close to zero. So I'm with you on
link |
00:19:29.920
the humanitarian benefit. Let me just mention that as a person who's building these cars,
link |
00:19:34.800
it is a little bit offensive to me to say that engineers would be clueless enough not to engineer
link |
00:19:39.360
safety into systems. I often stay up at night thinking about those 40,000 people that are dying.
link |
00:19:45.280
And everything I try to engineer is to save those people's lives. So every new invention that
link |
00:19:50.800
I'm super excited about, in all the deep learning literature and CVPR conferences and NIPS, everything
link |
00:19:59.280
I'm super excited about is grounded in making things safe and helping people. So I just don't see how
link |
00:20:08.240
that trajectory can all of a sudden slip into a situation where intelligence will be highly
link |
00:20:13.280
negative. You and I certainly agree on that. And I think that's only the beginning of the
link |
00:20:17.920
potential humanitarian benefits of artificial intelligence. There's been enormous attention to
link |
00:20:24.320
what are we going to do with the people whose jobs are made obsolete by artificial intelligence,
link |
00:20:28.720
but very little attention given to the fact that the jobs that are going to be made obsolete are
link |
00:20:32.560
horrible jobs. The fact that people aren't going to be picking crops and making beds and driving
link |
00:20:38.960
trucks and mining coal, these are soul deadening jobs. And we have a whole literature sympathizing
link |
00:20:45.520
with the people stuck in these menial, mind deadening, dangerous jobs. If we can eliminate
link |
00:20:53.040
them, this is a fantastic boon to humanity. Now granted, you solve one problem and there's another
link |
00:20:58.480
one, namely, how do we get these people a decent income? But if we're smart enough to invent machines
link |
00:21:05.840
that can make beds and put away dishes and handle hospital patients, I think we're smart enough to
link |
00:21:12.400
figure out how to redistribute income to apportion some of the vast economic savings to the human
link |
00:21:19.200
beings who will no longer be needed to make beds. Okay. Sam Harris says that it's obvious that
link |
00:21:26.000
eventually AI will be an existential risk. He's one of the people who says it's obvious.
link |
00:21:31.760
We don't know when, the claim goes, but eventually it's obvious. And because we don't know when,
link |
00:21:38.640
we should worry about it now. This is a very interesting argument in my eyes. So how do we
link |
00:21:45.680
think about timescale? How do we think about existential threats when we know
link |
00:21:51.120
so little about this particular threat, unlike nuclear weapons perhaps? It
link |
00:21:58.160
could happen tomorrow, right? But very likely it won't. Very likely it's a hundred years away.
link |
00:22:04.560
So how do we ignore it? How do we talk about it? Do we worry about it? How do we think about those?
link |
00:22:12.480
What is it?
link |
00:22:13.840
A threat that we can imagine. It's within the limits of our imagination,
link |
00:22:18.560
but not within the limits of our understanding to accurately predict it.
link |
00:22:24.320
But what is the it that we're afraid of?
link |
00:22:26.880
Sorry. AI being the existential threat.
link |
00:22:30.320
AI. How? Like enslaving us or turning us into paperclips?
link |
00:22:35.120
I think the most compelling from the Sam Harris perspective would be the paperclip situation.
link |
00:22:39.520
Yeah. I mean, I just think it's totally fanciful. That is, don't build a system like that.
link |
00:22:43.440
First of all, the code of engineering is that you don't implement a system with
link |
00:22:50.080
massive control before testing it. Now, perhaps the culture of engineering will radically change.
link |
00:22:55.040
Then I would worry, but I don't see any signs that engineers will suddenly do idiotic things,
link |
00:23:00.320
like putting an electric power plant in control of a system that they haven't tested first.
link |
00:23:07.360
Also, all of these scenarios not only imagine an almost magically powered intelligence,
link |
00:23:15.600
including things like cure cancer, which is probably an incoherent goal because there's
link |
00:23:20.240
so many different kinds of cancer or bring about world peace. I mean, how do you even specify that
link |
00:23:25.600
as a goal? But the scenarios also imagine some degree of control of every molecule in the
link |
00:23:31.360
universe, which not only is itself unlikely, but we would not start to connect these systems to
link |
00:23:39.120
infrastructure without testing as we would any kind of engineering system.
link |
00:23:45.680
Now, maybe some engineers will be irresponsible, and we need legal and regulatory
link |
00:23:53.920
responsibility implemented so that engineers don't do things that are stupid by their own standards.
link |
00:24:00.640
But I've never seen enough of a plausible scenario of existential threat to devote large
link |
00:24:08.560
amounts of brainpower to forestalling it. So you believe in the power, en
link |
00:24:14.240
masse, of the engineering of reason, as you argue in your latest book, on reason and science, to sort of
link |
00:24:20.400
be the very thing that guides the development of new technology so it's safe and also keeps us safe.
link |
00:24:28.000
You know, granted the same culture of safety that currently is part of the engineering mindset for
link |
00:24:34.560
airplanes, for example. So yeah, I don't think that that should be thrown out the window and
link |
00:24:40.480
that untested all powerful systems should be suddenly implemented, but there's no reason to
link |
00:24:45.520
think they will be. And in fact, if you look at the progress of artificial intelligence,
link |
00:24:50.400
you know, it's been impressive, especially in the last 10 years or so, but the idea that suddenly
link |
00:24:54.160
there'll be a step function that all of a sudden before we know it, it will be all powerful,
link |
00:25:00.080
that there'll be some kind of recursive self improvement, some kind of "foom," is also fanciful.
link |
00:25:06.800
Certainly not with the technology that now impresses us, such as deep learning,
link |
00:25:13.200
where you train something on hundreds of thousands or millions of examples.
link |
00:25:18.320
There are not hundreds of thousands of problems of which curing cancer is a typical example.
link |
00:25:26.000
And so the kinds of techniques that have allowed AI to improve in the last five years are not the
link |
00:25:31.520
kind that are going to lead to this fantasy of exponential sudden self improvement. I think it's
link |
00:25:40.320
kind of magical thinking. It's not based on our understanding of how AI actually works.
link |
00:25:45.440
Now give me a chance here. So you said fanciful, magical thinking. In his TED talk,
link |
00:25:51.040
Sam Harris says that thinking about AI killing all human civilization is somehow fun,
link |
00:25:55.760
intellectually. Now I have to say, as a scientist and engineer, I don't find it fun,
link |
00:26:00.400
but when I'm having beer with my non AI friends, there is indeed something fun and appealing about
link |
00:26:08.560
it. Like talking about an episode of Black Mirror, or considering the scenario where we were
link |
00:26:14.640
just told a large meteor is headed towards Earth, something like this. And can you
link |
00:26:20.640
relate to this sense of fun? And do you understand the psychology of it?
link |
00:26:24.560
Yes. Good question. I personally don't find it fun. I find it kind of actually a waste of time
link |
00:26:32.800
because there are genuine threats that we ought to be thinking about like pandemics, like cyber
link |
00:26:39.760
security vulnerabilities, like the possibility of nuclear war and certainly climate change.
link |
00:26:46.160
You know, this is enough to fill many conversations. And I think Sam did put his
link |
00:26:54.320
finger on something, namely that there is a community, sometimes called the rationality
link |
00:27:00.240
community, that delights in using its brainpower to come up with scenarios that would not occur
link |
00:27:07.280
to mere mortals, to less cerebral people. So there is a kind of intellectual thrill in finding new
link |
00:27:14.560
things to worry about that no one has worried about yet. I actually think, though, that
link |
00:27:19.840
not only is it a kind of fun that doesn't give me particular pleasure, but I think there can be a
link |
00:27:25.440
pernicious side to it, namely that you overcome people with such dread, such fatalism, that there
link |
00:27:32.400
are so many ways to die, to annihilate our civilization, that we may as well enjoy life
link |
00:27:39.200
while we can. There's nothing we can do about it. If climate change doesn't do us in, then runaway
link |
00:27:42.880
robots will, so let's enjoy ourselves now. Instead, we've got to prioritize. We have to look at threats that
link |
00:27:52.480
are close to certainty, such as climate change, and distinguish those from ones that are merely
link |
00:27:58.160
imaginable but with infinitesimal probabilities. And we have to take into account people's worry
link |
00:28:05.280
budget. You can't worry about everything. And if you sow dread and fear and terror and fatalism,
link |
00:28:12.480
it can lead to a kind of numbness. Well, these problems are overwhelming, and the engineers are
link |
00:28:17.280
just going to kill us all. So let's either destroy the entire infrastructure of science, technology,
link |
00:28:26.640
or let's just enjoy life while we can. So there's a certain line of worry; I'm worried about
link |
00:28:32.560
a lot of things in engineering. There's a certain line of worry that, when you
link |
00:28:36.800
cross it, becomes paralyzing fear as opposed to productive fear. And that's kind of what
link |
00:28:44.240
you're highlighting. Exactly right. And we know that human effort is not
link |
00:28:50.160
well calibrated against risk, because a basic tenet of cognitive psychology is that
link |
00:28:58.160
perception of risk and hence perception of fear is driven by imaginability, not by data. And so we
link |
00:29:05.680
misallocate vast amounts of resources to avoiding terrorism, which kills on average about six
link |
00:29:11.360
Americans a year, with the one exception of 9/11. We invade countries, we invent entire new departments
link |
00:29:18.640
of government with massive, massive expenditure of resources and lives to defend ourselves against
link |
00:29:25.120
a trivial risk. Whereas guaranteed risks, one of which you mentioned, traffic fatalities, and even
link |
00:29:34.720
risks that are not here, but are plausible enough to worry about like pandemics, like nuclear war,
link |
00:29:45.920
receive far too little attention. In presidential debates, there's no discussion of how to minimize
link |
00:29:51.440
the risk of nuclear war. Lots of discussion of terrorism, for example. And so I think it's
link |
00:29:58.240
essential to calibrate our budget of fear, worry, concern, planning to the actual probability of
link |
00:30:08.080
harm. Yep. So let me ask this question. So speaking of imaginability, you said it's important to think
link |
00:30:15.840
about reason and one of my favorite people who likes to dip into the outskirts of reason through
link |
00:30:23.840
fascinating exploration of his imagination is Joe Rogan. Oh yes. He used to
link |
00:30:32.000
believe a lot of conspiracies, and through reason has stripped away a lot of those beliefs.
link |
00:30:37.280
So it's fascinating actually to watch him, through rationality, kind of throw away the ideas
link |
00:30:43.120
of Bigfoot and 9/11 conspiracies. I'm not sure exactly. Chemtrails. I don't know what he believes in. Yes.
link |
00:30:50.320
Okay, but he no longer believes in them. No, that's right. He's become a real force for good.
link |
00:30:55.520
Yep. So you were on the Joe Rogan podcast in February and had a fascinating conversation,
link |
00:31:00.240
but as far as I remember, didn't talk much about artificial intelligence. I will be on his podcast
link |
00:31:05.920
in a couple of weeks. Joe is very much concerned about the existential threat of AI. I'm not sure if
link |
00:31:11.520
you are, but this is why I was hoping that you would get into that topic. And in this way,
link |
00:31:17.040
he represents quite a lot of people who look at the topic of AI from a 10,000 foot level.
link |
00:31:22.480
So as an exercise in communication, you said it's important to be rational and reason
link |
00:31:29.040
about these things. Let me ask, if you were to coach me as an AI researcher about how to speak
link |
00:31:34.080
to Joe and the general public about AI, what would you advise? Well, the short answer would be to
link |
00:31:40.640
read the sections that I wrote in Enlightenment Now about AI, but a longer answer would be I
link |
00:31:45.200
think to emphasize, and I think you're very well positioned as an engineer to remind people about
link |
00:31:50.480
the culture of engineering, that it really is safety oriented. In another discussion in
link |
00:31:57.040
Enlightenment Now, I plot rates of accidental death from various causes: plane crashes, car
link |
00:32:04.560
crashes, occupational accidents, even death by lightning strikes. And they all plummet because
link |
00:32:12.640
the culture of engineering is how do you squeeze out the lethal risks, death by fire, death by
link |
00:32:18.320
drowning, death by asphyxiation, all of them drastically declined because of advances in
link |
00:32:24.320
engineering that, I've got to say, I did not appreciate until I saw those graphs. And it is because
link |
00:32:29.840
of exactly people like you, who stay up at night thinking, oh my God, is what I'm inventing likely
link |
00:32:37.520
to hurt people, and who deploy ingenuity to prevent that from happening. Now, I'm not an engineer,
link |
00:32:43.760
although I spent 22 years at MIT, so I know something about the culture of engineering.
link |
00:32:48.240
My understanding is that this is the way you think if you're an engineer. And it's essential
link |
00:32:53.680
that that culture not be suddenly switched off when it comes to artificial intelligence. So,
link |
00:32:59.200
I mean, that could be a problem, but is there any reason to think it would be switched off?
link |
00:33:02.560
I don't think so. And for one, there are not enough engineers speaking up for this
link |
00:33:06.960
way of thinking, for the excitement, for the positive view of human nature. What you're trying to create
link |
00:33:13.440
is positivity. Like everything we try to invent is trying to do good for the world.
link |
00:33:18.240
But let me ask you about the psychology of negativity. It seems just objectively,
link |
00:33:23.600
not considering the topic, it seems that being negative about the future makes you sound smarter
link |
00:33:28.480
than being positive about the future, regardless of topic. Am I correct in this observation? And
link |
00:33:34.320
if so, why do you think that is? Yeah, I think there is that phenomenon that,
link |
00:33:40.080
as Tom Lehrer, the satirist, said, always predict the worst and you'll be hailed as a prophet.
link |
00:33:45.360
It may be part of our overall negativity bias. We are as a species more attuned to the negative
link |
00:33:52.240
than the positive. We dread losses more than we enjoy gains. And that might open up a space for
link |
00:34:02.480
prophets to remind us of harms and risks and losses that we may have overlooked.
link |
00:34:07.680
So I think there is that asymmetry. So you've written some of my favorite books
link |
00:34:16.080
all over the place. So starting from Enlightenment Now to The Better Angels of Our Nature,
link |
00:34:21.600
The Blank Slate, How the Mind Works, the one about language, The Language Instinct. Bill Gates,
link |
00:34:29.200
big fan too, said of your most recent book that it's "my new favorite book of all time."
link |
00:34:37.440
So for you as an author, what was a book early on in your life that had a profound impact on the
link |
00:34:43.840
way you saw the world? Certainly this book, Enlightenment Now, was influenced by David
link |
00:34:49.040
Deutsch's The Beginning of Infinity, a rather deep reflection on knowledge and the power of
link |
00:34:55.920
knowledge to improve the human condition. And with bits of wisdom such as that problems are
link |
00:35:02.320
inevitable but problems are solvable given the right knowledge and that solutions create new
link |
00:35:07.440
problems that have to be solved in their turn. That's I think a kind of wisdom about the human
link |
00:35:11.920
condition that influenced the writing of this book. There are some books that are excellent
link |
00:35:16.080
but obscure, some of which I have on a page on my website. I read a book called The History of Force,
link |
00:35:22.800
self published by a political scientist named James Payne on the historical decline of violence
link |
00:35:27.840
and that was one of the inspirations for The Better Angels of Our Nature.
link |
00:35:33.600
What about early on? If you look back when you were maybe a teenager?
link |
00:35:38.160
I loved a book called One, Two, Three... Infinity. When I was a young adult I read that book by
link |
00:35:43.040
George Gamow, the physicist, which had very accessible and humorous explanations of
link |
00:35:48.960
relativity, of number theory, of dimensionality, of higher dimensional spaces, in a way that I
link |
00:35:59.360
think is still delightful 70 years after it was published. I like the Time Life Science series.
link |
00:36:06.080
These are books that would arrive every month that my mother subscribed to, each one on a different
link |
00:36:11.280
topic. One would be on electricity, one would be on forests, one would be on evolution and then one
link |
00:36:17.280
was on the mind. I was just intrigued that there could be a science of mind and that book I would
link |
00:36:24.240
cite as an influence as well. Then later on... That's when you fell in love with the idea of
link |
00:36:28.480
studying the mind? Was that the thing that grabbed you? It was one of the things I would say. I read
link |
00:36:35.040
as a college student the book Reflections on Language by Noam Chomsky, who spent most of his
link |
00:36:41.360
career here at MIT. Richard Dawkins, two books, The Blind Watchmaker and The Selfish Gene,
link |
00:36:47.440
were enormously influential, mainly for the content but also for the writing style, the
link |
00:36:55.040
ability to explain abstract concepts in lively prose. Stephen Jay Gould's first collection,
link |
00:37:02.480
Ever Since Darwin, also an excellent example of lively writing. George Miller, a psychologist that
link |
00:37:10.240
most psychologists are familiar with, came up with the idea that human memory has a capacity of
link |
00:37:16.160
seven plus or minus two chunks. That's probably his biggest claim to fame. But he wrote a couple
link |
00:37:20.640
of books on language and communication that I read as an undergraduate. Again, beautifully written
link |
00:37:25.840
and intellectually deep. Wonderful. Steven, thank you so much for taking the time today.
link |
00:37:31.920
My pleasure. Thanks a lot, Lex.