
Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Lex Fridman Podcast #65



link |
00:00:00.000
The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics
link |
00:00:05.680
for his integration of economic science with the psychology of human behavior,
link |
00:00:10.080
judgment, and decision making. He's the author of the popular book Thinking Fast and Slow that
link |
00:00:16.240
summarizes in an accessible way his research of several decades, often in collaboration with
link |
00:00:22.160
Amos Tversky on cognitive biases, prospect theory, and happiness. The central thesis of this work
link |
00:00:29.600
is the dichotomy between two modes of thought. What he calls system one is fast, instinctive,
link |
00:00:35.520
and emotional. System two is slower, more deliberative, and more logical. The book
link |
00:00:41.440
delineates cognitive biases associated with each of these two types of thinking.
link |
00:00:46.960
His study of the human mind and its peculiar and fascinating limitations are both instructive and
link |
00:00:53.040
inspiring for those of us seeking to engineer intelligent systems. This is the Artificial
link |
00:00:59.200
Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcast,
link |
00:01:05.120
follow on Spotify, support it on Patreon, or simply connect with me on Twitter at
link |
00:01:10.000
Lex Fridman spelled F R I D M A N. I recently started doing ads at the end of the introduction.
link |
00:01:16.800
I'll do one or two minutes after introducing the episode and never any ads in the middle
link |
00:01:21.280
that can break the flow of the conversation. I hope that works for you and doesn't hurt the
link |
00:01:25.920
listening experience. This show is presented by Cash App, the number one finance app in the App
link |
00:01:32.160
Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell,
link |
00:01:37.440
and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy
link |
00:01:43.280
fractions of a stock, say one dollar's worth, no matter what the stock price is. Broker services
link |
00:01:48.640
are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be
link |
00:01:55.280
working with Cash App to support one of my favorite organizations called First, best known
link |
00:02:00.480
for their FIRST Robotics and Lego competitions. They educate and inspire hundreds of thousands
link |
00:02:05.760
of students in over 110 countries and have a perfect rating at Charity Navigator, which means
link |
00:02:11.360
that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google
link |
00:02:17.120
Play and use code LEXPODCAST, you'll get $10 and Cash App will also donate $10 to FIRST,
link |
00:02:24.480
which again is an organization that I've personally seen inspire girls and boys to dream
link |
00:02:29.920
of engineering a better world. And now here's my conversation with Daniel Kahneman.
link |
00:02:36.800
You tell a story of an SS soldier early in the war, World War II, in Nazi occupied France in
link |
00:02:43.600
Paris, where you grew up. He picked you up and hugged you and showed you a picture of a boy,
link |
00:02:50.160
Daniel Kahneman. Maybe not realizing that you were Jewish.
link |
00:02:53.840
Not maybe, certainly not.
link |
00:02:56.400
So I told you I'm from the Soviet Union that was significantly impacted by the war as well,
link |
00:03:01.360
and I'm Jewish as well. What do you think World War II taught us about human psychology broadly?
link |
00:03:09.680
Well, I think the only big surprise is the extermination policy, genocide,
link |
00:03:17.520
by the German people. That's when you look back on it, and I think that's a major surprise.
link |
00:03:27.040
It's a surprise because...
link |
00:03:28.240
It's a surprise that they could do it. It's a surprise that enough people
link |
00:03:34.720
willingly participated in that. This is a surprise. Now it's no longer a surprise,
link |
00:03:41.520
but it's changed many people's views, I think, about human beings. Certainly for me,
link |
00:03:50.720
the Eichmann trial, that teaches you something because it's very clear that if it could happen
link |
00:03:58.080
in Germany, it could happen anywhere. It's not that the Germans were special.
link |
00:04:04.080
This could happen anywhere.
link |
00:04:05.280
So what do you think that is? Do you think we're all capable of evil? We're all capable of cruelty?
link |
00:04:13.600
I don't think in those terms. I think that what is certainly possible is you can dehumanize people
link |
00:04:23.200
so that you treat them not as people anymore, but as animals. And the same way that you can slaughter
link |
00:04:32.480
animals without feeling much of anything, it can be the same. And when you feel that,
link |
00:04:41.120
I think, the combination of dehumanizing the other side and having uncontrolled power over
link |
00:04:49.360
other people, I think that doesn't bring out the most generous aspect of human nature.
link |
00:04:54.560
So that Nazi soldier, he was a good man. And he was perfectly capable of killing a lot of people,
link |
00:05:08.480
and I'm sure he did.
link |
00:05:10.080
But what did the Jewish people mean to the Nazis? So, the dismissal of Jewish people as not worthy of being human?
link |
00:05:20.160
Again, this is surprising that it was so extreme,
link |
00:05:25.120
but it's not one thing in human nature. I don't want to call it evil, but the distinction between
link |
00:05:32.480
the in group and the out group, that is very basic. So that's built in. The loyalty and
link |
00:05:40.160
affection towards in group and the willingness to dehumanize the out group, that is in human nature.
link |
00:05:50.320
That's what I think probably didn't need the Holocaust to teach us that. But the Holocaust is
link |
00:05:57.920
a very sharp lesson of what can happen to people and what people can do.
link |
00:06:05.120
So the effect of the in group and the out group. It's clear. Those were people,
link |
00:06:13.600
you could shoot them. They were not human. There was no empathy, or very, very little empathy left.
link |
00:06:23.680
So occasionally, there might have been. And very quickly, by the way, the empathy disappeared,
link |
00:06:32.720
if there was initially. And the fact that everybody around you was doing it,
link |
00:06:39.840
that completely, the group doing it, and everybody shooting Jews, I think that makes it permissible.
link |
00:06:51.120
Now, how much, whether it could happen in every culture, or whether the Germans were just
link |
00:07:01.280
particularly efficient and disciplined, so they could get away with it. It's an interesting
link |
00:07:10.000
question. Are these artifacts of history or is it human nature? I think that's really human
link |
00:07:15.360
nature. You put some people in a position of power relative to other people, and then they become
link |
00:07:24.480
less human, they become different. But in general, in war, outside of concentration camps
link |
00:07:32.240
in World War Two, it seems that war brings out darker sides of human nature, but also the beautiful
link |
00:07:39.760
things about human nature. Well, I mean, what it brings out is the loyalty among soldiers. I mean,
link |
00:07:49.120
it brings out the bonding, male bonding, I think is a very real thing that happens. And there is
link |
00:07:57.920
a certain thrill to friendship, and there is certainly a certain thrill to friendship under
link |
00:08:03.840
risk and to shared risk. And so people have very profound emotions, up to the point where it gets
link |
00:08:12.400
so traumatic that little is left. So let's talk about psychology a little bit. In your book,
link |
00:08:23.040
Thinking Fast and Slow, you describe two modes of thought, system one, the fast and instinctive,
link |
00:08:31.200
and emotional one, and system two, the slower, deliberate, logical one. At the risk of asking
link |
00:08:37.360
Darwin to discuss theory of evolution, can you describe distinguishing characteristics for people
link |
00:08:46.320
who have not read your book of the two systems? Well, I mean, the word system is a bit
link |
00:08:52.800
misleading, but at the same time it's misleading, it's also very useful. But what I call system one,
link |
00:09:01.440
it's easier to think of it as a family of activities. And primarily, the way I describe it
link |
00:09:09.120
is there are different ways for ideas to come to mind. And some ideas come to mind automatically,
link |
00:09:17.920
and the standard example is two plus two, and then something happens to you. And in other cases,
link |
00:09:26.480
you've got to do something, you've got to work in order to produce the idea. And my example,
link |
00:09:32.240
I always give the same pair of numbers as 27 times 14, I think. You have to perform some
link |
00:09:38.000
algorithm in your head, some steps. Yes, and it takes time. It's very different. Nothing
link |
00:09:44.560
comes to mind except something comes to mind, which is the algorithm, I mean, that you've got
link |
00:09:50.640
to perform. And then it's work, and it engages short term memory, it engages executive function,
link |
00:09:58.000
and it makes you incapable of doing other things at the same time. So the main characteristic of
link |
00:10:04.560
system two is that there is mental effort involved, and there is a limited capacity for mental effort,
link |
00:10:10.960
whereas system one is effortless, essentially. That's the major distinction.
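A rough sketch of that distinction, added here as an aside (the function names are purely illustrative, not anything from the conversation): a System 1 answer behaves like a cached lookup that either fires or doesn't, while a System 2 answer like 27 times 14 has to be produced by running an explicit, effortful algorithm.

```python
# Illustrative toy only: "2 + 2" is retrieved, "27 x 14" must be computed step by step.

MEMORIZED_FACTS = {("2", "+", "2"): 4}  # answers that simply "come to mind"

def system_one(a, op, b):
    """Effortless: return the stored answer if there is one, otherwise nothing comes to mind."""
    return MEMORIZED_FACTS.get((a, op, b))

def system_two_multiply(a: int, b: int) -> int:
    """Effortful: long multiplication, one partial product per digit of b."""
    total, place = 0, 1
    for digit in reversed(str(b)):
        total += a * int(digit) * place   # e.g. 27 * 4 = 108, then 27 * 10 = 270
        place *= 10
    return total

print(system_one("2", "+", "2"))       # 4, immediately
print(system_two_multiply(27, 14))     # 378, via explicit steps
```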
link |
00:10:15.600
So you talk about there, you know, it's really convenient to talk about two systems,
link |
00:10:21.040
but you also mentioned just now and in general that there's no distinct two systems in the brain
link |
00:10:29.120
from a neurobiological, even from a psychology perspective. But why does it seem to, from the
link |
00:10:36.240
experiments you've conducted, there does seem to be kind of emergent two modes of thinking? So
link |
00:10:47.120
at some point, these kinds of systems came into a brain architecture. Maybe mammals share it.
link |
00:10:57.440
Or do you not think of it at all in those terms that it's all a mush and these two things just
link |
00:11:01.520
emerge? Evolutionary theorizing about this is cheap and easy. So it's the way I think about it
link |
00:11:12.560
is that it's very clear that animals have perceptual system, and that includes an ability
link |
00:11:20.720
to understand the world, at least to the extent that they can predict, they can't explain anything,
link |
00:11:27.120
but they can anticipate what's going to happen. And that's a key form of understanding the world.
link |
00:11:34.720
And my crude idea is that what I call system two, well, system two grew out of this.
link |
00:11:45.200
And, you know, there is language and there is the capacity of manipulating ideas and the capacity
link |
00:11:51.840
of imagining futures and of imagining counterfactual things that haven't happened
link |
00:11:58.240
and to do conditional thinking. And there are really a lot of abilities that without language
link |
00:12:06.240
and without the very large brain that we have compared to others would be impossible. Now,
link |
00:12:13.760
system one is more like what the animals are, but system one also can talk. I mean,
link |
00:12:20.960
it has language. It understands language. Indeed, it speaks for us. I mean, you know,
link |
00:12:26.480
I'm not choosing every word as a deliberate process. The words, I have some idea and then
link |
00:12:32.800
the words come out and that's automatic and effortless. And many of the experiments you've
link |
00:12:39.040
done is to show that, listen, system one exists and it does speak for us and we should be careful
link |
00:12:44.480
about the voice it provides. Well, I mean, you know, we have to trust it because it's
link |
00:12:55.280
the speed at which it acts. System two, if we're dependent on system two for survival,
link |
00:13:01.760
we wouldn't survive very long because it's very slow. Yeah. Crossing the street.
link |
00:13:06.480
Crossing the street. I mean, many things depend on their being automatic. One very important aspect
link |
00:13:12.560
of system one is that it's not instinctive. You use the word instinctive. It contains skills that
link |
00:13:20.320
clearly have been learned. So that skilled behavior like driving a car or speaking, in fact,
link |
00:13:28.800
skilled behavior has to be learned. And so it doesn't, you know, you don't come equipped with
link |
00:13:35.920
driving. You have to learn how to drive and you have to go through a period where driving is not
link |
00:13:41.840
automatic before it becomes automatic. So. Yeah. You construct, I mean, this is where you talk
link |
00:13:48.880
about heuristic and biases is you, to make it automatic, you create a pattern and then system
link |
00:13:57.360
one essentially matches a new experience against the previously seen pattern. And when that match
link |
00:14:02.960
is not a good one, that's when the cognitive, all the mess happens, but it's most of the time
link |
00:14:08.160
it works. And so it's pretty. Most of the time, the anticipation of what's going to happen next
link |
00:14:13.840
is correct. And most of the time the plan about what you have to do is correct. And so most of
link |
00:14:22.000
the time everything works just fine. What's interesting actually is that in some sense,
link |
00:14:29.040
system one is much better at what it does than system two is at what it does. That is there is
link |
00:14:36.240
that quality of effortlessly solving enormously complicated problems, which clearly exists so
link |
00:14:44.480
that the chess player, a very good chess player, all the moves that come to their mind are strong
link |
00:14:52.160
moves. So all the selection of strong moves happens unconsciously and automatically and
link |
00:14:58.960
very, very fast. And all that is in system one. So system two verifies.
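The chess description lends itself to a small sketch, added as an aside with toy stand-ins (cheap_score and deep_search are made up, not any real engine's API): a fast, automatic scorer proposes a few candidate moves, and a slow, deliberate evaluation verifies them.

```python
import random

# Toy stand-ins only: cheap_score plays the role of System 1 pattern matching,
# deep_search the role of System 2 verification.

def cheap_score(position, move):
    return hash((position, move)) % 100                    # instant, automatic "feel" for a move

def deep_search(position, move):
    random.seed(hash((position, move)))
    return sum(random.random() for _ in range(100_000))    # slow, effortful check of one line

def choose_move(position, moves, k=3):
    # System 1: only a handful of strong-looking candidates ever come to mind.
    candidates = sorted(moves, key=lambda m: cheap_score(position, m), reverse=True)[:k]
    # System 2: verify the few candidates deliberately and pick one.
    return max(candidates, key=lambda m: deep_search(position, m))

print(choose_move("opening", ["e4", "d4", "c4", "Nf3", "a4"]))
```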
link |
00:15:07.280
So along this line of thinking, really what we are are machines that construct
link |
00:15:12.480
a pretty effective system one. You could think of it that way. So we're not talking about humans,
link |
00:15:19.360
but if we think about building artificial intelligence systems, robots, do you think
link |
00:15:26.400
all the features and bugs that you have highlighted in human beings are useful
link |
00:15:32.480
for constructing AI systems? So both systems are useful for perhaps instilling in robots?
link |
00:15:39.280
What is happening these days is that actually what is happening in deep learning is more like
link |
00:15:50.320
a system one product than like a system two product. I mean, deep learning matches patterns
link |
00:15:57.120
and anticipate what's going to happen. So it's highly predictive. What deep learning
link |
00:16:05.120
doesn't have and many people think that this is the critical, it doesn't have the ability to
link |
00:16:12.000
reason. So there is no system two there. But I think very importantly, it doesn't have any
link |
00:16:19.040
causality or any way to represent meaning and to represent real interactions. So until that is
link |
00:16:27.520
solved, what can be accomplished is marvelous and very exciting, but limited.
link |
00:16:35.600
That's actually really nice to think of current advances in machine learning as essentially
link |
00:16:40.560
system one advances. So how far can we get with just system one? If we think of deep learning
link |
00:16:46.960
in artificial intelligence systems? I mean, you know, it's very clear that DeepMind has already
link |
00:16:52.320
gone way beyond what people thought was possible. I think the thing that has impressed me most about
link |
00:17:00.560
the developments in AI is the speed. It's that things, at least in the context of deep learning,
link |
00:17:07.840
and maybe this is about to slow down, but things moved a lot faster than anticipated.
link |
00:17:14.400
The transition from solving chess to solving Go, that's bewildering how quickly it went.
link |
00:17:25.600
The move from Alpha Go to Alpha Zero is sort of bewildering the speed at which they accomplished
link |
00:17:31.840
that. Now, clearly, there are many problems that you can solve that way, but there are some problems
link |
00:17:41.360
for which you need something else. Something like reasoning.
link |
00:17:45.760
Well, reasoning and also, you know, one of the real mysteries, psychologist Gary Marcus, who is
link |
00:17:54.160
also a critic of AI. I mean, what he points out, and I think he has a point, is that humans learn
link |
00:18:05.920
quickly. Children don't need a million examples, they need two or three examples. So, clearly,
link |
00:18:16.000
there is a fundamental difference. And what enables a machine to learn quickly, what you have
link |
00:18:25.280
to build into the machine, because it's clear that you have to build some expectations or
link |
00:18:30.400
something in the machine to make it ready to learn quickly. That at the moment seems to be
link |
00:18:38.320
unsolved. I'm pretty sure that DeepMind is working on it, but if they have solved it, I haven't heard
link |
00:18:47.680
yet. They're trying to actually, them and OpenAI are trying to start to get to use neural networks
link |
00:18:54.640
to reason. So, assemble knowledge. Of course, causality is, temporal causality, is out of
link |
00:19:02.960
reach to most everybody. You mentioned the benefits of System 1 is essentially that it's
link |
00:19:09.200
fast, allows us to function in the world.
link |
00:19:10.960
Fast and skilled, yeah.
link |
00:19:13.040
It's skill.
link |
00:19:13.680
And it has a model of the world. You know, in a sense, I mean, there was the early phase of
link |
00:19:19.920
AI attempted to model reasoning. And they were moderately successful, but, you know, reasoning
link |
00:19:29.440
by itself doesn't get you much. Deep learning has been much more successful in terms of, you know,
link |
00:19:37.440
what they can do. But now, it's an interesting question, whether it's approaching its limits.
link |
00:19:43.920
What do you think?
link |
00:19:44.640
I think absolutely. So, I just talked to Yann LeCun. He mentioned, you know, so he thinks
link |
00:19:51.840
that the limits, we're not going to hit the limits with neural networks, that ultimately,
link |
00:19:57.840
this kind of System 1 pattern matching will start to look like System 2 without significant
link |
00:20:06.720
transformation of the architecture. So, I'm more with the majority of the people who think that,
link |
00:20:12.480
yes, neural networks will hit a limit in their capability.
link |
00:20:16.400
On the one hand, I have heard him say essentially that, you know,
link |
00:20:22.960
what they have accomplished is not a big deal, that they have just touched the surface, that basically,
link |
00:20:28.080
you know, they can't do unsupervised learning in an effective way. But you're telling me that he
link |
00:20:35.520
thinks that the current, within the current architecture, you can do causality and reasoning?
link |
00:20:41.520
So, he's very much a pragmatist in a sense that's saying that we're very far away,
link |
00:20:47.200
that there's still, I think there's this idea that he says is, we can only see
link |
00:20:54.240
one or two mountain peaks ahead and there might be either a few more after or
link |
00:20:59.280
thousands more after. Yeah, so that kind of idea.
link |
00:21:01.920
I heard that metaphor.
link |
00:21:03.120
Yeah, right. But nevertheless, he doesn't see the final answer looking fundamentally unlike the one
link |
00:21:13.520
that we currently have. So, neural networks being a huge part of that.
link |
00:21:18.720
Yeah, I mean, that's very likely because pattern matching is so much of what's going on.
link |
00:21:26.400
And you can think of neural networks as processing information sequentially.
link |
00:21:30.640
Yeah, I mean, you know, there is an important aspect to, for example, you get systems that
link |
00:21:39.680
translate and they do a very good job, but they really don't know what they're talking about.
link |
00:21:45.760
And for that, I'm really quite surprised. For that, you would need an AI that has sensation,
link |
00:21:55.920
an AI that is in touch with the world.
link |
00:21:58.000
Yes, self awareness and maybe even something that resembles consciousness, those kinds of ideas.
link |
00:22:04.480
Certainly awareness of, you know, awareness of what's going on so that the words have meaning
link |
00:22:10.640
or can get, are in touch with some perception or some action.
link |
00:22:16.400
Yeah, so that's a big thing for Yann, it's what he refers to as grounding to the physical space.
link |
00:22:23.920
So we're talking about the same thing.
link |
00:22:26.160
Yeah, so how do you ground?
link |
00:22:29.360
I mean, the grounding, without grounding, then you get a machine that doesn't know what
link |
00:22:35.200
it's talking about because it is talking about the world ultimately.
link |
00:22:40.240
The question, the open question is what it means to ground. I mean, we're very
link |
00:22:44.880
human centric in our thinking, but what does it mean for a machine to understand what it means
link |
00:22:50.240
to be in this world? Does it need to have a body? Does it need to have a finiteness like we humans
link |
00:22:57.280
have all of these elements? It's a very, it's an open question.
link |
00:23:02.240
You know, I'm not sure about having a body, but having a perceptual system,
link |
00:23:05.920
having a body would be very helpful too. I mean, if you think about human, mimicking human,
link |
00:23:12.480
you know, but having a perception that seems to be essential so that you can build,
link |
00:23:20.080
you can accumulate knowledge about the world. So if you can imagine a human completely paralyzed,
link |
00:23:28.240
and there's a lot that the human brain could learn, you know, with a paralyzed body.
link |
00:23:33.520
So if we got a machine that could do that, that would be a big deal.
link |
00:23:38.640
And then the flip side of that, something you see in children and something that in the machine
link |
00:23:44.960
learning world is called active learning, is being able to play with the world.
link |
00:23:52.640
How important for developing System 1 or System 2 do you think it is to play with the world?
link |
00:23:59.760
To be able to interact with the world?
link |
00:24:00.960
A lot of what you learn is you learn to anticipate the outcomes of your actions. I mean,
link |
00:24:08.960
you can see that how babies learn it, you know, with their hands, how they learn, you know,
link |
00:24:15.600
to connect, you know, the movements of their hands with something that clearly is something
link |
00:24:20.640
that happens in the brain and the ability of the brain to learn new patterns. So, you know,
link |
00:24:28.320
it's the kind of thing that you get with artificial limbs, that you connect it and then people learn
link |
00:24:34.880
to operate the artificial limb, you know, really impressively quickly, at least from what I hear.
link |
00:24:44.000
So we have a system that is ready to learn the world through action.
link |
00:24:49.040
At the risk of going into way too mysterious of a land,
link |
00:24:52.640
what do you think it takes to build a system like that? Obviously, we're very far from understanding
link |
00:25:00.000
how the brain works, but how difficult is it to build this mind of ours?
link |
00:25:08.000
You know, I mean, I think that Yann LeCun's answer that we don't know how many mountains
link |
00:25:13.200
there are, I think that's a very good answer. I think that, you know, if you look at what Ray
link |
00:25:20.080
Kurzweil is saying, that strikes me as off the wall. But I think people are much more realistic
link |
00:25:28.800
than that, like actually Demis Hassabis and Yann are, and so the people actually doing the
link |
00:25:35.520
work are fairly realistic, I think. To maybe phrase it another way,
link |
00:25:41.440
from a perspective not of building it, but from understanding it,
link |
00:25:44.960
how complicated are human beings in the following sense? You know, I work with autonomous vehicles
link |
00:25:52.240
and pedestrians, so we tried to model pedestrians. How difficult is it to model a human being,
link |
00:26:00.480
their perception of the world, the two systems they operate under, sufficiently to be able to
link |
00:26:06.080
predict whether the pedestrian is going to cross the road or not?
link |
00:26:09.280
I'm, you know, I'm fairly optimistic about that, actually, because what we're talking about
link |
00:26:18.000
is a huge amount of information that every vehicle has, and that feeds into one system,
link |
00:26:26.800
into one gigantic system. And so anything that any vehicle learns becomes part of what the whole
link |
00:26:33.440
system knows. And with a system multiplier like that, there is a lot that you can do.
link |
00:26:41.040
So human beings are very complicated, and the system is going to make mistakes, but humans
link |
00:26:48.560
make mistakes. I think that they'll be able to, I think they are able to anticipate pedestrians,
link |
00:26:56.400
otherwise a lot would happen. They're able to, you know, they're able to get into a roundabout
link |
00:27:04.640
and into traffic, so they must know both to expect or to anticipate how people will react
link |
00:27:14.000
when they're sneaking in. And there's a lot of learning that's involved in that.
link |
00:27:18.800
Currently, the pedestrians are treated as things that cannot be hit, and they're not
link |
00:27:28.080
treated as agents with whom you interact in a game theoretic way. So, I mean, it's not,
link |
00:27:37.040
it's a totally open problem, and every time somebody tries to solve it, it seems to be harder
link |
00:27:41.520
than we think. And nobody's really tried to seriously solve the problem of that dance,
link |
00:27:46.640
because I'm not sure if you've thought about the problem of pedestrians, but you're really
link |
00:27:52.080
putting your life in the hands of the driver.
link |
00:27:54.960
You know, there is a dance, there's part of the dance that would be quite complicated,
link |
00:28:00.320
but for example, when I cross the street and there is a vehicle approaching, I look the driver
link |
00:28:05.920
in the eye, and I think many people do that. And, you know, that's a signal that I'm sending,
link |
00:28:13.360
and I would be sending that signal to an autonomous vehicle, and it had better understand
link |
00:28:18.480
it, because it means I'm crossing.
link |
00:28:20.720
So, and there's another thing you do, that actually, so I'll tell you what you do,
link |
00:28:26.240
because we watched, I've watched hundreds of hours of video on this, is when you step
link |
00:28:31.440
in the street, you do that before you step in the street, and when you step in the street,
link |
00:28:35.440
you actually look away.
link |
00:28:36.400
Look away.
link |
00:28:36.960
Yeah. Now, what is that? What that's saying is, I mean, you're trusting that the car that
link |
00:28:45.360
hasn't slowed down yet will slow down.
link |
00:28:48.000
Yeah. And you're telling him, I'm committed. I mean, this is like in a game of chicken,
link |
00:28:53.680
so I'm committed, and if I'm committed, I'm looking away. So, there is, you just have
link |
00:28:59.840
to stop.
link |
00:29:00.320
So, the question is whether a machine that observes that needs to understand mortality.
link |
00:29:06.880
Here, I'm not sure that it's got to understand so much as it's got to anticipate. So, and
link |
00:29:17.120
here, but you know, you're surprising me, because here I would think that maybe you
link |
00:29:24.400
can anticipate without understanding, because I think this is clearly what's happening in
link |
00:29:30.560
playing go or in playing chess. There's a lot of anticipation, and there is zero understanding.
link |
00:29:35.600
Exactly.
link |
00:29:36.240
So, I thought that you didn't need a model of the human and a model of the human mind
link |
00:29:46.400
to avoid hitting pedestrians, but you are suggesting that actually…
link |
00:29:50.880
There you go, yeah.
link |
00:29:51.840
You do. Then it's a lot harder than I thought.
link |
00:29:56.720
And I have a follow up question to see where your intuition lies. It seems that almost
link |
00:30:02.560
every robot human collaboration system is a lot harder than people realize. So, do you
link |
00:30:10.800
think it's possible for robots and humans to collaborate successfully? We talked a little
link |
00:30:17.200
bit about semi autonomous vehicles, like in the Tesla autopilot, but just in tasks in
link |
00:30:23.360
general. If you think we talked about current neural networks being kind of system one,
link |
00:30:30.160
do you think those same systems can borrow humans for system two type tasks and collaborate
link |
00:30:40.240
successfully?
link |
00:30:40.880
Well, I think that in any system where humans and the machine interact, the human
link |
00:30:49.520
will be superfluous within a fairly short time. That is, if the machine is advanced
link |
00:30:55.760
enough so that it can really help the human, then it may not need the human for a long
link |
00:31:01.600
time. Now, it would be very interesting if there are problems that for some reason the
link |
00:31:08.320
machine cannot solve, but that people could solve. Then you would have to build into the
link |
00:31:14.240
machine an ability to recognize that it is in that kind of problematic situation and
link |
00:31:22.080
to call the human. That cannot be easy without understanding. That is, it must be very difficult
link |
00:31:30.880
to program a recognition that you are in a problematic situation without understanding
link |
00:31:38.400
the problem.
link |
00:31:39.440
That's very true. In order to understand the full scope of situations that are problematic,
link |
00:31:47.360
you almost need to be smart enough to solve all those problems.
link |
00:31:51.680
It's not clear to me how much the machine will need the human. I think the example of
link |
00:32:01.120
chess is very instructive. I mean, there was a time at which Kasparov was saying that human
link |
00:32:06.160
machine combinations will beat everybody. Even stockfish doesn't need people and Alpha
link |
00:32:13.440
Zero certainly doesn't need people.
link |
00:32:15.280
The question is, just like you said, how many problems are like chess and how many
link |
00:32:20.880
problems are not like chess? Every problem probably in the end is like chess. The question
link |
00:32:27.760
is, how long is that transition period?
link |
00:32:29.760
That's a question I would ask you. Autonomous vehicle, just driving, is probably a lot more
link |
00:32:38.880
complicated than Go to solve that problem. Because it's open. That's not surprising to
link |
00:32:47.840
me because there is a hierarchical aspect to this, which is recognizing a situation
link |
00:32:58.960
and then within the situation bringing up the relevant knowledge. For that hierarchical
link |
00:33:09.280
type of system to work, you need a more complicated system than we currently have.
link |
00:33:15.760
A lot of people think, because as human beings, this is probably one of the cognitive biases,
link |
00:33:22.720
they think of driving as pretty simple because they think of their own experience. This is
link |
00:33:28.720
actually a big problem for AI researchers or people thinking about AI because they evaluate
link |
00:33:36.400
how hard a particular problem is based on very limited knowledge, based on how hard
link |
00:33:43.280
it is for them to do the task. And then they take for granted, maybe you can speak to that
link |
00:33:49.120
because most people tell me driving is trivial and humans in fact are terrible at driving
link |
00:33:56.720
is what people tell me. And I see humans and humans are actually incredible at driving
link |
00:34:02.040
and driving is really terribly difficult. Is that just another element of the effects
link |
00:34:08.520
that you've described in your work on the psychology side?
link |
00:34:13.680
No, I mean, I haven't really, I would say that my research has contributed nothing to
link |
00:34:22.000
understanding the ecology and to understanding the structure of situations and the complexity
link |
00:34:27.800
of problems. So all we know is, it's very clear that Go, it's endlessly complicated,
link |
00:34:38.720
but it's very constrained. And in the real world, there are far fewer constraints and
link |
00:34:46.840
many more potential surprises.
link |
00:34:49.320
So you say that's obvious, but it's not always obvious to people, right? So when you think
link |
00:34:54.720
about…
link |
00:34:55.720
Well, I mean, you know, people thought that reasoning was hard and perceiving was easy,
link |
00:35:02.880
but you know, they quickly learned that actually modeling vision was tremendously complicated
link |
00:35:09.920
and that even proving theorems was relatively straightforward.
link |
00:35:15.960
To push back on that a little bit on the quickly part, it took several decades to learn that
link |
00:35:22.800
and most people still haven't learned that. I mean, our intuition, of course, AI researchers
link |
00:35:28.400
have, but you drift a little bit outside the specific AI field, the intuition is still
link |
00:35:34.760
that perception is easy to solve.
link |
00:35:36.320
No, I mean, that's true. Intuitions, the intuitions of the public haven't changed
link |
00:35:41.280
radically. And they are, as you said, they're evaluating the complexity of problems by how
link |
00:35:48.760
difficult it is for them to solve the problems. And that's got very little to do with the
link |
00:35:55.720
complexities of solving them in AI.
link |
00:35:58.360
How do you think, from the perspective of an AI researcher, do we deal with the intuitions
link |
00:36:06.120
of the public? So in trying to think, arguably, the combination of hype investment and the
link |
00:36:15.080
public intuition is what led to the AI winters. I'm sure that same could be applied to tech
link |
00:36:21.160
or that the intuition of the public leads to media hype, leads to companies investing
link |
00:36:29.700
in the tech, and then the tech doesn't make the company's money. And then there's a crash.
link |
00:36:36.700
Is there a way to educate people to fight the, let's call it system one thinking?
link |
00:36:43.280
In general, no. I think that's the simple answer. And it's going to take a long time
link |
00:36:54.600
before the understanding of what those systems can do becomes public knowledge. And then
link |
00:37:09.240
the expectations, there are several aspects that are going to be very complicated. The
link |
00:37:20.920
fact that you have a device that cannot explain itself is a major, major difficulty. And we're
link |
00:37:29.720
already seeing that. I mean, this is really something that is happening. So it's happening
link |
00:37:35.520
in the judicial system. So you have systems that are clearly better at predicting parole
link |
00:37:43.600
violations than judges, but they can't explain their reasoning. And so people don't want
link |
00:37:54.220
to trust them.
link |
00:37:56.040
We seem to in system one, even use cues to make judgements about our environment. So
link |
00:38:05.400
this explainability point, do you think humans can explain stuff?
link |
00:38:11.040
No, but I mean, there is a very interesting aspect of that. Humans think they can explain
link |
00:38:20.400
themselves. So when you say something and I ask you, why do you believe that? Then reasons
link |
00:38:28.160
will occur to you. But actually, my own belief is that in most cases, the reasons have very
link |
00:38:35.880
little to do with why you believe what you believe. So that the reasons are a story that
link |
00:38:41.880
comes to your mind when you need to explain yourself. But people traffic in those explanations
link |
00:38:50.200
I mean, the human interaction depends on those shared fictions and, and the stories that
link |
00:38:56.680
people tell themselves.
link |
00:38:58.580
You just made me actually realize and we'll talk about stories in a second. That not to
link |
00:39:05.960
be cynical about it, but perhaps there's a whole movement of people trying to do explainable
link |
00:39:11.520
AI. And really, we don't necessarily need to explain it, AI doesn't need to explain itself.
link |
00:39:19.360
It just needs to tell a convincing story.
link |
00:39:21.880
Yeah, absolutely.
link |
00:39:23.560
It doesn't necessarily, the story doesn't necessarily need to reflect the truth as it
link |
00:39:29.160
might, it just needs to be convincing. There's something to that.
link |
00:39:32.800
You can say exactly the same thing in a way that sounds cynical or doesn't sound cynical.
link |
00:39:38.840
Right.
link |
00:39:39.840
But the objective of having an explanation is to tell a story that will be acceptable
link |
00:39:48.000
to people. And for it to be acceptable, and to be robustly acceptable, it has to have
link |
00:39:56.360
some elements of truth. But the objective is for people to accept it.
link |
00:40:04.480
It's quite brilliant, actually. But so on the, on the stories that we tell, sorry to
link |
00:40:11.720
ask me, ask you the question that most people know the answer to, but you talk about two
link |
00:40:18.000
selves in terms of how life is lived, the experiencing self and the remembering self. Can
link |
00:40:24.780
you describe the distinction between the two?
link |
00:40:26.920
Well, sure. I mean, the, there is an aspect of, of life that occasionally, you know, most
link |
00:40:33.680
of the time we just live and we have experiences and they're better and they're worse and it
link |
00:40:38.520
goes on over time. And mostly we forget everything that happens or we forget most of what happens.
link |
00:40:45.760
Then occasionally you, when something ends or at different points, you evaluate the past
link |
00:40:56.280
and you form a memory and the memory is schematic. It's not that you can roll a film of an interaction.
link |
00:41:03.560
You construct, in effect, the elements of a story about an, about an episode. So there
link |
00:41:12.960
is the experience and there is the story that is created about the experience. And that's
link |
00:41:18.360
what I call the remembering self. So I had the image of two selves. So there is a self that
link |
00:41:24.320
lives and there is a self that evaluates life. Now the paradox and the deep paradox in that
link |
00:41:32.200
is that we have one system or one self that does the living, but the other system, the
link |
00:41:41.960
remembering self is all we get to keep. And basically decision making and, and everything
link |
00:41:49.180
that we do is governed by our memories, not by what actually happened. It's, it's governed
link |
00:41:55.000
by, by the story that we told ourselves or by the story that we're keeping. So that's,
link |
00:42:02.280
that's the distinction.
link |
00:42:03.280
I mean, there's a lot of brilliant ideas about the pursuit of happiness that come out of
link |
00:42:08.000
that. What are the properties of happiness which emerge from a remembering self?
link |
00:42:14.160
There are, there are properties of how we construct stories that are really important.
link |
00:42:19.160
So that I studied a few, but, but a couple are really very striking. And one is that
link |
00:42:29.720
in stories, time doesn't matter. There's a sequence of events or there are highlights
link |
00:42:37.080
or not. And, and how long it took, you know, they lived happily ever after or three years
link |
00:42:45.240
later or something. It, time really doesn't matter. And in stories, events matter, but
link |
00:42:53.480
time doesn't. That, that leads to a very interesting set of problems because time is all we got
link |
00:43:03.740
to live. I mean, you know, time is the currency of life. And yet time is not represented basically
link |
00:43:11.040
in evaluated memories. So that, that creates a lot of paradoxes that I've thought about.
link |
00:43:18.520
Yeah. They're fascinating. But if you were to give advice on how one lives a happy life
link |
00:43:27.520
based on such properties, what's the optimal?
link |
00:43:33.120
You know, I gave up, I abandoned happiness research because I couldn't solve that problem.
link |
00:43:38.880
I couldn't, I couldn't see. And in the first place, it's very clear that if you do talk
link |
00:43:46.160
in terms of those two selves, then that what makes the remembering self happy and what
link |
00:43:51.520
makes the experiencing self happy are different things. And I, I asked the question of, suppose
link |
00:43:59.320
you're planning a vacation and you're just told that at the end of the vacation, you'll
link |
00:44:04.160
get an amnesic drug, so you remember nothing. And they'll also destroy all your photos.
link |
00:44:10.160
So there'll be nothing. Would you still go to the same vacation? And, and it's, it turns
link |
00:44:20.640
out we go to vacations in large part to construct memories, not to have experiences, but to
link |
00:44:26.600
construct memories. And it turns out that the vacation that you would want for yourself,
link |
00:44:32.520
if you knew, you will not remember is probably not the same vacation that you will want for
link |
00:44:38.080
yourself if you will remember. So I have no solution to these problems, but clearly those
link |
00:44:46.240
are big issues.
link |
00:44:47.240
And you've talked about, you've talked about sort of how many minutes or hours you spend
link |
00:44:53.060
about the vacation. It's an interesting way to think about it because that's how you really
link |
00:44:58.120
experience the vacation outside the being in it. But there's also a modern, I don't
link |
00:45:03.640
know if you think about this or interact with it. There's a modern way to, um, magnify the
link |
00:45:11.440
remembering self, which is by posting on Instagram, on Twitter, on social networks. A lot of people
link |
00:45:17.820
live life for the picture that you take, that you post somewhere. And now thousands of people
link |
00:45:24.680
share, and potentially millions. And then you can relive it even much more
link |
00:45:29.040
than just those minutes. Do you think about that magnification much?
link |
00:45:34.280
You know, I'm too old for social networks. I, you know, I've never seen Instagram, so
link |
00:45:41.960
I cannot really speak intelligently about those things. I'm just too old.
link |
00:45:46.640
But it's interesting to watch the exact effects you've described.
link |
00:45:49.840
Make a very big difference. I mean, and it will make, it will also make a difference.
link |
00:45:55.560
And that I don't know whether, uh, it's clear that in some ways the devices that serve us
link |
00:46:06.040
supplant functions. So you don't have to remember phone numbers. You don't have,
link |
00:46:12.960
you really don't have to know facts. I mean, the number of conversations I'm involved with,
link |
00:46:19.080
somebody says, well, let's look it up. Uh, so it's, it's in a way it's made conversations.
link |
00:46:27.640
Well it's, it means that it's much less important to know things. You know, it used to be very
link |
00:46:33.360
important to know things. This is changing. So the requirements of that, that we have
link |
00:46:43.200
for ourselves and for other people are changing because of all those supports and because,
link |
00:46:50.560
and I have no idea what Instagram does, but it's, uh, well, I'll tell you, I wish I could
link |
00:46:57.600
just have the, my remembering self could enjoy this conversation, but I'll get to enjoy it
link |
00:47:03.600
even more by having watched, by watching it and then talking to others. It'll be about
link |
00:47:08.520
a hundred thousand people, as scary as this is to say, will listen or watch this, right?
link |
00:47:14.880
It changes things. It changes the experience of the world that you seek out experiences
link |
00:47:20.320
which could be shared in that way. It's in, and I haven't seen, it's, it's the same effects
link |
00:47:25.920
that you described. And I don't think the psychology of that magnification has been
link |
00:47:30.760
described yet because it's a new world.
link |
00:47:33.240
But the sharing, there was a, there was a time when people read books and, uh, and,
link |
00:47:43.240
and you could assume that your friends had read the same books that you read. So there
link |
00:47:51.140
was kind of invisible sharing. There was a lot of sharing going on and there was a lot
link |
00:47:57.760
of assumed common knowledge and, you know, that was built in. I mean, it was obvious
link |
00:48:03.780
that you had read the New York Times. It was obvious that you had read the reviews. I mean,
link |
00:48:09.520
so a lot was taken for granted that was shared. And, you know, when there were, when there
link |
00:48:17.040
were three television channels, it was obvious that you'd seen one of them probably the same.
link |
00:48:26.000
So sharing, sharing always was always there. It was just different.
link |
00:48:32.400
At the risk of, uh, inviting mockery from you, let me say that I'm also a fan of Sartre
link |
00:48:40.920
and Camus and existentialist philosophers. And, um, I'm joking of course about mockery,
link |
00:48:47.560
but from the perspective of the two selves, what do you think of the existentialist philosophy
link |
00:48:54.180
of life? So trying to really emphasize the experiencing self as the proper way to, or
link |
00:49:03.680
the best way to live life.
link |
00:49:05.960
I don't know enough philosophy to answer that, but it's not, uh, you know, the emphasis on,
link |
00:49:13.600
on experience is also the emphasis in Buddhism.
link |
00:49:16.760
Yeah, right. That's right.
link |
00:49:18.040
So you've just got to experience things and not to evaluate and not
link |
00:49:27.280
to pass judgment and not to score, not to keep score. So, uh,
link |
00:49:33.560
If, when you look at the grand picture of experience, you think there's something to
link |
00:49:37.760
that, that one, one of the ways to achieve contentment and maybe even happiness is letting
link |
00:49:44.480
go of any of the things, any of the procedures of the remembering self.
link |
00:49:51.800
Well, yeah, I mean, I think, you know, if one could imagine a life in which people don't
link |
00:49:58.080
score themselves, uh, it, it feels as if that would be a better life as if the self scoring
link |
00:50:05.960
and you know, how am I doing a kind of question, uh, is not, is not a very happy thing to have.
link |
00:50:18.040
But I got out of that field because I couldn't solve that problem and, and that was because
link |
00:50:25.360
my intuition was that the experiencing self, that's reality.
link |
00:50:31.500
But then it turns out that what people want for themselves is not experiences. They want
link |
00:50:36.560
memories and they want a good story about their life. And so you cannot have a theory
link |
00:50:41.600
of happiness that doesn't correspond to what people want for themselves. And when I, when
link |
00:50:47.880
I realized that this, this was where things were going, I really sort of left the field
link |
00:50:53.760
of research.
link |
00:50:54.760
Do you think there's something instructive about this emphasis of reliving memories in
link |
00:51:01.100
building AI systems. So currently artificial intelligence systems are more like experiencing
link |
00:51:09.200
self in that they react to the environment. There's some pattern formation like a learning
link |
00:51:16.280
so on, but you really don't construct memories, uh, except in reinforcement learning every
link |
00:51:23.120
once in a while that you replay over and over.
link |
00:51:25.720
Yeah, but you know, that would in principle would not be.
link |
00:51:30.280
Do you think that's useful? Do you think it's a feature or a bug of human beings that we,
link |
00:51:36.000
that we look back?
link |
00:51:37.000
Oh, I think that's definitely a feature. That's not a bug. I mean, you, you have to look back
link |
00:51:43.360
in order to look forward. So, uh, without, without looking back, you couldn't, you couldn't
link |
00:51:50.440
really intelligently look forward.
link |
00:51:53.080
You're looking for the echoes of the same kind of experience in order to predict what
link |
00:51:57.080
the future holds.
link |
00:51:58.080
Yeah.
link |
00:51:59.080
So Viktor Frankl in his book, Man's Search for Meaning, I'm not sure if you've
link |
00:52:05.320
read, describes his experience at the concentration camps during World War II as
link |
00:52:10.720
a way to describe that finding identifying a purpose in life, a positive purpose in life
link |
00:52:18.480
can save one from suffering. First of all, do you connect with the philosophy that he
link |
00:52:23.840
describes there?
link |
00:52:28.420
Not really. I mean, the, so I can, I can really see that somebody who has that feeling of
link |
00:52:37.040
purpose and meaning and so on, that, that could sustain you. Uh, I in general don't
link |
00:52:44.640
have that feeling and I'm pretty sure that if I were in a concentration camp, I'd give
link |
00:52:50.800
up and die, you know? So he talks, he is, he is a survivor.
link |
00:52:56.240
Yeah.
link |
00:52:57.240
And, you know, he survived with that. And I'm, and I'm not sure how essential to survival
link |
00:53:04.000
this sense is, but I do know when I think about myself that I would have given up. Oh,
link |
00:53:12.220
this isn't going anywhere. And there is a sort of character that manages
link |
00:53:20.140
to survive in conditions like that. And then because they survive, they tell stories and
link |
00:53:26.120
it sounds as if they survive because of what they were doing. We have no idea. They survived
link |
00:53:31.840
because the kind of people that they are and the other kind of people who survives and
link |
00:53:36.240
would tell themselves stories of a particular kind. So I'm not, uh,
link |
00:53:41.800
So you don't think seeking purpose is a significant driver in our being?
link |
00:53:46.840
Oh, I mean, it's, it's a very interesting question because when you ask people whether
link |
00:53:52.400
it's very important to have meaning in their life, they say, oh yes, that's the most important
link |
00:53:56.240
thing. But when you ask people, what kind of a day did you have? And, and you know,
link |
00:54:03.880
what were the experiences that you remember? You don't get much meaning. You get social
link |
00:54:10.320
experiences. And some people say that, for example, you
link |
00:54:21.480
know, in taking care of children, the fact that they are your children and you're taking
link |
00:54:25.720
care of them, uh, makes a very big difference. I think that's entirely true. Uh, but it's
link |
00:54:34.040
more because of a story that we're telling ourselves, which is a very different story
link |
00:54:40.560
when we're taking care of our children or when we're taking care of other things.
link |
00:54:45.140
Jumping around a little bit in doing a lot of experiments, let me ask a question. Most
link |
00:54:50.880
of the work I do, for example, is in the, in the real world, but most of the clean good
link |
00:54:56.840
science that you can do is in the lab. So that distinction, do you think we can understand
link |
00:55:04.480
the fundamentals of human behavior through controlled experiments in the lab? If we talk
link |
00:55:12.680
about pupil diameter, for example, it's much easier to do when you can control lighting
link |
00:55:18.920
conditions, right? So when we look at driving, lighting variation destroys almost completely
link |
00:55:27.680
your ability to use pupil diameter. But in the lab for, as I mentioned, semi autonomous
link |
00:55:34.740
or autonomous vehicles in driving simulators, we can't, we don't capture true, honest, uh,
link |
00:55:43.080
human behavior in that particular domain. So what's your intuition? How much of human
link |
00:55:49.000
behavior can we study in this controlled environment of the lab? A lot, but you'd have to verify
link |
00:55:56.160
it, you know, that your, your conclusions are basically limited to the situation, to
link |
00:56:03.240
the experimental situation. Then you have to jump the big inductive leap to the real
link |
00:56:09.000
world. So, and that's the flair. That's where the difference, I think, between
link |
00:56:17.920
the good psychologists and others that are mediocre is in the sense of that your experiment
link |
00:56:25.840
captures something that's important and something that's real and others are just running experiments.
link |
00:56:33.520
So what is that? Like the birth of an idea to its development in your mind to something
link |
00:56:39.000
that leads to an experiment. Is that similar to maybe like what Einstein or a good physicist
link |
00:56:44.840
do is your intuition. You basically use your intuition to build up.
link |
00:56:48.840
Yeah, but I mean, you know, it's, it's very skilled intuition. I mean, I just had that
link |
00:56:54.280
experience actually. I had an idea that turns out to be very good idea a couple of days
link |
00:57:00.840
ago and, and you, and you have a sense of that building up. So I'm working with a collaborator
link |
00:57:08.400
and he essentially was saying, you know, what, what are you doing? What's, what's going on?
link |
00:57:14.280
And I was, I really, I couldn't exactly explain it, but I knew this is going somewhere, but
link |
00:57:21.000
you know, I've been around that game for a very long time. And so I can, you, you develop
link |
00:57:26.920
that anticipation that yes, this, this is worth following up. That's part of the skill.
link |
00:57:34.640
Is that something you can reduce to words in describing a process in the form of advice
link |
00:57:41.560
to others?
link |
00:57:42.560
No.
link |
00:57:43.560
Follow your heart, essentially.
link |
00:57:45.560
I mean, you know, it's, it's like trying to explain what it's like to drive. It's not,
link |
00:57:51.680
you've got to break it apart and it's not.
link |
00:57:54.140
And then you lose.
link |
00:57:55.140
And then you lose the experience.
link |
00:57:58.080
You mentioned collaboration. You've written about your collaboration with Amos Tversky
link |
00:58:05.140
that this is you writing, the 12 or 13 years in which most of our work was joint were years
link |
00:58:10.780
of interpersonal and intellectual bliss. Everything was interesting. Almost everything
link |
00:58:16.720
was funny. And there was a current joy of seeing an idea take shape. So many times in
link |
00:58:22.080
those years, we shared the magical experience of one of us saying something, which the other
link |
00:58:27.320
one would understand more deeply than the speaker had done. Contrary to the old laws
link |
00:58:32.520
of information theory, it was common for us to find that more information was received
link |
00:58:38.000
than had been sent. I have almost never had the experience with anyone else. If you have
link |
00:58:43.860
not had it, you don't know how marvelous collaboration can be.
link |
00:58:49.120
So let me ask a perhaps a silly question. How does one find and create such a collaboration?
link |
00:58:58.840
That may be asking like, how does one find love?
link |
00:59:01.120
Yeah, you have to be lucky. And I think you have to have the character for that because
link |
00:59:10.600
I've had many collaborations. I mean, none were as exciting as with Amos, but I've had
link |
00:59:17.600
and I'm having just very. So it's a skill. I think I'm good at it. Not everybody is good
link |
00:59:27.040
at it. And then it's the luck of finding people who are also good at it.
link |
00:59:32.100
Is there advice in a form for a young scientist who also seeks to violate this law of information
link |
00:59:39.420
theory?
link |
00:59:48.520
I really think so much luck is involved. And those really serious collaborations,
link |
00:59:59.560
at least in my experience, are a very personal experience. And I have to like the person
link |
01:00:06.660
I'm working with. Otherwise, I mean, there is that kind of collaboration, which is like
link |
01:00:13.280
an exchange, a commercial exchange: I give you this, you give me that. But the real ones
link |
01:00:21.880
are interpersonal. They're between people who like each other and who like making each
link |
01:00:28.080
other think and who like the way that the other person responds to your thoughts. You
link |
01:00:34.400
have to be lucky.
link |
01:00:37.080
But I already noticed that, even just me showing up here, you've quickly started digging
link |
01:00:43.760
in on a particular problem I'm working on and already new information started to emerge.
link |
01:00:49.840
Is that a process, just the process of curiosity of talking to people about problems and seeing?
link |
01:00:56.420
I'm curious about anything to do with AI and robotics. And I knew you were dealing with
link |
01:01:03.400
that. So I was curious.
link |
01:01:05.240
Just follow your curiosity. Jumping around to the psychology front: the dramatic-sounding
link |
01:01:13.100
terminology of the replication crisis, but really just the effect that, at times,
link |
01:01:24.960
studies are not fully generalizable. They don't...
link |
01:01:29.240
You are being polite. It's worse than that.
link |
01:01:33.040
Is it? So I'm actually not fully familiar with how bad it is, right? So what
link |
01:01:39.360
do you think is the source? Where do you think?
link |
01:01:41.520
I think I know what's going on actually. I mean, I have a theory about what's going on
link |
01:01:47.520
and what's going on is that there is, first of all, a very important distinction between
link |
01:01:55.460
two types of experiments. And one type is within subject. So it's the same person has
link |
01:02:03.120
two experimental conditions. And the other type is between subjects where some people
link |
01:02:09.200
are in this condition, other people are in that condition. They're different worlds. And between
link |
01:02:14.160
subject experiments are much harder to predict and much harder to anticipate. And
link |
01:02:25.560
they're also more expensive because you need more people. So between
link |
01:02:31.880
subject experiments are where the problem is. It's not so much in within subject experiments,
link |
01:02:38.600
it's really between. And there is a very good reason why the intuitions of researchers about
link |
01:02:46.920
between subject experiments are wrong. And that's because when you are a researcher,
link |
01:02:54.180
you're in a within subject situation. That is you are imagining the two conditions and
link |
01:03:00.560
you see the causality and you feel it. But in the between subject condition, they live
link |
01:03:09.680
in one condition and the other one is just nowhere. So our intuitions are very weak about
link |
01:03:18.440
between subject experiments. And that I think is something that people haven't realized.
link |
01:03:26.520
And in addition, because of that, we have no idea about the power of
link |
01:03:34.800
experimental manipulations because the same manipulation is much more powerful when you
link |
01:03:42.420
are in the two conditions than when you live in only one condition. And so the experimenters
link |
01:03:48.880
have very poor intuitions about between subject experiments. And there is something else which
link |
01:03:56.760
is very important, I think, which is that almost all psychological hypotheses are true.
link |
01:04:04.080
That is in the sense that, you know, directionally, if you have a hypothesis that A really causes
link |
01:04:13.200
B, that it's not true that A causes the opposite of B. Maybe A just has very little effect,
link |
01:04:21.000
but hypotheses are mostly true, except they're mostly very weak. They're much weaker than
link |
01:04:28.840
you think when you are having images. So the reason I'm excited about that is that I recently
link |
01:04:38.000
heard about some friends of mine who essentially funded 53 studies of behavioral
link |
01:04:50.560
change by 20 different teams of people with a very precise objective of changing the number
link |
01:04:59.420
of times that people go to the gym. And the success rate was zero. Not one of the 53 studies
link |
01:05:12.600
worked. Now, what's interesting about that is those are the best people in the field
link |
01:05:18.160
and they have no idea what's going on. So they're not calibrated. They think that it's
link |
01:05:24.440
going to be powerful because they can imagine it, but actually it's just weak because you
link |
01:05:30.760
are focusing on your manipulation and it feels powerful to you. There's a thing that I've
link |
01:05:37.880
written about that's called the focusing illusion. That is that when you think about something,
link |
01:05:43.480
it looks very important, more important than it really is.
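[Editor's note: a minimal simulation sketch, not from the conversation, to make the within-versus-between point concrete. The numbers are my assumptions: a weak true effect (0.2), large stable individual differences (SD 1.0), small trial-to-trial noise (SD 0.3), and 40 subjects. Under those assumptions the same manipulation is detected most of the time in a within-subject design but only rarely in a between-subject design, because individual differences cancel out only when each person serves as their own control.]

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

N = 40           # illustrative number of subjects (per person / per group)
EFFECT = 0.2     # assumed weak true effect of the manipulation
PERSON_SD = 1.0  # stable individual differences between people
NOISE_SD = 0.3   # moment-to-moment measurement noise

def one_experiment():
    # Within-subject: every person experiences both conditions, so the
    # comparison is made within each person and individual differences cancel.
    baseline = rng.normal(0.0, PERSON_SD, N)
    cond_a = baseline + rng.normal(0.0, NOISE_SD, N)
    cond_b = baseline + EFFECT + rng.normal(0.0, NOISE_SD, N)
    p_within = stats.ttest_rel(cond_b, cond_a).pvalue

    # Between-subject: two separate groups, each living in only one condition,
    # so individual differences stay in the error term.
    group_a = rng.normal(0.0, PERSON_SD, N) + rng.normal(0.0, NOISE_SD, N)
    group_b = EFFECT + rng.normal(0.0, PERSON_SD, N) + rng.normal(0.0, NOISE_SD, N)
    p_between = stats.ttest_ind(group_b, group_a).pvalue

    return p_within < 0.05, p_between < 0.05

# Estimate power of each design as the fraction of simulated experiments
# that reach significance at the 0.05 level.
results = np.array([one_experiment() for _ in range(2000)])
print("power, within-subject design: ", results[:, 0].mean())
print("power, between-subject design:", results[:, 1].mean())
```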
link |
01:05:48.400
More important than it really is. But if you don't see that effect, as in the 53 studies, doesn't
link |
01:05:53.800
that mean you just report that? So what was, I guess, the solution to that?
link |
01:05:59.320
Well, I mean, the solution is for people to trust their intuitions less or to try out
link |
01:06:07.600
their intuitions before. I mean, experiments have to be preregistered, and by the time
link |
01:06:14.760
you run an experiment, you have to be committed to it and you have to run the experiment seriously
link |
01:06:20.960
enough and in public. And so this is happening. The interesting thing is what happens before
link |
01:06:32.800
and how people prepare themselves and how they run pilot experiments. It's going to
link |
01:06:37.920
change the way psychology is done, and it's already happening.
link |
01:06:41.360
Do you have a hope for... this might connect to the study sample size.
link |
01:06:48.520
Yeah.
link |
01:06:49.520
Do you have a hope for the internet?
link |
01:06:51.320
Well, I mean, you know, this is really happening. MTurk, everybody's running experiments on
link |
01:06:59.040
MTurk and it's very cheap and very effective.
link |
01:07:03.640
Do you think that changes psychology essentially? Because you're thinking you cannot run 10,000
link |
01:07:09.200
subjects.
link |
01:07:10.200
Eventually it will. I mean, you know, I can't put my finger on how exactly, but
link |
01:07:18.480
that's been true in psychology: whenever an important new method came in, it changes
link |
01:07:24.880
the field. And MTurk is really a method because it makes it very much easier to do
link |
01:07:33.160
some things.
link |
01:07:35.520
There are undergrad students who'll ask me, you know, how big a neural network should
link |
01:07:40.680
be for a particular problem? So let me ask you an equivalent question: how many
link |
01:07:49.080
subjects does a study need to have a conclusive result?
link |
01:07:53.560
Well, it depends on the strength of the effect. So if you're studying visual perception or
link |
01:08:00.760
the perception of color, many of the classic results in color perception were
link |
01:08:08.600
done on three or four people. And I think one of them was colorblind, or partly colorblind,
link |
01:08:14.600
but on vision, you know, it's highly reliable. You don't need many people or a lot of replications
link |
01:08:24.820
for some types of neurological experiments. When you're studying weaker phenomena and
link |
01:08:35.800
especially when you're studying them between subjects, then you need a lot more subjects
link |
01:08:41.120
than people have been running. And that's one of the things that are happening
link |
01:08:47.000
in psychology now is that the power, the statistical power of experiments is increasing rapidly.
link |
01:08:54.220
Does the between subject, as the number of subjects goes to infinity, approach...?
link |
01:08:59.200
Well, I mean, you know, "goes to infinity" is exaggerated, but the standard number
link |
01:09:06.440
of subjects for an experiment in psychology was 30 or 40. And for a weak effect, that's
link |
01:09:15.040
simply not enough. And you may need a couple of hundred. I mean, it's that sort of order
link |
01:09:25.720
of magnitude.
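[Editor's note: a rough power-analysis sketch, using statsmodels' TTestIndPower, to check the orders of magnitude mentioned here. The "weak effect" sizes (Cohen's d of 0.2 and 0.3) are my assumptions, not figures from the conversation.]

```python
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

for d in (0.2, 0.3):  # assumed Cohen's d values for a "weak" effect
    # Power of a between-subject (two independent groups) t-test with
    # 40 people per group, roughly the historical norm mentioned above.
    power_at_40 = power_calc.power(effect_size=d, nobs1=40, alpha=0.05)
    # People per group needed to reach the conventional 80% power.
    n_for_80 = power_calc.solve_power(effect_size=d, power=0.8, alpha=0.05)
    print(f"d={d}: power with 40/group = {power_at_40:.2f}, "
          f"needed per group for 80% power = {n_for_80:.0f}")
```

[Under these assumptions, 40 people per group gives power of only roughly 0.15 to 0.25, while roughly 175 to 400 per group are needed to reach 80% power, consistent with the "couple of hundred" order of magnitude above.]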
link |
01:09:28.760
What are the major disagreements in theories and effects that you've observed throughout
link |
01:09:35.840
your career that still stand today? You've worked in several fields, but what still is
link |
01:09:42.520
out there as a major disagreement that pops into your mind?
link |
01:09:47.320
I've had one extreme experience of, you know, controversy with somebody who really doesn't
link |
01:09:54.840
like the work that Amos Tversky and I did. And he's been after us for 30 years or more,
link |
01:10:01.720
at least.
link |
01:10:02.720
Do you want to talk about it?
link |
01:10:03.720
Well, I mean, his name is Gerd Gigerenzer. He's a well known German psychologist. And
link |
01:10:10.400
that's the one controversy which... it's been unpleasant. And no, I don't particularly
link |
01:10:18.960
want to talk about it.
link |
01:10:21.040
But are there open questions, even in your own mind, every once in a while? You
link |
01:10:25.680
know, we talked about semi autonomous vehicles. In my own mind, I see what the data says,
link |
01:10:31.640
but I'm also constantly torn. Do you have things where you or your studies have found something,
link |
01:10:38.200
but you're also intellectually torn about what it means? And there's maybe disagreements
link |
01:10:44.800
within your own mind about particular things.
link |
01:10:47.560
I mean, you know, one of the things that are interesting is how difficult it is
link |
01:10:52.280
for people to change their mind. Essentially, you know, once they are committed, people
link |
01:11:00.440
just don't change their mind about anything that matters. And that is surprising, but
link |
01:11:05.600
it's true about scientists. So the controversy that I described, you know, that's been going
link |
01:11:12.240
on like 30 years and it's never going to be resolved. And you build a system and you live
link |
01:11:19.000
within that system, and other systems of ideas look foreign to you, and there is
link |
01:11:27.000
very little contact and very little mutual influence. That happens a fair amount.
link |
01:11:33.400
Do you have hopeful advice or a message on that? Thinking about science, thinking about
link |
01:11:41.000
politics, thinking about things that have impact on this world, how can we change our
link |
01:11:47.840
mind?
link |
01:11:49.760
I think that, I mean, on things that matter, which are really political or
link |
01:11:56.920
religious, people just don't change their mind, by and large, and there's
link |
01:12:04.360
very little that you can do about it. What does happen is that leaders change
link |
01:12:13.360
their minds. So, for example, the American public doesn't really believe in
link |
01:12:19.840
climate change, doesn't take it very seriously. But if some religious leaders decided this
link |
01:12:26.920
is a major threat to humanity, that would have a big effect. So we have the opinions
link |
01:12:34.600
that we have, not because we know why we have them, but because we trust some people and
link |
01:12:39.840
we don't trust other people. And so it's much less about evidence than it is about stories.
link |
01:12:49.120
So one way to change your mind isn't at the individual level; it's that the leaders
link |
01:12:55.040
of the communities you look up to change their stories, and therefore your mind changes with
link |
01:12:59.640
them. So there's a guy named Alan Turing who came up with the Turing test. What do you think
link |
01:13:08.400
is a good test of intelligence? Perhaps we're drifting into a topic that we're maybe philosophizing
link |
01:13:18.760
about, but what do you think is a good test for intelligence, for an artificial intelligence
link |
01:13:22.240
system?
link |
01:13:23.240
Well, the standard definition of artificial general intelligence is that it can do anything
link |
01:13:32.760
that people can do and it can do them better. What we are seeing is that in many domains,
link |
01:13:39.540
you have domain specific devices or programs or software, and they beat people easily in
link |
01:13:51.360
a specified way. What we are very far from is that general ability, general purpose intelligence.
link |
01:14:04.080
In machine learning, people are approaching something more general. I mean, AlphaZero
link |
01:14:08.800
was much more general than AlphaGo, but it's still extraordinarily narrow and
link |
01:14:18.840
specific in what it can do. So we're quite far from something that can, in every domain,
link |
01:14:28.160
think like a human except better.
link |
01:14:30.960
What aspect... So the Turing test has been criticized: it's natural language conversation that is
link |
01:14:36.560
too simplistic, easy to quote-unquote pass under the constraints specified. What aspect
link |
01:14:44.080
of conversation would impress you if you heard it? Is it humor? What would impress the heck
link |
01:14:52.120
out of you if you saw it in conversation?
link |
01:14:55.680
Yeah, I mean, certainly wit would be impressive and humor would be more impressive than just
link |
01:15:06.120
factual conversation, which I think is easy. And allusions would be interesting and metaphors
link |
01:15:17.080
would be interesting. I mean, but new metaphors, not practiced metaphors. So there is a lot
link |
01:15:25.640
that would be sort of impressive that is completely natural in conversation, but that you really
link |
01:15:33.160
wouldn't expect.
link |
01:15:34.160
Does the possibility of creating a human level intelligence or superhuman level intelligence
link |
01:15:40.440
system excite you, scare you? How does it make you feel?
link |
01:15:47.440
I find the whole thing fascinating. Absolutely fascinating.
link |
01:15:51.520
So exciting.
link |
01:15:52.520
I think. And exciting. It's also terrifying, you know, but I'm not going to be around
link |
01:16:00.360
to see it. And so I'm curious about what is happening now, but I also know that predictions
link |
01:16:09.200
about it are silly. We really have no idea what it will look like 30 years from now.
link |
01:16:16.160
No idea.
link |
01:16:18.360
Speaking of silly, bordering on the profound, let me ask the question of, in your view,
link |
01:16:26.480
what is the meaning of it all? The meaning of life? These descendants of great apes that
link |
01:16:32.400
we are. Why? What drives us as a civilization, as human beings, as a force behind everything
link |
01:16:40.680
that you've observed and studied? Is there any answer or is it all just a beautiful mess?
link |
01:16:49.920
There is no answer that I can understand, and I'm not actively looking for
link |
01:16:58.760
one.
link |
01:16:59.760
Do you think an answer exists?
link |
01:17:02.160
No. There is no answer that we can understand. I'm not qualified to speak about what we cannot
link |
01:17:08.200
understand, but I know that we cannot understand reality, you know. I mean, there
link |
01:17:17.400
are a lot of things that we can do. I mean, you know, gravitational waves, that's a
link |
01:17:22.720
big moment for humanity. And when you imagine that ape, you know, being able to go back
link |
01:17:29.800
to the Big Bang, that's... but...
link |
01:17:34.200
But the why.
link |
01:17:35.200
Yeah, the why.
link |
01:17:36.200
It's bigger than us.
link |
01:17:37.200
The why is hopeless, really.
link |
01:17:40.200
Danny, thank you so much. It was an honor. Thank you for speaking today.
link |
01:17:43.640
Thank you.
link |
01:17:44.640
Thanks for listening to this conversation. And thank you to our presenting sponsor, Cash
link |
01:17:49.480
App. Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, a STEM education
link |
01:17:56.720
nonprofit that inspires hundreds of thousands of young minds to become future leaders and
link |
01:18:01.880
innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast,
link |
01:18:08.280
follow on Spotify, support it on Patreon, or simply connect with me on Twitter.
link |
01:18:13.880
And now, let me leave you with some words of wisdom from Daniel Kahneman.
link |
01:18:19.160
Intelligence is not only the ability to reason, it is also the ability to find relevant material
link |
01:18:24.780
in memory and to deploy attention when needed.
link |
01:18:29.320
Thank you for listening and hope to see you next time.