Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Lex Fridman Podcast #65
The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision making. He's the author of the popular book Thinking, Fast and Slow, which summarizes in an accessible way his research of several decades, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is the dichotomy between two modes of thought: what he calls System 1 is fast, instinctive, and emotional; System 2 is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each of these two types of thinking. His study of the human mind and its peculiar and fascinating limitations is both instructive and inspiring for those of us seeking to engineer intelligent systems.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N. I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode, and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience.

This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature: you can buy fractions of a stock, say one dollar's worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC. I'm excited to be working with Cash App to support one of my favorite organizations, FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means that donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world.

And now, here's my conversation with Daniel Kahneman.
You tell a story of an SS soldier early in the war, World War II, in Nazi-occupied France, in Paris, where you grew up. He picked you up and hugged you and showed you a picture of a boy, maybe not realizing that you were Jewish.

Not maybe, certainly not.

So I told you I'm from the Soviet Union, which was also significantly impacted by the war, and I'm Jewish as well. What do you think World War II taught us about human psychology, broadly?
Well, I think the only big surprise is the extermination policy, the genocide, by the German people. When you look back on it, I think that's a major surprise.

It's a surprise because...

It's a surprise that they could do it. It's a surprise that enough people willingly participated in that. That is a surprise. Now it's no longer a surprise, but it's changed many people's views, I think, about human beings. Certainly for me, the Eichmann trial teaches you something, because it's very clear that if it could happen in Germany, it could happen anywhere. It's not that the Germans were special. This could happen anywhere.
So what do you think that is? Do you think we're all capable of evil? We're all capable of cruelty?

I don't think in those terms. I think that what is certainly possible is that you can dehumanize people, so that you treat them not as people anymore but as animals. And in the same way that you can slaughter animals without feeling much of anything, it can be the same. I think the combination of dehumanizing the other side and having uncontrolled power over other people doesn't bring out the most generous aspects of human nature. So that Nazi soldier, he was a good man. And he was perfectly capable of killing a lot of people, and I'm sure he did.
But what did the Jewish people mean to the Nazis? What explains the dismissal of Jewish people as unworthy of being treated as human?

Again, it is surprising that it was so extreme, but it's not just one thing in human nature. I don't want to call it evil, but the distinction between the in-group and the out-group is very basic. So that's built in: the loyalty and affection towards the in-group, and the willingness to dehumanize the out-group, that is in human nature. We probably didn't need the Holocaust to teach us that, but the Holocaust is a very sharp lesson of what can happen to people and what people can do.
So, the effect of the in-group and the out-group...

It's clear. Those were people you could shoot. They were not human. There was no empathy, or very, very little empathy left. Occasionally there might have been some, and, by the way, the empathy disappeared very quickly if there was any initially. And the fact that everybody around you was doing it, the whole group doing it, everybody shooting Jews, I think that makes it permissible. Now, whether it could happen in every culture, or whether the Germans were just particularly efficient and disciplined so they could get away with it, that's an interesting question.

Are these artifacts of history, or is it human nature?

I think that's really human nature. You put some people in a position of power relative to other people, and then they become less human, they become different.

But in general, in war, outside of the concentration camps in World War II, it seems that war brings out darker sides of human nature, but also the beautiful things about human nature.

Well, what it brings out is the loyalty among soldiers. It brings out the bonding. Male bonding, I think, is a very real thing that happens. And there is a certain thrill to friendship, and there is certainly a certain thrill to friendship under risk, to shared risk. And so people have very profound emotions, up to the point where it gets so traumatic that little is left.

So let's talk about psychology a little bit. In your book, Thinking, Fast and Slow, you describe two modes of thought: System 1, the fast, instinctive, and emotional one, and System 2, the slower, deliberate, logical one. At the risk of asking Darwin to discuss the theory of evolution, can you describe the distinguishing characteristics of the two systems for people who have not read your book?

Well, the word system is a bit misleading, but at the same time that it's misleading, it's also very useful. What I call System 1 is easier to think of as a family of activities. Primarily, the way I describe it is that there are different ways for ideas to come to mind. Some ideas come to mind automatically; the standard example is two plus two, and then something happens to you. In other cases, you've got to do something, you've got to work, in order to produce the idea. My example, I always give the same pair of numbers, is 27 times 14, I think.

You have to perform some algorithm in your head, some steps.

Yes, and it takes time. It's very different. Nothing comes to mind, except something does come to mind, which is the algorithm that you've got to perform. And then it's work, and it engages short-term memory, it engages executive function, and it makes you incapable of doing other things at the same time. So the main characteristic of System 2 is that there is mental effort involved, and there is a limited capacity for mental effort, whereas System 1 is effortless, essentially. That's the major distinction.
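The 27 times 14 example can be made concrete: the "algorithm in your head" is a sequence of explicit, effortful steps, unlike the instant retrieval of two plus two. A minimal sketch in Python, purely illustrative (the function name and the particular step breakdown are mine, not Kahneman's):

```python
def multiply_step_by_step(a, b):
    """Long multiplication as explicit 'System 2'-style steps:
    break one factor into digits, form partial products, then sum them."""
    steps = []
    total = 0
    for power, digit in enumerate(reversed(str(b))):
        partial = a * int(digit) * 10 ** power  # one effortful sub-step
        steps.append(partial)
        total += partial
    return total, steps

# 27 * 14: partial products 27*4 = 108 and 27*10 = 270, summed to 378
result, steps = multiply_step_by_step(27, 14)
```

The point of the sketch is only that the answer is produced by a serial chain of intermediate results held in working memory, which is exactly what makes the task feel effortful.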
So it's really convenient to talk about two systems, but you also mentioned just now, and in general, that there are no two distinct systems in the brain, from a neurobiological or even from a psychology perspective. But from the experiments you've conducted, there do seem to be two emergent modes of thinking. At some point, these kinds of systems came into a brain architecture; maybe mammals share it. Or do you not think of it in those terms at all, that it's all a mush and these two things just emerge?

Evolutionary theorizing about this is cheap and easy. The way I think about it is that it's very clear that animals have a perceptual system, and that includes an ability to understand the world, at least to the extent that they can predict. They can't explain anything, but they can anticipate what's going to happen, and that's a key form of understanding the world. And my crude idea is that what I call System 2 grew out of this. There is language, and there is the capacity for manipulating ideas, the capacity for imagining futures, for imagining counterfactual things that haven't happened, and for doing conditional thinking. There are really a lot of abilities that, without language and without the very large brain that we have compared to others, would be impossible. Now, System 1 is more like what the animals are, but System 1 also can talk. It has language, it understands language. Indeed, it speaks for us. I'm not choosing every word as a deliberate process: I have some idea, and then the words come out, and that's automatic and effortless.

And many of the experiments you've done show that, listen, System 1 exists, it does speak for us, and we should be careful about the voice it provides.

Well, we have to trust it, because of the speed at which it acts. If we were dependent on System 2 for survival, we wouldn't survive very long, because it's very slow.

Yeah. Crossing the street.
Crossing the street. Many things depend on their being automatic. One very important aspect of System 1 is that it's not just instinctive. You used the word instinctive, but it also contains skills that clearly have been learned, skilled behavior like driving a car or speaking. In fact, skilled behavior has to be learned. You don't come equipped with driving; you have to learn how to drive, and you have to go through a period where driving is not automatic before it becomes automatic.

Yeah. This is where you talk about heuristics and biases: to make it automatic, you create a pattern, and then System 1 essentially matches a new experience against previously seen patterns. And when that match is not a good one, that's when all the cognitive mess happens. But most of the time it works.

Most of the time, the anticipation of what's going to happen next is correct, and most of the time the plan about what you have to do is correct. And so most of the time, everything works just fine. What's interesting, actually, is that in some sense System 1 is much better at what it does than System 2 is at what it does. There is that quality of effortlessly solving enormously complicated problems, which clearly exists, so that for a very good chess player, all the moves that come to their mind are strong moves. All the selection of strong moves happens unconsciously, automatically, and very, very fast, and all of that is in System 1. System 2 verifies.
So along this line of thinking, really what we are is machines that construct a pretty effective System 1. You could think of it that way. So if we think not about humans but about building artificial intelligence systems, robots, do you think all the features and bugs that you have highlighted in human beings are useful for constructing AI systems? Are both systems useful, perhaps, for instilling in robots?

What is happening these days in deep learning is more like a System 1 product than a System 2 product. Deep learning matches patterns and anticipates what's going to happen, so it's highly predictive. What deep learning doesn't have, and many people think this is the critical part, is the ability to reason, so there is no System 2 there. But I think, very importantly, it doesn't have any causality, or any way to represent meaning and to represent real interactions. So until that is solved, what can be accomplished is marvelous and very exciting, but limited.
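Kahneman's characterization of deep learning as a "System 1 product" can be illustrated with a deliberately tiny sketch: a predictor that anticipates what comes next purely from co-occurrence statistics, with no representation of meaning or causality. This is not a deep learning system, and the class name and example data are invented for illustration; it only shows the pattern-matching character he is describing:

```python
from collections import Counter, defaultdict

class PatternAnticipator:
    """Toy 'System 1'-style sketch: predicts what follows an item purely
    from observed frequencies, with no causal model of why it follows."""
    def __init__(self):
        self.following = defaultdict(Counter)

    def observe(self, sequence):
        # record every adjacent pair seen in the sequence
        for prev, nxt in zip(sequence, sequence[1:]):
            self.following[prev][nxt] += 1

    def anticipate(self, item):
        counts = self.following[item]
        # most frequently seen continuation; None if the pattern is unseen
        return counts.most_common(1)[0][0] if counts else None

m = PatternAnticipator()
m.observe(["clouds", "rain", "umbrella"])
m.observe(["clouds", "rain", "puddles"])
prediction = m.anticipate("clouds")  # it has seen "rain" follow "clouds"
```

The model "anticipates" well on patterns it has seen and fails silently on anything it hasn't, which is roughly the limitation being discussed: prediction without reasoning.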
That's actually a really nice way to think of current advances in machine learning: as essentially System 1 advances. So how far can we get with just System 1, if we think of deep learning and artificial intelligence systems?

It's very clear that DeepMind has already gone way beyond what people thought was possible. I think the thing that has impressed me most about the developments in AI is the speed. Things, at least in the context of deep learning, and maybe this is about to slow down, moved a lot faster than anticipated. The transition from solving chess to solving Go, it's bewildering how quickly it went. The move from AlphaGo to AlphaZero is sort of bewildering, the speed at which they accomplished that. Now, clearly there are many problems that you can solve that way, but there are some problems for which you need something else.

Something like reasoning.
Well, reasoning, and also, you know, one of the real mysteries. The psychologist Gary Marcus, who is also a critic of AI, points out, and I think he has a point, that humans learn quickly. Children don't need a million examples; they need two or three examples. So clearly there is a fundamental difference. And what enables a machine to learn quickly, what you have to build into the machine, because it's clear that you have to build some expectations or something into the machine to make it ready to learn quickly, that at the moment seems to be unsolved. I'm pretty sure that DeepMind is working on it, but if they have solved it, I haven't heard yet.

They're trying to, actually. Both they and OpenAI are trying to start to use neural networks to reason, to assemble knowledge. Of course causality, temporal causality, is out of reach for most everybody. You mentioned that the benefit of System 1 is essentially that it's fast, allows us to function in the world.
Fast and skilled, yeah. And it has a model of the world. In a sense, there was an early phase of AI that attempted to model reasoning, and it was moderately successful, but reasoning by itself doesn't get you much. Deep learning has been much more successful in terms of what it can do. But now it's an interesting question whether it's approaching its limits. What do you think?
I think absolutely. So I just talked to Yann LeCun. He thinks that we're not going to hit the limits with neural networks, that ultimately this kind of System 1 pattern matching will start to look like System 2 without significant transformation of the architecture. So I'm more with the majority of people who think that, yes, neural networks will hit a limit in their capability.

On the one hand, I have heard him say essentially that what they have accomplished is not a big deal, that they have just touched the surface, that basically they can't do unsupervised learning in an effective way. But you're telling me that he thinks that, within the current architecture, you can do causality and reasoning?

So he's very much a pragmatist, in the sense that he's saying that we're very far away. There's this idea that he expresses: we can only see one or two mountain peaks ahead, and there might be either a few more after, or thousands more after.

Yeah, I heard that metaphor.

Right. But nevertheless, he doesn't see the final answer as fundamentally unlike what we currently have, with neural networks being a huge part of that.

Yeah, I mean, that's very likely, because pattern matching is so much of what's going on.
And you can think of neural networks as processing information sequentially.

Yeah. I mean, there is an important aspect to this. For example, you get systems that translate, and they do a very good job, but they really don't know what they're talking about. And for that, and I was really quite surprised by this, you would need an AI that has sensation, an AI that is in touch with the world.

Yes, self-awareness, and maybe even something that resembles consciousness, those kinds of ideas.

Certainly awareness of what's going on, so that the words have meaning, or are in touch with some perception or some action.

Yeah, so that's a big thing for Yann, what he refers to as grounding to the physical space. So we're talking about the same thing.

Yeah. So how do you ground?

I mean, without grounding, you get a machine that doesn't know what it's talking about, because it is talking about the world, ultimately.
The open question is what it means to ground. We're very human-centric in our thinking: what does it mean for a machine to understand what it means to be in this world? Does it need to have a body? Does it need to have the finiteness that we humans have, all of these elements? It's an open question.

You know, I'm not sure about having a body, but having a perceptual system, and a body would be very helpful too. I mean, if you think about mimicking a human, having perception seems to be essential, so that you can build up, accumulate, knowledge about the world. But you can imagine a human who is completely paralyzed, and there's a lot that the human brain could learn with a paralyzed body. So if we got a machine that could do that, it would be a big deal.
And then the flip side of that, something you see in children, and something that in the machine learning world is called active learning: being able to play with the world. How important for developing System 1 or System 2 do you think it is to play with the world, to be able to interact with it?

A lot of what you learn is to anticipate the outcomes of your actions. You can see how babies learn it with their hands, how they learn to connect the movements of their hands with something that clearly happens in the brain, the ability of the brain to learn new patterns. It's the kind of thing that you get with artificial limbs: you connect it, and then people learn to operate the artificial limb really impressively quickly, at least from what I hear. So we have a system that is ready to learn the world through action.
At the risk of going into way too mysterious a land, what do you think it takes to build a system like that? Obviously we're very far from understanding how the brain works, but how difficult is it to build this mind of ours?

I think that Yann LeCun's answer, that we don't know how many mountains there are, is a very good answer. If you look at what Ray Kurzweil is saying, that strikes me as off the wall. But I think people like Demis Hassabis and Yann, the people actually doing the work, are much more realistic than that, fairly realistic, I think.

To maybe phrase it another way, from the perspective not of building it but of understanding it: how complicated are human beings, in the following sense? I work with autonomous vehicles and pedestrians, so we try to model pedestrians. How difficult is it to model a human being, their perception of the world, the two systems they operate under, sufficiently well to be able to predict whether the pedestrian is going to cross the road or not?
I'm fairly optimistic about that, actually, because what we're talking about is a huge amount of information that every vehicle has, and that feeds into one system, into one gigantic system. So anything that any vehicle learns becomes part of what the whole system knows, and with a system multiplier like that, there is a lot that you can do. Human beings are very complicated, and the system is going to make mistakes, but humans make mistakes too. I think they'll be able to, I think they already are able to, anticipate pedestrians; otherwise a lot of accidents would happen. They're able to get into a roundabout and into traffic, so they must be able to anticipate how people will react when they're sneaking in. And there's a lot of learning involved in that.
Currently, pedestrians are treated as things that cannot be hit; they're not treated as agents with whom you interact in a game-theoretic way. It's a totally open problem, and every time somebody tries to solve it, it seems to be harder than we think. And nobody has really tried to seriously solve the problem of that dance, because, I'm not sure if you've thought about the problem of pedestrians, but you're really putting your life in the hands of the driver.

You know, there is a dance, and part of the dance would be quite complicated. But, for example, when I cross the street and there is a vehicle approaching, I look the driver in the eye, and I think many people do that. That's a signal that I'm sending, and I would be sending that signal to an autonomous vehicle, and it had better understand it, because it means I'm crossing.
And there's another thing you do. Actually, I'll tell you what you do, because I've watched hundreds of hours of video on this. You do that before you step into the street, and when you step into the street, you actually look away.

Yeah. Now, what is that? What that's saying is, you're trusting that the car, which hasn't slowed down yet, will slow down.

Yeah. And you're telling him, I'm committed. This is like a game of chicken: I'm committed, and if I'm committed, I'm looking away.

So the question is whether a machine that observes that needs to understand mortality.

Here, I'm not sure that it's got to understand so much as it's got to anticipate. But, you know, you're surprising me, because I would think that maybe you can anticipate without understanding, because I think this is clearly what's happening in playing Go or in playing chess. There's a lot of anticipation, and there is zero understanding.
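Kahneman's "anticipate without understanding" can be sketched as a toy predictor that maps observable pedestrian cues to a crossing prediction purely from observed frequencies, with no model of intent, commitment, or mortality. The cue names, episode counts, and class name are all invented for illustration; no real autonomous vehicle stack works this simply:

```python
from collections import Counter

class CrossingAnticipator:
    """Toy sketch of anticipation without understanding: predict whether
    a pedestrian will cross from surface cues alone, using observed
    frequencies, with no model of what the cues mean."""
    def __init__(self):
        self.counts = Counter()  # (cues, crossed) -> number of episodes seen

    def observe(self, cues, crossed):
        self.counts[(cues, crossed)] += 1

    def predict(self, cues):
        yes = self.counts[(cues, True)]
        no = self.counts[(cues, False)]
        return yes >= no  # predict the more frequently observed outcome

model = CrossingAnticipator()
# hypothetical logged episodes: eye contact followed by looking away
# almost always preceded the pedestrian crossing
for _ in range(9):
    model.observe(("eye_contact", "looked_away"), crossed=True)
model.observe(("eye_contact", "looked_away"), crossed=False)
```

Such a model can get the prediction right for the eye-contact-then-look-away "commitment" signal described above while understanding nothing about why the signal works, which is exactly the distinction being drawn.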
So I thought that you didn't need a model of the human, a model of the human mind, to avoid hitting pedestrians, but you are suggesting that actually...

There you go, yeah.

You do. Then it's a lot harder than I thought.
And I have a follow-up question, to see where your intuition lies. It seems that almost every robot-human collaboration system is a lot harder than people realize. So do you think it's possible for robots and humans to collaborate successfully? We talked a little bit about semi-autonomous vehicles, like the Tesla Autopilot, but also tasks in general. If, as we discussed, current neural networks are kind of System 1, do you think those same systems can borrow humans for System 2-type tasks and collaborate?
Well, I think that in any system where humans and machines interact, the human will be superfluous within a fairly short time. That is, if the machine is advanced enough that it can really help the human, then it may not need the human for long. Now, it would be very interesting if there are problems that for some reason the machine cannot solve but that people could solve. Then you would have to build into the machine an ability to recognize that it is in that kind of problematic situation and to call the human. That cannot be easy without understanding. That is, it must be very difficult to program a recognition that you are in a problematic situation without understanding.

That's very true. In order to understand the full scope of situations that are problematic, you almost need to be smart enough to solve all those problems.
It's not clear to me how much the machine will need the human. I think the example of chess is very instructive. There was a time when Kasparov was saying that human-machine combinations would beat everybody. Even Stockfish doesn't need people, and AlphaZero certainly doesn't need people.

The question is, just as you said, how many problems are like chess and how many problems are not like chess. Every problem probably in the end is like chess; the question is how long that transition period is.
That's a question I would ask you.

Autonomous vehicles, just driving, is probably a lot more complicated than Go to solve, because it's open.

That's not surprising to me, because there is a hierarchical aspect to this: recognizing a situation, and then, within the situation, bringing up the relevant knowledge. For that hierarchical type of system to work, you need a more complicated system than we currently have.
A lot of people, as human beings, and this is probably one of the cognitive biases, think of driving as pretty simple, because they think of their own experience. This is actually a big problem for AI researchers, or people thinking about AI, because they evaluate how hard a particular problem is based on very limited knowledge, based on how hard it is for them to do the task, and then they take that for granted. Maybe you can speak to that, because most people tell me driving is trivial, and that humans are in fact terrible at driving. And I see that humans are actually incredible at driving, and that driving is really terribly difficult. Is that just another element of the effects that you've described in your work on the psychology side?

No, I mean, I would say that my research has contributed nothing to understanding the ecology, to understanding the structure of situations and the complexity of problems. What is very clear is that Go is endlessly complicated, but it's very constrained, whereas in the real world there are far fewer constraints and many more potential surprises.
So that's obvious... well, it's not always obvious to people, right?

Well, people thought that reasoning was hard and perceiving was easy, but they quickly learned that actually modeling vision was tremendously complicated, while modeling, even proving, theorems was relatively straightforward.

To push back on that a little bit, on the "quickly" part: it took several decades to learn that, and most people still haven't learned it. AI researchers' intuitions have changed, of course, but if you drift a little bit outside the specific AI field, the intuition is still that perception is easy to solve.

No, I mean, that's true. The intuitions of the public haven't changed radically. And, as you said, they're evaluating the complexity of problems by how difficult it is for them to solve those problems, and that has very little to do with the complexity of solving them in AI.
How do you think, from the perspective of an AI researcher, we deal with the intuitions of the public? Arguably, the combination of hype, investment, and public intuition is what led to the AI winters. I'm sure the same could be applied to tech in general: the intuition of the public leads to media hype, which leads to companies investing in the tech, and then the tech doesn't make the companies money, and there's a crash. Is there a way to educate people, to fight the, let's call it, System 1 thinking?

In general, no. I think that's the simple answer. It's going to take a long time before an understanding of what those systems can do becomes public knowledge, and then the expectations. There are several aspects that are going to be very complicated. The fact that you have a device that cannot explain itself is a major, major difficulty, and we're already seeing that. This is really something that is happening in the judicial system: you have systems that are clearly better at predicting parole violations than judges, but they can't explain their reasoning, and so people don't want them.

We seem, in System 1, to even use cues to make judgments about our environment. So on this explainability point: do you think humans can explain stuff?
No, but there is a very interesting aspect of that. Humans think they can explain themselves. So when you say something and I ask you, "Why do you believe that?", reasons will occur to you. But actually, my own belief is that in most cases the reasons have very little to do with why you believe what you believe. The reasons are a story that comes to your mind when you need to explain yourself. But people traffic in those explanations. Human interaction depends on those shared fictions and the stories that people tell themselves.
You just made me actually realize and we'll talk about stories in a second. That not to
link |
be cynical about it, but perhaps there's a whole movement of people trying to do explainable
link |
AI. And really, AI doesn't necessarily need to explain itself.
link |
It just needs to tell a convincing story.
link |
The story doesn't necessarily need to reflect the truth;
link |
it just needs to be convincing. There's something to that.
link |
You can say exactly the same thing in a way that sounds cynical or doesn't sound cynical.
link |
But the objective of having an explanation is to tell a story that will be acceptable
link |
to people. And for it to be acceptable, and to be robustly acceptable, it has to have
link |
some elements of truth. But the objective is for people to accept it.
link |
It's quite brilliant, actually. But on the stories that we tell, sorry to
link |
ask you the question that most people know the answer to, but you talk about two
link |
selves in terms of how life is lived, the experienced self and remembering self. Can
link |
you describe the distinction between the two?
link |
Well, sure. I mean, there is an aspect of life where, you know, most
link |
of the time we just live and we have experiences, and they're better and they're worse, and it
link |
goes on over time. And mostly we forget everything that happens, or we forget most of what happens.
link |
Then occasionally, when something ends or at different points, you evaluate the past
link |
and you form a memory, and the memory is schematic. It's not that you can roll a film of an interaction.
link |
You construct, in effect, the elements of a story about an episode. So there
link |
is the experience and there is the story that is created about the experience. And that's
link |
what I call the remembering self. So I had the image of two selves. There is a self that
link |
lives and there is a self that evaluates life. Now the paradox and the deep paradox in that
link |
is that we have one system or one self that does the living, but the other system, the
link |
remembering self is all we get to keep. And basically decision making and everything
link |
that we do is governed by our memories, not by what actually happened. It's governed
link |
by the story that we told ourselves, or by the story that we're keeping. So that's
link |
the distinction.
link |
I mean, there's a lot of brilliant ideas about the pursuit of happiness that come out of
link |
that. What are the properties of happiness which emerge from a remembering self?
link |
There are properties of how we construct stories that are really important.
link |
I studied a few, but a couple are really very striking. One is that
link |
in stories, time doesn't matter. There's a sequence of events, or there are highlights
link |
or not. And how long it took, you know, 'they lived happily ever after' or 'three years
link |
later' or something, time really doesn't matter. In stories, events matter, but
link |
time doesn't. That leads to a very interesting set of problems, because time is all we've got
link |
to live. I mean, you know, time is the currency of life. And yet time is basically not represented
link |
in evaluated memories. So that creates a lot of paradoxes that I've thought about.
link |
Yeah. They're fascinating. But if you were to give advice on how one lives a happy life
link |
based on such properties, what's the optimal?
link |
You know, I gave up, I abandoned happiness research because I couldn't solve that problem.
link |
I couldn't see it. In the first place, it's very clear that if you do talk
link |
in terms of those two selves, then what makes the remembering self happy and what
link |
makes the experiencing self happy are different things. And I asked the question: suppose
link |
you're planning a vacation and you're just told that at the end of the vacation, you'll
link |
get an amnesic drug, so you remember nothing. And they'll also destroy all your photos.
link |
So there'll be nothing. Would you still go on the same vacation? And it turns
link |
out we go on vacations in large part to construct memories, not to have experiences, but to
link |
construct memories. And it turns out that the vacation you would want for yourself
link |
if you knew you will not remember it is probably not the same vacation you would want for
link |
yourself if you will remember. So I have no solution to these problems, but clearly those...
link |
And you've talked about sort of how many minutes or hours you spend
link |
thinking about the vacation. It's an interesting way to think about it, because that's how you really
link |
experience the vacation outside of being in it. But there's also a modern way, I don't
link |
know if you think about this or interact with it, a modern way to magnify the
link |
remembering self, which is by posting on Instagram, on Twitter, on social networks. A lot of people
link |
live life for the picture that you take, that you post somewhere. And now thousands of people
link |
share it, and potentially millions. And then you can relive it even much more
link |
than just those minutes. Do you think about that magnification much?
link |
You know, I'm too old for social networks. I, you know, I've never seen Instagram, so
link |
I cannot really speak intelligently about those things. I'm just too old.
link |
But it's interesting to watch the exact effects you've described.
link |
That makes a very big difference. I mean, and it will also make a difference,
link |
and that I don't know. But it's clear that in some ways the devices that serve us
link |
supplant functions. So you don't have to remember phone numbers. You
link |
really don't have to know facts. I mean, in any number of conversations I'm involved in,
link |
somebody says, well, let's look it up. So in a way it's changed conversations.
link |
Well, it means that it's much less important to know things. You know, it used to be very
link |
important to know things. This is changing. So the requirements that we have
link |
for ourselves and for other people are changing because of all those supports. And
link |
I have no idea what Instagram does, but it's... Well, I'll tell you, I wish
link |
my remembering self could enjoy this conversation, but I'll get to enjoy it
link |
even more by watching it and then talking to others. It'll be about
link |
a hundred thousand people, as scary as this is to say, who'll listen to or watch this, right?
link |
It changes things. It changes the experience of the world, in that you seek out experiences
link |
which could be shared in that way. And I haven't seen it studied, but it's the same effects
link |
that you described. And I don't think the psychology of that magnification has been
link |
described yet, because it's a new world.
link |
But the sharing, there was a time when people read books
link |
and you could assume that your friends had read the same books that you read. So there
link |
was a kind of invisible sharing. There was a lot of sharing going on, and there was a lot
link |
of assumed common knowledge, and, you know, that was built in. I mean, it was obvious
link |
that you had read the New York Times. It was obvious that you had read the reviews. I mean,
link |
so a lot was taken for granted that was shared. And, you know, when there
link |
were three television channels, it was obvious that you'd seen one of them, probably the same one.
link |
So sharing was always there. It was just different.
link |
At the risk of inviting mockery from you, let me say that I'm also a fan of Sartre
link |
and Camus and the existentialist philosophers. And I'm joking, of course, about mockery,
link |
but from the perspective of the two selves, what do you think of the existentialist philosophy
link |
of life? So trying to really emphasize the experiencing self as the proper way, or
link |
the best way, to live life.
link |
I don't know enough philosophy to answer that, but, you know, the emphasis
link |
on experience is also the emphasis in Buddhism.
link |
Yeah, right. That's right.
link |
So you've just got to experience things and not evaluate, not
link |
pass judgment, not keep score. So...
link |
When you look at the grand picture of experience, do you think there's something to
link |
that, that one of the ways to achieve contentment and maybe even happiness is letting
link |
go of the procedures of the remembering self?
link |
Well, yeah, I mean, I think, you know, if one could imagine a life in which people don't
link |
score themselves, it feels as if that would be a better life, as if the self-scoring,
link |
the 'how am I doing?' kind of question, is not a very happy thing to have.
link |
But I got out of that field because I couldn't solve that problem, and that was because
link |
my intuition was that the experiencing self, that's reality.
link |
But then it turns out that what people want for themselves is not experiences. They want
link |
memories and they want a good story about their life. And so you cannot have a theory
link |
of happiness that doesn't correspond to what people want for themselves. And when
link |
I realized that this was where things were going, I really sort of left the field.
link |
Do you think there's something instructive about this emphasis on reliving memories in
link |
building AI systems? So currently, artificial intelligence systems are more like the experiencing
link |
self, in that they react to the environment. There's some pattern formation, like learning and
link |
so on, but you really don't construct memories, except in reinforcement learning, every
link |
once in a while, that you replay over and over.
link |
Yeah, but, you know, that in principle would not be...
link |
Do you think that's useful? Do you think it's a feature or a bug of human beings that we,
link |
that we look back?
link |
Oh, I think that's definitely a feature. That's not a bug. I mean, you have to look back
link |
in order to look forward. So without looking back, you couldn't
link |
really intelligently look forward.
link |
You're looking for the echoes of the same kind of experience in order to predict what...
link |
So Viktor Frankl, in his book Man's Search for Meaning, I'm not sure if you've
link |
read it, describes his experience at the concentration camps during World War II as
link |
a way to describe that finding, identifying a purpose in life, a positive purpose in life,
link |
can save one from suffering. First of all, do you connect with the philosophy that he describes?
link |
Not really. I mean, I can really see that somebody who has that feeling of
link |
purpose and meaning and so on, that could sustain you. I in general don't
link |
have that feeling, and I'm pretty sure that if I were in a concentration camp, I'd give
link |
up and die, you know? So he talks... he is a survivor.
link |
And, you know, he survived with that. And I'm not sure how essential to survival
link |
this sense is, but I do know when I think about myself that I would have given up: oh,
link |
this isn't going anywhere. And there is a sort of character that manages
link |
to survive in conditions like that. And then, because they survive, they tell stories, and
link |
it sounds as if they survived because of what they were doing. We have no idea. They survived
link |
because of the kind of people that they are, and the kind of people who survive
link |
would tell themselves stories of a particular kind. So I'm not...
link |
So you don't think seeking purpose is a significant driver in our being?
link |
Oh, I mean, it's a very interesting question, because when you ask people whether
link |
it's very important to have meaning in their life, they say, oh yes, that's the most important
link |
thing. But when you ask people, what kind of a day did you have, and, you know,
link |
what were the experiences that you remember, you don't get much meaning. You get social
link |
experiences. And some people say that, for example, in taking care of children, you
link |
know, the fact that they are your children and you're taking
link |
care of them makes a very big difference. I think that's entirely true. But it's
link |
more because of a story that we're telling ourselves, which is a very different story
link |
when we're taking care of our children or when we're taking care of other things.
link |
Jumping around a little bit: you've done a lot of experiments, so let me ask a question. Most
link |
of the work I do, for example, is in the real world, but most of the clean, good
link |
science that you can do is in the lab. Given that distinction, do you think we can understand
link |
the fundamentals of human behavior through controlled experiments in the lab? If we talk
link |
about pupil diameter, for example, it's much easier to measure when you can control lighting
link |
conditions, right? When we look at driving, lighting variation almost completely destroys
link |
your ability to use pupil diameter. But in the lab, as I mentioned, for semi-autonomous
link |
or autonomous vehicles in driving simulators, we don't capture true, honest
link |
human behavior in that particular domain. So what's your intuition? How much of human
link |
behavior can we study in this controlled environment of the lab? A lot, but you'd have to verify
link |
it, you know; your conclusions are basically limited to the situation, to
link |
the experimental situation. Then you have to make the big inductive leap to the real
link |
world. And that's the flair. That's where the difference, I think, between
link |
the good psychologists and the mediocre ones lies: in the sense that your experiment
link |
captures something that's important and something that's real, while others are just running experiments.
link |
So what is that process like, from the birth of an idea to its development in your mind to something
link |
that leads to an experiment? Is that similar to maybe what Einstein or a good physicist
link |
does? You basically use your intuition to build it up?
link |
Yeah, but I mean, you know, it's very skilled intuition. I mean, I just had that
link |
experience actually. I had an idea that turns out to be a very good idea a couple of days
link |
ago, and you have a sense of that building up. So I'm working with a collaborator,
link |
and he essentially was saying, you know, what are you doing? What's going on?
link |
And I really couldn't exactly explain it, but I knew this was going somewhere. But,
link |
you know, I've been around that game for a very long time. And so you develop
link |
that anticipation that, yes, this is worth following up. That's part of the skill.
link |
Is that something you can reduce to words, in describing a process, in the form of advice to others?
link |
Follow your heart, essentially.
link |
I mean, you know, it's like trying to explain what it's like to drive. It's not...
link |
You've got to break it apart, and it's not...
link |
And then you lose.
link |
And then you lose the experience.
link |
You mentioned collaboration. You've written about your collaboration with Amos Tversky,
link |
and this is you writing: 'The 12 or 13 years in which most of our work was joint were years
link |
of interpersonal and intellectual bliss. Everything was interesting. Almost everything
link |
was funny. And there was the recurrent joy of seeing an idea take shape. So many times in
link |
those years, we shared the magical experience of one of us saying something, which the other
link |
one would understand more deeply than the speaker had done. Contrary to the old laws
link |
of information theory, it was common for us to find that more information was received
link |
than had been sent. I have almost never had the experience with anyone else. If you have
link |
not had it, you don't know how marvelous collaboration can be.'
link |
So let me ask perhaps a silly question: how does one find and create such a collaboration?
link |
That may be asking like, how does one find love?
link |
Yeah, you have to be lucky. And I think you have to have the character for that, because
link |
I've had many collaborations. I mean, none were as exciting as with Amos, but I've had,
link |
and I'm having, very good ones. So it's a skill. I think I'm good at it. Not everybody is good
link |
at it. And then it's the luck of finding people who are also good at it.
link |
Is there advice, in some form, for a young scientist who also seeks to violate this law of information theory?
link |
I really think so much luck is involved. And those really serious collaborations,
link |
at least in my experience, are a very personal experience. And I have to like the person
link |
I'm working with. Otherwise, I mean, there is that kind of collaboration, which is like
link |
an exchange, a commercial exchange of giving this, you give me that. But the real ones
link |
are interpersonal. They're between people who like each other and who like making each
link |
other think and who like the way that the other person responds to your thoughts. You...
link |
But I already noticed that, even just me showing up here, you quickly started digging
link |
in on a particular problem I'm working on, and already new information started to emerge.
link |
Is that the process, just the process of curiosity, of talking to people about problems and seeing?
link |
I'm curious about anything to do with AI and robotics. And I knew you were dealing with
link |
that. So I was curious.
link |
Just follow your curiosity. Jumping around on the psychology front: there's the dramatic-sounding
link |
terminology of the replication crisis, but really just the effect that, at times,
link |
studies are not fully generalizable. They don't...
link |
You are being polite. It's worse than that.
link |
Is it? So I'm actually not fully familiar with how bad it is, right? What
link |
do you think is the source? Where do you think it comes from?
link |
I think I know what's going on actually. I mean, I have a theory about what's going on
link |
and what's going on is that there is, first of all, a very important distinction between
link |
two types of experiments. One type is within-subject, so the same person has
link |
two experimental conditions. The other type is between-subject, where some people
link |
are in this condition and other people are in that condition. They're different worlds. Between-
link |
subject experiments are much harder to predict and much harder to anticipate. And
link |
they're also more expensive, because you need more people. So between-
link |
subject experiments is where the problem is. It's not so much in within-subject experiments;
link |
it's really between-subject. And there is a very good reason why the intuitions of researchers about
link |
between-subject experiments are wrong. And that's because, when you are a researcher,
link |
you're in a within-subject situation. That is, you are imagining the two conditions, and
link |
you see the causality and you feel it. But in the between-subject condition, people live
link |
in one condition and the other one is just nowhere. So our intuitions are very weak about
link |
between-subject experiments. And that, I think, is something that people haven't realized.
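As an aside, the point Kahneman is making, that the same manipulation looks much stronger when each person experiences both conditions, can be sketched numerically: in a within-subject design, person-to-person baseline variance cancels out of the comparison, while in a between-subject design it stays in and drowns a small effect. A minimal simulation, with illustrative numbers that are not from the conversation:

```python
import random
import statistics

random.seed(42)

N = 2000          # simulated subjects per condition (illustrative)
EFFECT = 0.3      # true effect of the manipulation
PERSON_SD = 1.0   # person-to-person baseline variability
NOISE_SD = 0.5    # measurement noise per observation

# Within-subject: the same people experience both conditions.
people = [random.gauss(0, PERSON_SD) for _ in range(N)]
control = [p + random.gauss(0, NOISE_SD) for p in people]
treated = [p + EFFECT + random.gauss(0, NOISE_SD) for p in people]
# Each person is their own control, so the baseline cancels in the difference.
within_noise = statistics.stdev(t - c for t, c in zip(treated, control))

# Between-subject: a different group of people experiences the treatment,
# so the baseline variability stays in each group's spread.
between_noise = statistics.stdev(control)

print(f"within-subject noise:  {within_noise:.2f}")   # roughly sqrt(2) * NOISE_SD
print(f"between-subject noise: {between_noise:.2f}")  # roughly sqrt(PERSON_SD^2 + NOISE_SD^2)
```

The same 0.3 effect has to be detected against noise of about 0.7 within-subject versus about 1.1 between-subject, which is one way to see why researchers' within-subject intuitions overestimate between-subject power.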
link |
And in addition, because of that, we have no idea about the power
link |
of experimental manipulations, because the same manipulation is much more powerful when you
link |
are in the two conditions than when you live in only one condition. And so experimenters
link |
have very poor intuitions about between-subject experiments. And there is something else which
link |
is very important, I think, which is that almost all psychological hypotheses are true.
link |
That is, in the sense that, you know, directionally, if you have a hypothesis that A really causes
link |
B, it's not true that A causes the opposite of B. Maybe A just has very little effect,
link |
but hypotheses are mostly true, except mostly they're very weak. They're much weaker than
link |
you think when you are imagining them. So the reason I'm excited about that is that I recently
link |
heard about some friends of mine who essentially funded 53 studies of behavioral
link |
change by 20 different teams of people with a very precise objective of changing the number
link |
of times that people go to the gym. And the success rate was zero. Not one of the 53 studies
link |
worked. Now, what's interesting about that is those are the best people in the field
link |
and they have no idea what's going on. So they're not calibrated. They think that it's
link |
going to be powerful because they can imagine it, but actually it's just weak because you
link |
are focusing on your manipulation and it feels powerful to you. There's a thing that I've
link |
written about that's called the focusing illusion. That is that when you think about something,
link |
it looks very important, more important than it really is.
link |
More important than it really is. But if you don't see that effect in the 53 studies, doesn't
link |
that mean you just report that? So what is, I guess, the solution to that?
link |
Well, I mean, the solution is for people to trust their intuitions less, or to try out
link |
their intuitions beforehand. I mean, experiments have to be preregistered, and by the time
link |
you run an experiment, you have to be committed to it, and you have to run the experiment seriously
link |
enough and in public. And this is happening. The interesting thing is what happens before,
link |
how people prepare themselves and how they run pilot experiments. It's going to
link |
change the way psychology is done, and it's already happening.
link |
Do you have hope for, and this might connect to study sample size,
link |
do you have hope for the internet?
link |
Well, I mean, you know, this is really happening. MTurk, everybody's running experiments on
link |
MTurk and it's very cheap and very effective.
link |
Do you think that changes psychology, essentially? Because you're thinking you cannot run 10,000...
link |
Eventually it will. I mean, you know, I can't put my finger on how exactly, but
link |
that's been true in psychology: whenever an important new method came in, it changed
link |
the field. And MTurk is really a method, because it makes it very much easier to
link |
do some things.
link |
There are undergrad students who'll ask me, you know, how big a neural network should
link |
be for a particular problem. So let me ask you an equivalent question: how many
link |
subjects does a study need for it to have a conclusive result?
link |
Well, it depends on the strength of the effect. So if you're studying visual perception or
link |
the perception of color, many of the classic results in color perception were
link |
done on three or four people. And I think one of them was colorblind, partly colorblind.
link |
But vision, you know, is highly reliable. You don't need a lot of replications
link |
for some types of neurological experiments. When you're studying weaker phenomena, and
link |
especially when you're studying them between subjects, then you need a lot more subjects
link |
than people have been running. That's one of the things that are happening
link |
in psychology now: the statistical power of experiments is increasing rapidly.
link |
Does the between-subject experiment, as the number of subjects goes to infinity, approach...?
link |
Well, I mean, you know, 'goes to infinity' is exaggerated. But the standard number
link |
of subjects for an experiment in psychology was 30 or 40. And for a weak effect, that's
link |
simply not enough. You may need a couple of hundred. I mean, it's that sort of order...
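As an aside, Kahneman's 'couple of hundred' figure matches a standard power calculation. A minimal sketch, assuming a two-sided, two-sample comparison at alpha = 0.05 with 80% power, with the effect size expressed as Cohen's d (the function name here is our own, and the formula is the usual normal approximation rather than an exact t-test computation):

```python
import math
from statistics import NormalDist

def n_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Rough subjects-per-group estimate for a two-sample, two-sided test.

    Uses the normal-approximation formula
        n = 2 * ((z_{1-alpha/2} + z_{power}) / d)^2,
    where d is Cohen's standardized effect size.
    """
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # critical value for the two-sided test
    z_beta = z.inv_cdf(power)           # quantile for the desired power
    return math.ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# A weak effect (d = 0.2) needs hundreds of subjects per group,
# so the classic n = 30 or 40 is far too small.
print(n_per_group(0.2))  # 393 per group
# A strong effect (d = 0.8) is detectable with small classic samples.
print(n_per_group(0.8))  # 25 per group
```

Because the required n scales with 1/d squared, halving the effect size quadruples the sample you need, which is why weak between-subject effects demand samples an order of magnitude larger than the old standard.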
link |
What are the major disagreements in theories and effects that you've observed throughout
link |
your career that still stand today? You've worked on several fields, but what still is
link |
out there as a major disagreement that pops into your mind?
link |
I've had one extreme experience of, you know, controversy with somebody who really doesn't
link |
like the work that Amos Tversky and I did. And he's been after us for 30 years or more.
link |
Do you want to talk about it?
link |
Well, I mean, his name is Gerd Gigerenzer. He's a well known German psychologist. And
link |
that's the one controversy, and it's been unpleasant. And no, I don't particularly
link |
want to talk about it.
link |
But are there open questions, even in your own mind, every once in a while? You
link |
know, we talked about semi-autonomous vehicles. In my own mind, I see what the data says,
link |
but I'm also constantly torn. Do you have things where you or your studies have found something,
link |
but you're also intellectually torn about what it means? And there's maybe disagreement
link |
within your own mind about particular things?
link |
I mean, you know, one of the things that is interesting is how difficult it is
link |
for people to change their minds. Essentially, you know, once they are committed, people
link |
just don't change their minds about anything that matters. And that is surprising, but
link |
it's true of scientists. So the controversy that I described, you know, that's been going
link |
on for like 30 years, and it's never going to be resolved. You build a system and you live
link |
within that system, and other systems of ideas look foreign to you, and there is
link |
very little contact and very little mutual influence. That happens a fair amount.
link |
Do you have hopeful advice or a message on that? Thinking about science, thinking about
link |
politics, thinking about things that have an impact on this world, how can we change our minds?
link |
I think that, I mean, on things that matter, which are really political or
link |
religious, people just don't change their minds. And by and large, there's
link |
very little that you can do about it. What does happen is that leaders change
link |
their minds. So for example, the public, the American public doesn't really believe in
link |
climate change, doesn't take it very seriously. But if some religious leaders decided this
link |
is a major threat to humanity, that would have a big effect. So that we have the opinions
link |
that we have, not because we know why we have them, but because we trust some people and
link |
we don't trust other people. And so it's much less about evidence than it is about stories.
link |
So one way to change your mind isn't at the individual level; it's that the leaders
link |
of the communities you look up to change their stories, and therefore your mind changes with
link |
them. So there's a guy named Alan Turing who came up with the Turing test. What do you think
link |
is a good test of intelligence? Perhaps we're drifting into a topic that we're maybe philosophizing
link |
about, but what do you think is a good test for intelligence, for an artificial intelligence system?
link |
Well, the standard definition of artificial general intelligence is that it can do anything
link |
that people can do and it can do them better. What we are seeing is that in many domains,
link |
you have domain specific devices or programs or software, and they beat people easily in
link |
a specified way. What we are very far from is that general ability, general purpose intelligence.
link |
In machine learning, people are approaching something more general. I mean,
link |
AlphaZero was much more general than AlphaGo, but it's still extraordinarily narrow and
link |
specific in what it can do. So we're quite far from something that can, in every domain,
link |
think like a human except better.
link |
So the Turing test has been criticized: it's natural language conversation, and it is
link |
too simplistic. It's easy to quote-unquote pass under the constraints specified. What aspect
link |
of conversation would impress you if you heard it? Is it humor? What would impress the heck
link |
out of you if you saw it in conversation?
link |
Yeah, I mean, certainly wit would be impressive and humor would be more impressive than just
link |
factual conversation, which I think is easy. And allusions would be interesting and metaphors
link |
would be interesting. But new metaphors, not practiced metaphors. So there is a lot
link |
that would be sort of impressive, that is completely natural in conversation, but that you really...
link |
Does the possibility of creating a human level intelligence or superhuman level intelligence
link |
system excite you, scare you? How does it make you feel?
link |
I find the whole thing fascinating. Absolutely fascinating.
link |
I think so. And exciting. It's also terrifying, you know, but I'm not going to be around
link |
to see it. And so I'm curious about what is happening now, but I also know that predictions
link |
about it are silly. We really have no idea what it will look like 30 years from now.
link |
Speaking of silly, bordering on the profound, let me ask the question of, in your view,
link |
what is the meaning of it all? The meaning of life? Descendants of great apes that
link |
we are, what drives us as a civilization, as human beings? What's the force behind everything
link |
that you've observed and studied? Is there any answer, or is it all just a beautiful mess?
link |
There is no answer that I can understand, and I'm not actively looking for one.
link |
Do you think an answer exists?
link |
No. There is no answer that we can understand. I'm not qualified to speak about what we cannot
link |
understand, but I know that we cannot understand reality, you know. I mean, there
link |
are a lot of things that we can do. I mean, you know, gravitational waves, that's a
link |
big moment for humanity. And when you imagine that ape, you know, being able to go back
link |
to the Big Bang, that's... but...
link |
It's bigger than us.
link |
The why is hopeless, really.
link |
Danny, thank you so much. It was an honor. Thank you for speaking today.
link |
Thanks for listening to this conversation. And thank you to our presenting sponsor, Cash
link |
App. Download it, use code LexPodcast, you'll get $10 and $10 will go to FIRST, a STEM education
link |
nonprofit that inspires hundreds of thousands of young minds to become future leaders and
link |
innovators. If you enjoy this podcast, subscribe on YouTube, give it five stars on Apple Podcast,
link |
follow on Spotify, support it on Patreon, or simply connect with me on Twitter.
link |
And now, let me leave you with some words of wisdom from Daniel Kahneman.
link |
Intelligence is not only the ability to reason, it is also the ability to find relevant material
link |
in memory and to deploy attention when needed.
link |
Thank you for listening and hope to see you next time.