Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Lex Fridman Podcast #65

The following is a conversation with Daniel Kahneman, winner of the Nobel Prize in Economics for his integration of economic science with the psychology of human behavior, judgment, and decision making. He's the author of the popular book Thinking, Fast and Slow, which summarizes in an accessible way his research of several decades, often in collaboration with Amos Tversky, on cognitive biases, prospect theory, and happiness. The central thesis of this work is the dichotomy between two modes of thought: what he calls System 1 is fast, instinctive, and emotional; System 2 is slower, more deliberative, and more logical. The book delineates cognitive biases associated with each of these two types of thinking. His study of the human mind and its peculiar and fascinating limitations is both instructive and inspiring for those of us seeking to engineer intelligent systems.

This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube, give it five stars on Apple Podcasts, follow on Spotify, support it on Patreon, or simply connect with me on Twitter at Lex Fridman, spelled F-R-I-D-M-A-N.

I recently started doing ads at the end of the introduction. I'll do one or two minutes after introducing the episode and never any ads in the middle that can break the flow of the conversation. I hope that works for you and doesn't hurt the listening experience.

This show is presented by Cash App, the number one finance app in the App Store. I personally use Cash App to send money to friends, but you can also use it to buy, sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature. You can buy fractions of a stock, say $1 worth, no matter what the stock price is. Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC.

I'm excited to be working with Cash App to support one of my favorite organizations, called FIRST, best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds of thousands of students in over 110 countries and have a perfect rating on Charity Navigator, which means the donated money is used to maximum effectiveness. When you get Cash App from the App Store or Google Play and use code LEXPODCAST, you'll get $10, and Cash App will also donate $10 to FIRST, which again is an organization that I've personally seen inspire girls and boys to dream of engineering a better world.

And now, here's my conversation with Daniel Kahneman.
You tell a story of an SS soldier early in the war, World War II, in Nazi-occupied France, in Paris, where you grew up. He picked you up and hugged you and showed you a picture of a boy, maybe not realizing that you were Jewish.

Not maybe. Certainly not.

So I told you I'm from the Soviet Union, which was significantly impacted by the war as well, and I'm Jewish as well. What do you think World War II taught us about human psychology broadly?

Well, I think the only big surprise is the extermination policy, the genocide, by the German people. When you look back on it, I think that's a major surprise.

It's a surprise because...

It's a surprise that they could do it. It's a surprise that enough people willingly participated in that. This is a surprise. Now it's no longer a surprise, but it's changed many people's views, I think, about human beings. Certainly for me, the Eichmann trial teaches you something, because it's very clear that if it could happen in Germany, it could happen anywhere. It's not that the Germans were special. This could happen anywhere.

So what do you think that is? Do you think we're all capable of evil? We're all capable of cruelty?

I don't think in those terms. I think that what is certainly possible is that you can dehumanize people, so that you treat them not as people anymore but as animals, and in the same way that you can slaughter animals without feeling much of anything, it can be the same. I think the combination of dehumanizing the other side and having uncontrolled power over other people doesn't bring out the most generous aspect of human nature. So that Nazi soldier, he was a good man, and he was perfectly capable of killing a lot of people, and I'm sure he did.
But what did the Jewish people mean to Nazis? What explains the dismissal of Jewish people as unworthy?

Again, it is surprising that it was so extreme, but it's not one thing in human nature. I don't want to call it evil, but the distinction between the in-group and the out-group is very basic. So that's built in: the loyalty and affection towards the in-group, and the willingness to dehumanize the out-group, that is in human nature. We probably didn't need the Holocaust to teach us that, but the Holocaust is a very sharp lesson of what can happen to people and what people can do.

So the effect of the in-group and the out-group?

It's clear that those were people... you could shoot them, they were not human. There was no empathy, or very, very little empathy left. Occasionally there might have been, and very quickly, by the way, the empathy disappeared if there was any initially. And the fact that everybody around you was doing it, the whole group doing it, everybody shooting Jews, I think that makes it permissible. Now, whether it could happen in every culture, or whether the Germans were just particularly efficient and disciplined so they could get away with it, that's an interesting question.

Are these artifacts of history, or is it human nature?

I think that's really human nature. You know, you put some people in a position of power relative to other people, and then they become less human, they become different.

But in general, in war, outside of the concentration camps in World War II, it seems that war brings out darker sides of human nature, but also the beautiful things about human nature.

Well, you know, what it brings out is the loyalty among soldiers. It brings out the bonding, male bonding. I think it's a very real thing that happens. And there is a certain thrill to friendship, and there is certainly a certain thrill to friendship under risk, to shared risk. And so people have very profound emotions, up to the point where it gets so traumatic that little is left.

So let's talk about psychology a little bit.
In your book Thinking, Fast and Slow, you describe two modes of thought: System 1, the fast, instinctive, and emotional one, and System 2, the slower, deliberate, logical one. At the risk of asking Darwin to discuss the theory of evolution, can you describe the distinguishing characteristics of the two systems for people who have not read your book?

Well, the word "system" is a bit misleading, but at the same time it's also very useful. What I call System 1 is easier to think of as a family of activities. Primarily, the way I describe it is that there are different ways for ideas to come to mind. Some ideas come to mind automatically. The standard example is two plus two, and then something happens to you. In other cases, you've got to do something, you've got to work, in order to produce the idea. My example, I always give the same pair of numbers, is 27 times 14, I think.

You have to perform some algorithm in your head, some steps.

Yes, and it takes time. It's very different. Nothing comes to mind, except something comes to mind, which is the algorithm that you've got to perform. And then it's work, and it engages short-term memory and engages executive function, and it makes you incapable of doing other things at the same time. So the main characteristic of System 2 is that there is mental effort involved, and there is a limited capacity for mental effort, whereas System 1 is effortless, essentially. That's the major distinction.

So, you know, it's really convenient to talk about two systems, but you also mentioned just now, and in general, that there are no two distinct systems in the brain, from a neurobiological, even from a psychological, perspective. But why does it seem, from the experiments you've conducted, that there are two emergent modes of thinking? At some point, did these kinds of systems come into a brain architecture, one that maybe mammals share? Or do you not think of it in those terms at all, that it's all a mush and these two things just emerge?
You know, evolutionary theorizing about this is cheap and easy. The way I think about it is that it's very clear that animals have a perceptual system, and that includes an ability to understand the world, at least to the extent that they can predict. They can't explain anything, but they can anticipate what's going to happen, and that's a key form of understanding the world. And my crude idea is that what I call System 2 grew out of this. And, you know, there is language, and there is the capacity for manipulating ideas, the capacity for imagining futures, for imagining counterfactuals, things that haven't happened, and for doing conditional thinking. There are really a lot of abilities that, without language and without the very large brain that we have compared to others, would be impossible. Now, System 1 is more like what the animals are, but System 1 can also talk. I mean, it has language, it understands language. Indeed, it speaks for us. I'm not choosing every word as a deliberate process. I have some idea, and then the words come out, and that's automatic and effortless.

And many of the experiments you've done show that, listen, System 1 exists and it does speak for us, and we should be careful about the voice it provides.

Well, we have to trust it, because of the speed at which it acts. If we depended on System 2 for survival, we wouldn't survive very long, because it's very slow.

Yeah, crossing the street.

Crossing the street. I mean, many things depend on their being automatic. One very important aspect of System 1 is that it's not instinctive. You used the word "instinctive." It contains skills that clearly have been learned, so that skilled behavior, like driving a car or speaking... in fact, skilled behavior has to be learned. You don't come equipped with driving; you have to learn how to drive, and you have to go through a period where driving is not automatic before it becomes automatic.

So, yeah, you construct... I mean, this is where you talk about heuristics and biases: to make it automatic, you create a pattern, and then System 1 essentially matches a new experience against a previously seen pattern.
And when that match is not a good one, that's when all the cognitive mess happens. But most of the time it works, and so, most of the time, the anticipation of what's going to happen next is correct, and most of the time the plan about what you have to do is correct. And so, most of the time, everything works just fine. What's interesting, actually, is that in some sense System 1 is much better at what it does than System 2 is at what it does. That is, there is that quality of effortlessly solving enormously complicated problems, which clearly exists. So that the chess player, a very good chess player... all the moves that come to their mind are strong moves. So all the selection of strong moves happens unconsciously and automatically and very, very fast. And all that is in System 1.
So System 2 verifies. So, along this line of thinking, really what we are is machines that construct a pretty effective System 1.

You could think of it that way.

So we're now talking about humans. But if we think about building artificial intelligence systems, robots, do you think all the features and bugs that you have highlighted in human beings are useful for constructing AI systems? Are both systems useful, perhaps, for instilling in robots?

What is happening these days is that what is happening in deep learning is actually more like a System 1 product than like a System 2 product. I mean, deep learning matches patterns and anticipates what's going to happen, so it's highly predictive. What deep learning doesn't have, and many people think that this is critical, is the ability to reason. So there is no System 2 there. But I think, very importantly, it doesn't have any causality or any way to represent meaning and to represent real interaction. So until that is solved, what can be accomplished is marvelous and very exciting, but limited.
That's actually really nice, to think of current advances in machine learning as essentially System 1 advances. So how far can we get with just System 1, if we think of deep learning and artificial systems?

I mean, you know, it's very clear that DeepMind has already gone way, way beyond what people thought was possible. I think the thing that has impressed me most about the developments in AI is the speed. Things, at least in the context of deep learning, and maybe this is about to slow down, but things moved a lot faster than anticipated. The transition from solving chess to solving Go... I mean, that's bewildering how quickly it went. The move from AlphaGo to AlphaZero is sort of bewildering, the speed at which they accomplished that. Now, clearly there are many problems that you can solve that way, but there are some problems for which you need something else.

Something like reasoning.

Well, reasoning and also, you know, one of the real mysteries. The psychologist Gary Marcus, who is also a critic of AI... what he points out, and I think he has a point, is that humans learn quickly. Children don't need a million examples; they need two or three examples. So clearly there is a fundamental difference. And what enables a machine to learn quickly, what you have to build into the machine... because it's clear that you have to build some expectations, or something, into the machine to make it ready to learn quickly. That, at the moment, seems to be unsolved. I'm pretty sure that DeepMind is working on it, but if they have solved it, I haven't heard yet.
They're trying. Actually, they and OpenAI are trying to start to use neural networks to reason, to assemble knowledge. Of course, causality, temporal causality, is out of reach for most everybody. You mentioned that the benefit of System 1 is essentially that it's fast, allows us to function in the world.

Fast and skilled, you know. It's skilled. And it has a model of the world. In a sense, I mean, there was the early phase of AI that attempted to model reasoning, and they were moderately successful. But, you know, reasoning by itself doesn't get you much. Deep learning has been much more successful in terms of what it can do. But now, that's an interesting question: whether it's approaching its limits. What do you think?
I think absolutely. So I just talked to Yann LeCun. He mentioned, you know...

I know him.

So he thinks that we're not going to hit the limits with neural networks, that ultimately this kind of System 1 pattern matching will start to look like System 2 without significant transformation of the architecture. So I'm more with the majority of people who think that, yes, neural networks will hit a limit in their capability.

On the one hand, I have heard him say, essentially, that what they have accomplished is not a big deal, that they have just scratched the surface, that basically they can't do unsupervised learning in an effective way. But you're telling me that he thinks that, within the current architecture, you can do causality and reasoning?

So he's very much a pragmatist, in the sense of saying that we're very far away. I think there's this idea that he mentions: we can only see one or two mountain peaks ahead, and there might be either a few more after, or thousands more after.

Yeah, so that kind of idea.

I heard that metaphor, right.

But nevertheless, he doesn't see the final answer as fundamentally different from what we currently have, so neural networks would be a huge part of that.
Yeah. I mean, that's very likely, because pattern matching is so much of what's going on.

And you can think of neural networks as processing information sequentially.

Yeah. I mean, you know, there is an important aspect to this. For example, you get systems that translate, and they do a very good job, but they really don't know what they're talking about. And for that... I'm really quite surprised... for that, you would need an AI that has sensation, an AI that is in touch with the world.

Yeah, self-awareness, and maybe even something that resembles consciousness, those kinds of ideas.

Certainly awareness of what's going on, so that the words have meaning, or are in touch with some perception or some action.

Yeah, so that's a big thing for Yann, and what he refers to as grounding to the physical space.

So we're talking about the same thing, yeah. But how do you ground? I mean, without grounding, you get a machine that doesn't know what it's talking about, because it is talking about the world, ultimately.

The open question is what it means to ground. I mean, we're very human-centric in our thinking, but what does it mean for a machine to understand what it means to be in this world? Does it need to have a body? Does it need to have a finiteness like we humans have? All of these elements... it's a very open question.
You know, I'm not sure about having a body, but having a perceptual system... having a body would be very helpful too. I mean, if you think about mimicking a human... but having perception, that seems to be essential, so that you can build, you can accumulate knowledge about the world. You can imagine a human completely paralyzed, and there's a lot that the human brain could learn with a paralyzed body. So if we got a machine that could do that, that would be a big deal.

And then the flip side of that, something you see in children, and something the machine learning world calls active learning, is being able to play with the world. How important, for developing System 1 or System 2, do you think it is to play with the world, to be able to interact with it?

Certainly a lot of what you learn is learning to anticipate the outcomes of your actions. I mean, you can see how babies learn it, you know, with their hands, how they learn to connect the movements of their hands with something that clearly happens in the brain, and the ability of the brain to learn new patterns. It's the kind of thing that you get with artificial limbs: you connect it, and then people learn to operate the artificial limb really impressively quickly, at least from what I hear. So we have a system that is ready to learn the world through action.

At the risk of going into way too mysterious a land, what do you think it takes to build a system like that? Obviously, we're very far from understanding how the brain works, but how difficult is it to build this mind of ours?

You know, I mean, I think that Yann LeCun's answer, that we don't know how many mountains there are... I think that's a very good answer.
I think that if you look at what Ray Kurzweil is saying, that strikes me as off the wall. But I think people are much more realistic than that, where actually Demis Hassabis is, and Yann is... so the people who are actually doing the work are fairly realistic, I think.
To maybe phrase it another way, from a perspective not of building it but of understanding it: how complicated are human beings, in the following sense? You know, I work with autonomous vehicles and pedestrians, so we try to model pedestrians. How difficult is it to model a human being, their perception of the world, the two systems they operate under, sufficiently well to be able to predict whether the pedestrian is going to cross the road or not?

I'm, you know, fairly optimistic about that, actually, because what we're talking about is a huge amount of information that every vehicle has, and that feeds into one system, into one gigantic system. And so anything that any vehicle learns becomes part of what the whole system knows, and with a system multiplier like that, there is a lot that you can do. So human beings are very complicated, and, you know, the system is going to make mistakes, but humans make mistakes too. I think they'll be able to... I think they are able to anticipate pedestrians; otherwise a lot would happen. They're able to get into a roundabout and into traffic, so they must be able to expect, to anticipate, how people will react when they're sneaking in. And there's a lot of learning that's involved in that.
Currently, pedestrians are treated as things that cannot be hit; they're not treated as agents with whom you interact in a game-theoretic way. So, I mean, it's a totally open problem, and every time somebody tries to solve it, it seems to be harder than we think. And nobody's really tried to seriously solve the problem of that dance, because, I'm not sure if you've thought about the problem of pedestrians, but you're really putting your life in the hands of the driver.

You know, there is a dance, a part of the dance, that would be quite complicated. For example, when I cross the street and there is a vehicle approaching, I look the driver in the eye, and I think many people do that. And, you know, that's a signal that I'm sending. And I would be sending that signal to an autonomous vehicle, and it had better understand it, because it means I'm crossing.
So, there's another thing you do, actually. I'll tell you what you do, because we've watched hundreds of hours of video on this. When you step into the street... you do that before you step into the street... and when you step into the street, you actually look away.

Look away?

Yeah. Now, what is that? What that's saying is, I mean, you're trusting that the car, which hasn't slowed down yet, will slow down. And you're telling it, yeah, I'm committed. I mean, this is like in a game of chicken. So I'm committed, and if I'm committed, I'm looking away. So you just have to stop. So the question is whether a machine that observes that needs to understand mortality.

Here, I'm not sure that it's got to understand so much as it's got to anticipate. And here, you know, you're surprising me, because I would think that maybe you can anticipate without understanding, because I think this is clearly what's happening in playing Go or in playing chess: there's a lot of anticipation and there is zero understanding. So I thought that you didn't need a model of the human, a model of the human mind, to avoid hitting pedestrians. But you are suggesting that you do.

Yeah, you do. And then it's a lot harder.

So I have a follow-up question, to see where your intuition lies. It seems that almost every robot-human collaboration system is a lot harder than people realize. Do you think it's possible for robots and humans to collaborate successfully? We talked a little bit about semi-autonomous vehicles, like the Tesla Autopilot, but this applies to tasks in general. If, as we discussed, current neural networks are kind of System 1, do you think those same systems can borrow humans for System 2 type tasks and collaborate successfully?

Well, I think that in any system where humans and the machine interact, the human will be superfluous within a fairly short time. That is, if the machine is advanced enough so that it can really help the human, then it may not need the human for long. Now, it would be very interesting if there are problems that for some reason the machine cannot solve but that people could solve; then you would have to build into the machine an ability to recognize that it is in that kind of problematic situation and to call the human. That cannot be easy without understanding. That is, it must be very difficult to program a recognition that you are in a problematic situation without understanding the problem.

That is very true. In order to understand the full scope of situations that are problematic, you almost need to be smart enough to solve all those problems.

It is not clear to me how much the machine will need the human.
I think the example of chess is very instructive. There was a time at which Kasparov was saying that human-machine combinations would beat everybody. Even Stockfish doesn't need people, and AlphaZero certainly doesn't need people.

The question is, just like you said, how many problems are like chess, and how many problems are not like chess? Every problem probably, in the end, is like chess. The question is, how long is that transition period? I mean, that's a question I would ask you: in terms of an autonomous vehicle, just driving is probably a lot more complicated than Go to solve.

Yes. And that's surprising, because it's open...

No, I mean, that's not surprising to me, because there is a hierarchical aspect to this: recognizing a situation and then, within the situation, bringing up the relevant knowledge. And for that hierarchical type of system to work, you need a more complicated system than we currently have.

A lot of people think, and this is probably one of the cognitive biases, that driving is pretty simple, because they think of their own experience. This is actually a big problem for AI researchers, or people thinking about AI, because they evaluate how hard a particular problem is based on very limited knowledge, basically on how hard it is for them to do the task, and then they take that for granted. Maybe you can speak to that, because most people tell me driving is trivial and that humans, in fact, are terrible at driving. And what I see is that humans are actually incredible at driving, and driving is really terribly difficult. So is that just another element of the effects that you've described in your work on the psychology side?

No, I mean, I haven't really... I would say that my research has contributed nothing to understanding the ecology, to understanding the structure of situations and the complexity of problems. What we know is, it's very clear that Go is endlessly complicated, but it's very constrained, and in the real world there are far fewer constraints and many more potential surprises.

So that's obvious... because it's not always obvious to people, right?

Well, I mean, you know, people thought that reasoning was hard and perceiving was easy, but they quickly learned that actually modeling vision was tremendously complicated, whereas proving theorems was relatively straightforward.
To push back on that a little bit, on the "quickly" part: it took several decades to learn that, and most people still haven't learned it. I mean, AI researchers have, of course, but once you drift a little bit outside the specific AI field, the intuition is still there.

Yeah, that's true. I mean, the intuitions of the public haven't changed radically. And they are, as you said, evaluating the complexity of problems by how difficult it is for them to solve the problems, and that has very little to do with the complexity of solving them in AI.

How do you think, from the perspective of an AI researcher, we deal with the intuitions of the public? I mean, arguably, the combination of hype, investment, and public intuition is what led to the AI winters. I'm sure the same could be applied to tech in general: the intuition of the public leads to media hype, leads to companies investing in the tech, and then the tech doesn't make the companies money, and then there's a crash. Is there a way to educate people, sort of to fight the, let's call it, System 1 thinking?

In general, no. I think that's the simple answer. And it's going to take a long time before the understanding of what those systems can do becomes public knowledge. And then the expectations... there are several aspects that are going to be very complicated. The fact that you have a device that cannot explain itself is a major, major difficulty. And we're already seeing that; I mean, this is really something that is happening. It's happening in the judicial system: you have systems that are clearly better at predicting parole violations than judges, but they can't explain their reasoning.
And so people don't want to trust them.

We seem, in System 1, even to use cues to make judgments about our environment. So, on this explainability point: do you think humans can explain stuff?

No. But, I mean, there is a very interesting aspect of that. Humans think they can explain themselves. So when you say something, and I ask you, "Why do you believe that?", then reasons will occur to you. But actually, my own belief is that in most cases the reasons have very little to do with why you believe what you believe. The reasons are a story that comes to your mind when you need to explain yourself. But people traffic in those explanations. I mean, human interaction depends on those shared fictions and the stories that people tell themselves.

You just made me realize, and we'll talk about stories in a second, that, not to be cynical about it, but perhaps there's a whole movement of people trying to do explainable AI, and really we don't necessarily need to explain... AI doesn't need to explain itself. It just needs to tell a convincing story.
Yeah, absolutely. The story doesn't necessarily need to reflect the truth. It just needs to be
link |
convincing. There's something to that. You can say exactly the same thing in a way that
link |
sounds cynical or doesn't sound cynical. But the objective of having an explanation
link |
is to tell a story that will be acceptable to people. And for it to be acceptable and to
link |
be robustly acceptable, it has to have some elements of truth. But the objective is for
link |
people to accept it. It's quite brilliant, actually. But so on the stories that we tell,
link |
sorry to ask you the question that most people know the answer to, but you talk about two selves
link |
in terms of how life is lived, the experiencing self and the remembering self. Can you describe
link |
the distinction between the two? Well, sure. I mean, there is an aspect of life that occasionally,
link |
you know, most of the time we just live, and we have experiences and they're better and they are
link |
worse and it goes on over time. And mostly we forget everything that happens, or we forget most
link |
of what happens. Then occasionally, when something ends, or at different points, you evaluate
link |
the past and you form a memory. And the memory is schematic. It's not that you can roll a film
link |
of an interaction; you construct, in effect, the elements of a story about an episode.
link |
So there is the experience and there is the story that is created about the experience. And that's
link |
what I call the remembering self. So I have the image of two selves. So there is a self that lives,
link |
and there is a self that evaluates life. Now, the paradox and the deep paradox in that is that
link |
we have one system or one self that does the living, but the other system, the remembering
link |
self is all we get to keep. And basically, decision making and everything that we do
link |
is governed by our memories, not by what actually happened. It's governed by the story that we
link |
told ourselves or by the story that we're keeping. So that's the distinction.
link |
I mean, there's a lot of brilliant ideas about the pursuit of happiness that come out of that.
link |
What are the properties of happiness which emerge from the remembering self?
link |
There are properties of how we construct stories that are really important. So
link |
I studied a few, but a couple are really very striking. And one is that in stories,
link |
time doesn't matter. There's a sequence of events or there are highlights or not.
link |
And how long it took, they lived happily ever after or three years later, something.
link |
Time really doesn't matter. In stories, events matter, but time doesn't. That leads to a very
link |
interesting set of problems because time is all we got to live. Time is the currency of life.
link |
And yet, time is not represented basically in evaluated memories. So that creates a lot of
link |
paradoxes that I've thought about. Yeah, they're fascinating. But if you were to
link |
give advice on how one lives a happy life based on such properties, what's the optimal?
link |
You know, I gave up. I abandoned happiness research because I couldn't solve that problem. I
link |
couldn't see. And in the first place, it's very clear that if you do talk in terms of those two
link |
selves, then what makes the remembering self happy and what makes the experiencing self happy
link |
are different things. And I asked the question of, suppose you're planning a vacation and you're
link |
just told that at the end of the vacation, you'll get an amnesic drug, so you'll remember nothing. And
link |
they'll also destroy all your photos. So there'll be nothing. Would you still go to the same vacation?
link |
And it turns out we go on vacations in large part to construct memories,
link |
not to have experiences, but to construct memories. And it turns out that the vacation
link |
that you would want for yourself if you knew you will not remember it is probably not the same
link |
vacation that you would want for yourself if you will remember. So I have no solution to these
link |
problems, but clearly those are big issues. You've talked about
link |
sort of how many minutes or hours you spend thinking about the vacation. It's an interesting way to think about
link |
it, because that's how you really experience the vacation outside of being in it. But there's
link |
also a modern, I don't know if you think about this or interact with it. There's a modern way to
link |
magnify the remembering self, which is by posting on Instagram, on Twitter, on social networks.
link |
A lot of people live life for the picture that you take that you post somewhere. And now thousands
link |
of people share it, and potentially millions. And then you can relive it even much
link |
more than just those minutes. Do you think about that magnification much? You know, I'm too old
link |
for social networks. I, you know, I've never seen Instagram. So I cannot really speak
link |
intelligently about those things. I'm just too old. But it's interesting to watch the
link |
exact effects you described. I think it will make a very big difference.
link |
And I don't know whether it's clear, but in some ways
link |
the devices that serve us supplant functions. So you don't have to remember phone numbers.
link |
You really don't have to know facts. I mean, in so many conversations,
link |
somebody says, well, let's look it up. So, in a way, it's changed conversations.
link |
It means that it's much less important to know things. It used to be very important
link |
to know things. This is changing. So the requirements that we have for ourselves
link |
and for other people are changing because of all those supports. And I have no idea
link |
what Instagram does. Well, I'll tell you. I wish I knew. I mean, I wish
link |
my remembering self could enjoy this conversation, but I'll get to enjoy it even more
link |
by watching it and then talking to others. About 100,000 people, as scary as this is to say,
link |
will listen to or watch this, right? It changes things. It changes the experience of the world.
link |
And then you seek out experiences which could be shared in that way.
link |
It's the same effect that you described. And I don't think the psychology of that
link |
magnification has been described yet because it's in your world.
link |
You know, the sharing, there was a time when people read books.
link |
And you could assume that your friends had read the same books that you read. So there
link |
was kind of invisible sharing. There was a lot of sharing going on. And there was a lot of assumed
link |
common knowledge. And, you know, that was built in. I mean, it was obvious that you had read the
link |
New York Times. It was obvious that you'd read the reviews. I mean, so a lot was taken for granted
link |
that was shared. And, you know, when there were three television channels,
link |
it was obvious that you'd seen one of them, probably the same one. So sharing has always been
link |
there. It was always there. It was just different. At the risk of inviting mockery from
link |
you, let me say there that I'm also a fan of Sartre and Camus and existentialist philosophers.
link |
And I'm joking, of course, about mockery, but from the perspective of the two selves,
link |
what do you think of the existentialist philosophy of life? So trying to really emphasize the
link |
experiencing self as the proper way to, or the best way to live life?
link |
I don't know enough philosophy to answer that, but it's not, you know, the emphasis on
link |
experience is also the emphasis in Buddhism. So you've just got to experience things
link |
and not to evaluate, not to pass judgment, not to keep score.
link |
So when you look at the grand picture of experience, do you think there's something to that,
link |
that one of the ways to achieve contentment and maybe even happiness is letting go of
link |
any of the procedures of the remembering self? Well, yeah, I mean, I think, you know, if
link |
one could imagine a life in which people don't score themselves, it feels as if that would
link |
be a better life, as if the self scoring and, you know, the how am I doing kind of question
link |
is not a very happy thing to have. But I got out of that field because I couldn't solve
link |
that problem. And that was because my intuition was that the experiencing self, that's
link |
reality. But then it turns out that what people want for themselves is not experiences, they want
link |
memories and they want a good story about their life. And so you cannot have a theory of happiness
link |
that doesn't correspond to what people want for themselves. And when I realized that this
link |
was where things were going, I really sort of left the field of research.
link |
Do you think there's something instructive about this emphasis of reliving memories
link |
in building AI systems? So currently, artificial intelligence systems are more like experiencing
link |
self in that they react to the environment. There's some pattern formation like learning,
link |
so on. But they really don't construct memories, except in reinforcement learning, where every once in
link |
a while you replay experiences over and over. Yeah. But, you know, that in principle would not be
link |
Do you think that's useful? Do you think it's a feature or a bug of human beings that we,
link |
that we look back? Oh, I think that's definitely a feature. That's not a bug. I mean, you have to
link |
look back in order to look forward. So without looking back, you couldn't
link |
really intelligently look forward. You're looking for the echoes of the same kind of experience in
link |
order to predict what the future holds. Yeah. Though Viktor Frankl in his book, Man's Search
link |
for Meaning, I'm not sure if you've read it, describes his experience at the
link |
concentration camps during World War II as a way to describe that finding, identifying a purpose
link |
in life, a positive purpose in life can save one from suffering. First of all, do you connect
link |
with the philosophy that he describes there? Not really. I mean, so I can, I can really see
link |
that somebody who has that feeling of purpose and meaning and so on, that that could sustain you.
link |
I in general don't have that feeling. And I'm pretty sure that if I were in a concentration
link |
camp, I'd give up and die, you know. So he talks, he's a survivor. Yeah. And, you know, he
link |
survived with that. And I'm not sure how essential to survival that sense is, but I do know
link |
when I think about myself that I would have given up at, oh, this isn't going anywhere.
link |
And there is a sort of character that manages to survive in conditions like
link |
that. And then because they survive, they tell stories, and it sounds as if they survived because
link |
of what they were doing. We have no idea. They survived because of the kind of people that they
link |
are, and that kind of people who survive would tell themselves stories of a particular
link |
kind. So I'm not sure. So you don't think seeking purpose is a significant
link |
driver in our being? I mean, it's, it's a very interesting question because when you ask people
link |
whether it's very important to have meaning in their life, they say, oh, yes, that's the most
link |
important thing. But when you ask people, what kind of a day did you have? And, you know,
link |
what were the experiences that you remember? You don't get much meaning. You get social
link |
experiences. And some people say that, for example, you know,
link |
in taking care of children, the fact that they are your children and you're taking care of them
link |
makes a very big difference. I think that's entirely true. But it's more because of a story
link |
that we're telling ourselves, which is a very different story when we're taking care of our
link |
children or when we're taking care of other things. Jumping around a little bit, on doing a
link |
lot of experiments, let me ask you a question. Most of the work I do, for example, is in the
link |
real world, but most of the clean, good science that you can do is in the lab. So given that
link |
distinction, do you think we can understand the fundamentals of human behavior through controlled
link |
experiments in the lab? If we talk about pupil diameter, for example, it's much easier to do
link |
when you can control lighting conditions. Yeah. So when we look at driving, lighting variation
link |
destroys almost completely your ability to use pupil diameter. But in the lab, as I mentioned,
link |
with semi autonomous or autonomous vehicles in driving simulators, we don't capture true,
link |
honest human behavior in that particular domain. So what's your intuition? How much of human
link |
behavior can we study in this controlled environment of the lab? A lot, but you'd have to verify it,
link |
you know, that your conclusions are basically limited to the situation, to the experimental
link |
situation. Then you have to make the big inductive leap to the real world. And that's the
link |
flair. That's where the difference, I think, between the good psychologist and the mediocre ones
link |
lies: your experiment captures something that's important and something
link |
that's real, while others are just running experiments. So what is that like, the birth of an idea, to its
link |
development in your mind to something that leads to an experiment? Is that similar to maybe like
link |
what Einstein or a good physicist do? Is it your intuition? You basically use your intuition to
link |
build it up? Yeah, but I mean, you know, it's very skilled intuition. Right. I mean, I just had
link |
that experience. Actually, I had an idea that turned out to be a very good idea a couple of days ago.
link |
And you have a sense of that building up. So I'm working with a collaborator. And he
link |
essentially was saying, you know, what are you doing? What's going on? And I
link |
really couldn't exactly explain it. But I knew this was going somewhere. But, you know, I've been
link |
around that game for a very long time. And so you develop that anticipation that, yes,
link |
this is worth following up, there's something here. That's part of the skill. Is that something you can
link |
reduce to words in describing a process, in the form of advice to others? No.
link |
Follow your heart, essentially. I mean, you know, it's like trying to explain what it's like
link |
to drive. You've got to break it apart, and then you lose
link |
the experience. You mentioned collaboration. You've written about your collaboration with
link |
Amos Tversky. This is you writing: the 12 or 13 years in which most of our work was joint
link |
were years of interpersonal and intellectual bliss. Everything was interesting. Almost
link |
everything was funny. And there was the recurrent joy of seeing an idea take shape. So many times in
link |
those years, we shared the magical experience of one of us saying something, which the other one
link |
would understand more deeply than the speaker had done. Contrary to the old laws of information
link |
theory, it was common for us to find that more information was received than had been sent.
link |
I have almost never had the experience with anyone else. If you have not had it, you don't know
link |
how marvelous collaboration can be. So let me ask perhaps a silly question.
link |
How does one find and create such a collaboration that may be asking like how does one find love?
link |
But yeah, you have to be lucky. And I think you have to have the character
link |
for that, because I've had many collaborations. I mean, none as exciting as with Amos
link |
Tversky. But I've had, and I'm having, very good ones. So it's a skill. I think I'm good at it.
link |
Not everybody is good at it. And then it's the luck of finding people who are also good at it.
link |
Is there advice in a form for a young scientist
link |
who also seeks to violate this law of information theory?
link |
I really think so much luck is involved. And, you know, those
link |
really serious collaborations, at least in my experience, are a very personal experience.
link |
And I have to like the person I'm working with. Otherwise, you know, I mean, there is that kind
link |
of collaboration which is like a commercial exchange: I'm giving this,
link |
you give me that. But the real ones are interpersonal. They're between people like
link |
each other and who like making each other think and who like the way that the other person
link |
responds to your thoughts. You have to be lucky. Yeah, I mean, but I already noticed that even
link |
just me showing up here, you quickly started digging into a particular problem I'm working on
link |
and already new information started to emerge. Is that a process, just the process of curiosity,
link |
of talking to people about problems and seeing? I'm curious about anything to do with AI and
link |
robotics and, you know, and I knew you were dealing with that. So I was curious.
link |
Just follow your curiosity. Jumping around on the psychology front, the dramatic sounding
link |
terminology of the replication crisis, but really it's just that, at times,
link |
the effects studies find are not fully generalizable. You are being
link |
polite. It's worse than that. But is it? So I'm actually not fully familiar with
link |
how bad it is, right? So what do you think is the source? Where do you think? I think I know
link |
what's going on. Actually, I mean, I have a theory about what's going on. And what's going on
link |
is that there is, first of all, a very important distinction between two types of experiments.
link |
And one type is within subjects. So it's the same person has two experimental conditions.
link |
And the other type is between subjects, where some people are in this condition, other people
link |
in that condition; they're different worlds. And between subject experiments are much harder
link |
to predict, and much harder to anticipate. And the reason, and they're also more expensive,
link |
because you need more people. So between subject experiments are where the problem
link |
is. It's not so much within subject experiments; it's really between. And there is a very good
link |
reason why the intuitions of researchers about between subject experiments are wrong.
link |
And that's because when you are a researcher, you're in a within subject situation. That is,
link |
you are imagining the two conditions and you see the causality and you feel it. But in the
link |
between subjects condition, the subjects don't see the causality. They live in one condition and the other one
link |
is just nowhere. So our intuitions are very weak about between subject experiments. And that,
link |
I think, is something that people haven't realized. And in addition, because of that, we have
link |
no idea about the power of experimental manipulations, because the same
link |
manipulation is much more powerful when you are in the two conditions than when you live in
link |
only one condition. And so the experimenters have very poor intuitions about between subject
link |
experiments. And there is something else, which is very important, I think, which is that
link |
almost all psychological hypotheses are true. That is, in the sense that, you know, directionally,
link |
if you have a hypothesis that A really causes B, then it's not true that A causes the opposite of
link |
B. Maybe A just has very little effect, but hypotheses are mostly true, except mostly they're
link |
very weak. They're much weaker than you think when you are imagining them. So the reason I'm
link |
excited about that is that I recently heard about some friends of mine who essentially
link |
funded 53 studies of behavioral change by 20 different teams of people with a very precise
link |
objective of changing the number of times that people go to the gym. And
link |
the success rate was zero, not one of the 53 studies worked. Now what's interesting about that
link |
is those are the best people in the field. And they have no idea what's going on. So they're not
link |
calibrated. They think that it's going to be powerful because they can imagine it. But actually,
link |
it's just weak, because you're focusing on your manipulation and it feels powerful to you.
link |
There's a thing that I've written about that's called the focusing illusion. That is that when
link |
you think about something, it looks very important, more important than it really is.
link |
More important than it really is. But if you don't see that effect, as in the 53 studies,
link |
doesn't that mean you just report that? So what, I guess, is the solution to that?
link |
Well, I mean, the solution is for people to trust their intuitions less or to try out their intuitions
link |
before. I mean, experiments have to be preregistered. And by the time you run an experiment,
link |
you have to be committed to it. And you have to run the experiment seriously enough.
link |
And in public. And so this is happening. The interesting thing is
link |
what happens before? And how do people prepare themselves, and how do they run pilot
link |
experiments? It's going to change the way psychology is done. And it's already happening.
link |
Do you have hope for that? This might connect to the question of study sample size. Yeah.
link |
Do you have hope for the internet? Oh, this is really happening. MTurk.
link |
Everybody's running experiments on MTurk. And it's very cheap and very effective.
link |
So do you think that changes psychology, essentially? Because before, you couldn't
link |
run 10,000 subjects. Eventually, it will. I mean, you know, I can't put my finger
link |
on how exactly, but that's been true in psychology: whenever an important new method
link |
came in, it changed the field. And MTurk is really a method, because it makes it very
link |
much easier to do some things. Undergrad students will ask me,
link |
you know, how big a neural network should be for a particular problem. So let me ask you an
link |
equivalent question. How big, how many subjects should a study have for it to have a
link |
conclusive result? Well, it depends on the strength of the effect. So if you're studying
link |
visual perception, or the perception of color, many of the classic results in visual
link |
and color perception were done on three or four people. And I think one of them was
link |
colorblind, partly colorblind. But on vision, you know, you don't need a lot
link |
of replications for some type of neurological experiment. When you're studying weaker phenomena,
link |
and especially when you're studying them between subjects, then you need a lot more subjects than
link |
people have been running. And that is, that's one of the things that are happening in psychology.
link |
Now the statistical power of experiments is increasing rapidly.
link |
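As an aside, Kahneman's point here, that weak effects need far larger samples than the traditional 30 or 40 subjects, falls out of a standard power calculation. A minimal sketch using the usual normal-approximation formula for a two-sample comparison (the specific effect sizes below are illustrative, not figures from the conversation):

```python
from math import ceil
from statistics import NormalDist

def subjects_per_group(effect_size: float, alpha: float = 0.05, power: float = 0.80) -> int:
    """Approximate n per group for a between-subjects comparison to detect a
    true standardized effect (Cohen's d) at the given significance and power."""
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)  # two-sided significance criterion
    z_beta = NormalDist().inv_cdf(power)           # sensitivity requirement
    return ceil(2 * ((z_alpha + z_beta) / effect_size) ** 2)

# Strong perceptual effects need only a handful of subjects;
# weak behavioral effects need hundreds per group.
for d in (1.5, 0.5, 0.2):
    print(d, subjects_per_group(d))  # roughly 7, 63, and 393 per group
```

The same formula also hints at why within-subject designs are so much easier to run: pairing each subject with themselves removes between-person variance, which raises the effective standardized effect and cuts the required sample dramatically.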
Does the between subject experiment converge as the number of subjects goes to infinity?
link |
Well, I mean, you know, goes to infinity is exaggerated, but the standard
link |
number of subjects in an experiment in psychology was 30 or 40. And for a weak effect,
link |
that's simply not enough. And you may need a couple of hundred. I mean, it's that that sort of
link |
order of magnitude. What are the major disagreements in theories and effects that you've observed
link |
throughout your career that still stand today? Well, you've worked in several fields. Yeah.
link |
But what still is out there as a major disagreement that pops into your mind? And
link |
I've had one extreme experience of, you know, controversy with somebody who really doesn't
link |
like the work that Amos Tversky and I did. And and he's been after us for 30 years or more,
link |
at least. Do you want to talk about it? Well, I mean, his name is Gerd Gigerenzer. He's a well
link |
known German psychologist. And that's the one controversy, which has been unpleasant, and
link |
no, I don't particularly want to talk about it. But are there open questions, even in
link |
your own mind? Every once in a while, you know, with the semi autonomous vehicles we talked about, in my
link |
own mind, I see what the data says, but I'm also constantly torn. Do you have things where you
link |
or your studies have found something, but you're also intellectually torn about what it means,
link |
and there's been maybe disagreement within your own mind about a particular thing?
link |
I mean, it's, you know, one of the things that are interesting is how difficult it is for people
link |
to change their mind. Essentially, you know, once they're committed, people just don't change their
link |
mind about anything that matters. And that is surprising, but it's true even of scientists.
link |
So the controversy that I described, you know, that's been going on like 30 years,
link |
and it's never going to be resolved. And you build a system and you live within that system,
link |
and other systems of ideas look foreign to you. And there is very little contact and very little
link |
mutual influence. That happens a fair amount. Do you have hopeful advice or a message on that?
link |
Thinking about science, thinking about politics, thinking about things that have impact on this
link |
world. How can we change our mind? I think that, I mean, on things that matter,
link |
you know, which are political or religious, people just don't change their mind,
link |
by and large. And there's very little that you can do about it.
link |
What does happen is that leaders change their minds. So, for example,
link |
the public, the American public doesn't really believe in climate change,
link |
doesn't take it very seriously. But if some religious leaders decided this is a major
link |
threat to humanity, that would have a big effect. So that we, we have the opinions that we have,
link |
not because we know why we have them, but because we trust some people and we don't
link |
trust other people. And so it's much less about evidence than it is about stories.
link |
So one way to change your mind isn't at the individual level; it's that for the leaders of
link |
the communities you look up to, the stories change, and therefore your mind changes with them.
link |
So there's a guy named Alan Turing who came up with the Turing test.
link |
What do you think is a good test of intelligence? Perhaps we're drifting
link |
into a topic that we're maybe philosophizing about, but what do you think is a good test
link |
for intelligence, for an artificial intelligence system?
link |
Well, the standard definition of, you know, of artificial general intelligence is that
link |
it can do anything that people can do and it can do them better.
link |
And what we are seeing is that in many domains, you have domain specific,
link |
you know, devices or programs or software and they beat people easily in a specified way.
link |
What we are very far from is that general ability, general purpose intelligence.
link |
So in machine learning, people are approaching something more general.
link |
I mean, AlphaZero was much more general than AlphaGo,
link |
but it's still extraordinarily narrow and specific in what it can do.
link |
So we're quite far from something that can, in every domain, think like a human.
link |
What aspect, so the Turing test has been criticized as natural language conversation
link |
that it is too simplistic. It's easy to, quote unquote, pass under the constraints specified.
link |
What aspect of conversation would impress you if you heard it? Is it humor?
link |
What would impress the heck out of you if you saw it in conversation?
link |
Yeah, I mean, certainly wit would be impressive and humor would be more impressive than just
link |
factual conversation, which I think is easy. And allusions would be interesting, and
link |
metaphors would be interesting. I mean, but new metaphors, not practiced metaphors.
link |
So there is a lot that would be sort of impressive that it's completely natural in
link |
conversation, but that you really wouldn't expect.
link |
Does the possibility of creating a human level intelligence or super human level
link |
intelligence system excite you, scare you?
link |
Well, I mean, how does it make you feel?
link |
I find the whole thing fascinating. Absolutely fascinating.
link |
I think, and exciting. It's also terrifying, you know, but I'm not going to be around to see it.
link |
And so I'm curious about what is happening now, but also know that predictions about it are silly.
link |
We really have no idea what it will look like 30 years from now. No idea.
link |
Speaking of silly, bordering on the profound, let me ask the question of, in your view,
link |
what is the meaning of it all, the meaning of life?
link |
These descendant of great apes that we are, why, what drives us as a civilization, as a human being,
link |
as a force behind everything that you've observed and studied?
link |
Is there any answer or is it all just a beautiful mess?
link |
There is no answer that I can understand.
link |
And I'm not actively looking for one.
link |
Do you think an answer exists?
link |
No, there is no answer that we can understand.
link |
I'm not qualified to speak about what we cannot understand, but there is something there.
link |
I know that we cannot understand reality.
link |
I mean, there are a lot of things that we can do. I mean, gravitational waves.
link |
I mean, that's a big moment for humanity.
link |
And when you imagine that ape being able to go back to the Big Bang, that's that.
link |
But the why is bigger than us.
link |
The why is hopeless, really.
link |
Danny, thank you so much. It was an honor. Thank you for speaking today.
link |
And now let me leave you with some words of wisdom from Daniel Kahneman.
link |
Intelligence is not only the ability to reason,
link |
it is also the ability to find relevant material in memory
link |
and to deploy attention when needed.
link |
Thank you for listening and hope to see you next time.