
Daniel Kahneman: Thinking Fast and Slow, Deep Learning, and AI | Lex Fridman Podcast #65



link |
00:00:00.000
The following is a conversation with Daniel Kahneman,
link |
00:00:03.200
winner of the Nobel Prize in Economics for his integration of economic science
link |
00:00:08.080
with the psychology of human behavior, judgment, and decision making.
link |
00:00:12.240
He's the author of the popular book, Thinking Fast and Slow, that summarizes in an accessible way
link |
00:00:18.640
his research of several decades, often in collaboration with Amos Tversky,
link |
00:00:23.440
on cognitive biases, prospect theory, and happiness.
link |
00:00:27.040
The central thesis of this work is the dichotomy between two modes of thought,
link |
00:00:31.760
what he calls System 1 is fast, instinctive, and emotional. System 2 is slower, more deliberative,
link |
00:00:38.480
and more logical. The book delineates cognitive biases associated with each of these two types
link |
00:00:44.640
of thinking. His study of the human mind and its peculiar and fascinating limitations
link |
00:00:50.320
are both instructive and inspiring for those of us seeking to engineer intelligence systems.
link |
00:00:57.680
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
link |
00:01:03.040
give it 5 stars on Apple Podcast, follow on Spotify, support it on Patreon,
link |
00:01:07.680
or simply connect with me on Twitter at Lex Fridman, spelled F R I D M A N.
link |
00:01:13.680
I recently started doing ads at the end of the introduction. I'll do one or two minutes
link |
00:01:18.080
after introducing the episode and never any ads in the middle that can break the flow of the
link |
00:01:22.480
conversation. I hope that works for you and doesn't hurt the listening experience.
link |
00:01:28.320
This show is presented by Cash App, the number one finance app in the App Store.
link |
00:01:32.880
I personally use Cash App to send money to friends, but you can also use it to buy,
link |
00:01:37.120
sell, and deposit Bitcoin in just seconds. Cash App also has a new investing feature.
link |
00:01:42.720
You can buy fractions of a stock, say $1 worth, no matter what the stock price is.
link |
00:01:47.120
Brokerage services are provided by Cash App Investing, a subsidiary of Square and member SIPC.
link |
00:01:53.920
I'm excited to be working with Cash App to support one of my favorite organizations called First,
link |
00:01:59.680
best known for their FIRST Robotics and LEGO competitions. They educate and inspire hundreds
link |
00:02:05.120
of thousands of students in over 110 countries and have a perfect rating on Charity Navigator,
link |
00:02:10.880
which means the donated money is used to maximum effectiveness.
link |
00:02:14.320
When you get Cash App from the App Store or Google Play and use code LEXPODCAST,
link |
00:02:19.520
you'll get $10 and Cash App will also donate $10 to First, which again is an organization that
link |
00:02:26.560
I've personally seen inspire girls and boys to dream of engineering a better world.
link |
00:02:32.400
And now here's my conversation with Daniel Kahneman.
link |
00:02:36.880
You tell a story of an SS soldier early in the war, World War II,
link |
00:02:40.880
in Nazi occupied France, in Paris, where you grew up. He picked you up and hugged you
link |
00:02:48.640
and showed you a picture of a boy, maybe not realizing that you were Jewish.
link |
00:02:53.840
Not maybe, certainly not.
link |
00:02:56.400
So I told you I'm from the Soviet Union, which was significantly impacted by the war as well,
link |
00:03:01.360
and I'm Jewish as well. What do you think World War II taught us about human psychology broadly?
link |
00:03:08.720
Well, I think the only big surprise is the extermination policy, the genocide, by the German people.
link |
00:03:20.560
That's when you look back on it, and I think that's a major surprise.
link |
00:03:26.800
It's a surprise because...
link |
00:03:28.240
It's a surprise that they could do it. It's a surprise that enough people willingly participated
link |
00:03:36.320
in that. This is a surprise. Now it's no longer a surprise, but it's changed
link |
00:03:43.840
many people's views, I think, about human beings.
link |
00:03:48.880
Certainly for me, the Eichmann trial teaches you something, because it's very clear that
link |
00:03:57.200
if it could happen in Germany, it could happen anywhere.
link |
00:04:00.880
It's not that the Germans were special. This could happen anywhere.
link |
00:04:05.920
So what do you think that is? Do you think we're all capable of evil?
link |
00:04:11.600
We're all capable of cruelty?
link |
00:04:13.040
I don't think in those terms. I think that what is certainly possible is you can dehumanize people
link |
00:04:22.400
so that you treat them not as people anymore, but as animals, and the same way that you can
link |
00:04:31.920
slaughter animals without feeling much of anything, it can be the same.
link |
00:04:38.480
When you feel that, I think the combination of dehumanizing the other side and having
link |
00:04:47.520
uncontrolled power over other people, I think that doesn't bring out the most generous aspect
link |
00:04:53.760
of human nature. So that Nazi soldier, he was a good man, and he was perfectly capable
link |
00:05:06.480
of killing a lot of people, and I'm sure he did.
link |
00:05:09.520
But what did the Jewish people mean to the Nazis? So, the dismissal of Jewish people as worthy of...?
link |
00:05:21.200
Again, this is surprising that it was so extreme, but it's not one thing in human
link |
00:05:28.560
nature. I don't want to call it evil, but the distinction between the in group and the out
link |
00:05:33.680
group, that is very basic. So that's built in. The loyalty and affection towards in group,
link |
00:05:42.160
and the willingness to dehumanize the out group, that is in human nature.
link |
00:05:50.240
So I think we probably didn't need the Holocaust to teach us that, but the Holocaust
link |
00:05:57.360
is a very sharp lesson of what can happen to people and what people can do.
link |
00:06:06.000
So the effect of the in group and the out group?
link |
00:06:09.600
It's clear that those were people you could shoot. They were not human. There was no
link |
00:06:18.960
empathy or very, very little empathy left. So occasionally there might have been. And very
link |
00:06:28.320
quickly, by the way, the empathy disappeared if there was initially. And the fact that everybody
link |
00:06:35.760
around you was doing it, that the whole group was doing it, everybody shooting Jews,
link |
00:06:44.880
I think, that makes it permissible. Now, how much, you know, whether it could happen
link |
00:06:58.560
in every culture or whether the Germans were just particularly efficient and disciplined,
link |
00:07:04.880
so they could get away with it? It's an interesting question.
link |
00:07:10.640
Are these artifacts of history or is it human nature?
link |
00:07:14.080
I think that's really human nature. You know, you put some people in a position of power relative
link |
00:07:20.880
to other people and then they become less human, they become different.
link |
00:07:28.480
But in general, in war outside of concentration camps in World War II, it seems that war brings out
link |
00:07:36.880
darker sides of human nature, but also the beautiful things about human nature.
link |
00:07:41.040
Well, you know, I mean, what it brings out is the loyalty among soldiers. I mean,
link |
00:07:49.120
it brings out the bonding, male bonding. I think it's a very real thing that happens.
link |
00:07:57.440
And there is a certain thrill to friendship. And there is certainly a certain thrill to friendship
link |
00:08:03.520
under risk and to shared risk. And so people have very profound emotions up to the point
link |
00:08:11.920
where it gets so traumatic that little is left. So let's talk about psychology a little bit.
link |
00:08:22.400
In your book, Thinking Fast and Slow, you describe two modes of thought system one,
link |
00:08:28.160
the fast instinctive and emotional one, system two, the slower, deliberate, logical one,
link |
00:08:34.880
at the risk of asking Darwin to discuss theory of evolution. Can you describe
link |
00:08:42.400
distinguishing characteristics for people who have not read your book of the two systems?
link |
00:08:48.400
Well, I mean, the word system is a bit misleading, but at the same time, it's also
link |
00:08:56.160
very useful. But what I call system one, it's easier to think of it as a family of activities.
link |
00:09:05.600
And primarily the way I describe it is, there are different ways for ideas to come to mind.
link |
00:09:12.960
And some ideas come to mind automatically. And the example, a standard example is two plus two,
link |
00:09:20.880
and then something happens to you. And, and in other cases, you've got to do something,
link |
00:09:27.280
you've got to work in order to produce the idea. And my example, I always give the same pair of
link |
00:09:33.040
numbers as 27 times 14, I think. You have to perform some algorithm in your head, some steps.
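(Editor's aside: worked out explicitly, the deliberate computation is 27 × 14 = 27 × 10 + 27 × 4 = 270 + 108 = 378, a sequence of effortful steps, unlike the instant recall of 2 + 2.)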
link |
00:09:39.200
Yes. And it takes time. It's very different; nothing comes to mind, except
link |
00:09:45.760
something comes to mind, which is the algorithm, I mean, that you've got to perform. And then it's
link |
00:09:52.480
work, and it engages short term memory and engages executive function. And it makes you incapable
link |
00:09:59.680
of doing other things at the same time. So the, the main characteristic of system two,
link |
00:10:06.000
that there is mental effort involved, and there is a limited capacity for mental effort,
link |
00:10:10.960
whereas system one is effortless, essentially. That's the major distinction. So you talk about
link |
00:10:17.840
how, you know, it's really convenient to talk about two systems, but you also mentioned just
link |
00:10:23.280
now, and in general, that there are no two distinct systems in the brain, from a neurobiological,
link |
00:10:30.480
or even from a psychology perspective. But why does it seem, from the experiments you've conducted,
link |
00:10:37.520
that there are these two emergent modes of thinking? So at some point, these kinds of
link |
00:10:49.360
systems came into our brain architecture, maybe mammals share it, or do you not think of it
link |
00:10:58.720
at all in those terms that it's all a mush and these two things just emerge?
link |
00:11:01.840
You know, evolutionary theorizing about this is cheap and easy. So it's the way I think about it
link |
00:11:12.560
is that it's very clear that animals have a perceptual system, and that includes an ability
link |
00:11:20.720
to understand the world, at least to the extent that they can predict, they can't explain anything,
link |
00:11:27.120
but they can anticipate what's going to happen. And that's the key form of understanding the world.
link |
00:11:34.720
And my crude idea is that, what I call system two, well, system two grew out of this. And,
link |
00:11:45.680
you know, there is language, and there is the capacity of manipulating ideas, and the capacity
link |
00:11:51.840
of imagining futures, and of imagining counterfactual things that haven't happened, and to do conditional
link |
00:11:59.840
thinking, and there are really a lot of abilities that without language, and without the very large
link |
00:12:08.000
brain that we have compared to others, would be impossible. Now, system one is more like what
link |
00:12:15.760
the animals are, but system one also can talk. I mean, it has language, it understands language.
link |
00:12:23.680
Indeed, it speaks for us. I mean, you know, I'm not choosing every word as a deliberate process.
link |
00:12:30.080
The words, I have some idea, and then the words come out, and that's automatic and effortless.
link |
00:12:37.200
And many of the experiments you've done is to show that, listen, system one exists and it does
link |
00:12:42.720
speak for us, and we should be careful about the voice it provides. Well, I mean, you know,
link |
00:12:50.160
we have to trust it, because of the speed at which it acts. If we depended on
link |
00:13:00.400
system two for survival, we wouldn't survive very long, because it's very slow. Yeah, crossing
link |
00:13:05.760
the street. Crossing the street. I mean, many things depend on their being automatic. One very
link |
00:13:11.760
important aspect of system one is that it's not instinctive. You use the word instinctive. It
link |
00:13:18.080
contains skills that clearly have been learned so that skilled behavior like driving a car or
link |
00:13:26.320
speaking, in fact, skilled behavior has to be learned. And so it doesn't, you know, you don't
link |
00:13:34.000
come equipped with driving, you have to learn how to drive. And you have to go through a period
link |
00:13:40.640
where driving is not automatic before it becomes automatic. So yeah, you construct. I mean, this
link |
00:13:48.400
is where you talk about heuristics and biases. To make it automatic, you create a pattern,
link |
00:13:56.320
and then system one essentially matches a new experience against the previously seen pattern.
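(Editor's note, a purely illustrative sketch not taken from the conversation: a toy "System 1" in Python that stores previously seen patterns and matches a new experience against the nearest one, deferring to slower processing when no stored pattern fits well. All names and numbers are invented for the example.)

import math

class SystemOne:
    """Toy pattern matcher: recall the response of the nearest stored pattern."""
    def __init__(self, threshold=1.0):
        self.patterns = []          # list of (feature_vector, response) pairs
        self.threshold = threshold  # how poor a match triggers deliberate handling

    def learn(self, features, response):
        self.patterns.append((features, response))

    def react(self, features):
        # Match the new experience against previously seen patterns.
        best_dist, best_response = float("inf"), None
        for stored, response in self.patterns:
            dist = math.dist(stored, features)
            if dist < best_dist:
                best_dist, best_response = dist, response
        if best_dist > self.threshold:
            return None               # no good match: hand off to "System 2"
        return best_response          # fast, automatic answer

# After enough experience, familiar situations get an instant response;
# novel ones fall through to slower, effortful reasoning.
s1 = SystemOne()
s1.learn((0.9, 0.1), "brake")
s1.learn((0.1, 0.9), "accelerate")
print(s1.react((0.85, 0.15)))   # close match, prints "brake"
print(s1.react((5.0, 5.0)))     # poor match, prints None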
link |
00:14:02.160
And when that match is not a good one, that's when all the cognitive mess happens,
link |
00:14:06.720
but most of the time it works. And so most of the time, the anticipation of
link |
00:14:12.560
what's going to happen next is correct. And, and most of the time, the plan about what you have
link |
00:14:18.960
to do is correct. And so most of the time, everything works just fine. What's interesting
link |
00:14:26.160
actually is that in some sense, system one is much better at what it does than system two is at
link |
00:14:33.840
what it does. That is, there is that quality of effortlessly solving enormously complicated
link |
00:14:40.080
problems, which clearly exists. So that the chess player, a very good chess player,
link |
00:14:48.960
all the moves that come to their mind are strong moves. So all the selection of strong moves
link |
00:14:55.760
happens unconsciously and automatically and very, very fast. And, and all that is in system one.
link |
00:15:03.040
So the system two verifies. So along this line of thinking, really what we are are machines that
link |
00:15:11.360
construct pretty effective system one. You could think of it that way. So we're now talking about
link |
00:15:18.960
humans. But if we think about building artificial intelligence systems, robots, do you think all
link |
00:15:26.560
the features and bugs that you have highlighted in human beings are useful for constructing AI
link |
00:15:33.760
systems? So both systems are useful for perhaps instilling in robots? What is happening these days
link |
00:15:42.560
is that actually what is happening in deep learning is more like a system one product
link |
00:15:52.160
than like a system two product. I mean, deep learning matches patterns and anticipates what's
link |
00:15:58.400
going to happen. So it's highly predictive. What deep learning doesn't have, and you know,
link |
00:16:06.880
many people think that this is critical: it doesn't have the ability to reason. So
link |
00:16:12.960
there is no system two there. But I think very importantly, it doesn't have any
link |
00:16:19.040
causality or any way to represent meaning and to represent real interaction. So until that is solved,
link |
00:16:29.760
you know, what can be accomplished is marvelous and very exciting, but limited.
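(Editor's note, an illustrative sketch assuming numpy is available: a purely predictive model can be highly accurate and still capture no causality. Below, a hidden common cause makes X an excellent predictor of Y, yet setting X by intervention does nothing; that is the kind of distinction a pattern matcher alone does not represent.)

import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Hidden common cause Z drives both X and Y; X has no causal effect on Y.
z = rng.normal(size=n)
x = z + 0.1 * rng.normal(size=n)
y = z + 0.1 * rng.normal(size=n)

# A predictive fit finds X highly informative about Y...
slope = np.cov(x, y)[0, 1] / np.var(x)
print("predictive slope of Y on X:", round(slope, 2))        # close to 1

# ...but intervening to set X directly leaves Y unchanged.
x_do = rng.normal(size=n)             # do(X): X set independently of Z
y_do = z + 0.1 * rng.normal(size=n)   # Y still depends only on Z
slope_do = np.cov(x_do, y_do)[0, 1] / np.var(x_do)
print("effect of do(X) on Y:", round(slope_do, 2))           # close to 0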
link |
00:16:35.600
That's actually a really nice way to think of current advances in machine learning, as essentially
link |
00:16:40.560
system one advances. So how far can we get with just system one? If we think deep learning and
link |
00:16:47.120
artificial systems? I mean, you know, it's very clear that DeepMind has already gone way, way
link |
00:16:53.680
beyond what people thought was possible. I think, I think the thing that has impressed me most about
link |
00:17:00.640
the developments in AI is the speed. It's that things, at least in the context of deep learning,
link |
00:17:07.840
and maybe this is about to slow down, but things moved a lot faster than anticipated.
link |
00:17:14.400
The transition from solving chess to solving Go was, I mean, that's bewildering how
link |
00:17:22.720
quickly it went. The move from AlphaGo to AlphaZero is sort of bewildering in the speed at which they
link |
00:17:30.480
accomplished that. Now clearly, there are many problems that you can solve that
link |
00:17:37.920
way, but there are some problems for which you need something else. Something like reasoning.
link |
00:17:45.200
Well, reasoning, and also, you know, one of the real mysteries... The psychologist Gary Marcus, who is
link |
00:17:53.600
also a critic of AI, what he points out, and I think he has a point, is that humans
link |
00:18:05.120
learn quickly. Children don't need a million examples. They need two or three examples. So
link |
00:18:15.520
clearly there is a fundamental difference. And what enables, what enables a machine
link |
00:18:23.280
to learn quickly, what you have to build into the machine because it's clear that you have to
link |
00:18:28.640
build some expectations or something in the machine to make it ready to learn quickly. That's,
link |
00:18:36.080
that at the moment seems to be unsolved. I'm pretty sure that DeepMind is working on it, but
link |
00:18:44.160
yeah, they're, if they have solved it, I haven't heard yet.
link |
00:18:48.160
They're trying to, actually. Them and OpenAI are trying to start to use neural networks
link |
00:18:54.560
to reason, to assemble knowledge. Of course, causality, temporal causality, is out of reach
link |
00:19:03.520
to most everybody. You mentioned the benefits of system one is essentially that it's fast,
link |
00:19:09.520
allows us to function in the world. Fast and skilled, you know. It's skill. And it has a model
link |
00:19:14.800
of the world, you know, in a sense. I mean, the early phase of AI attempted to model
link |
00:19:24.960
reasoning. And they were moderately successful, but, you know, reasoning by itself doesn't get you
link |
00:19:31.440
much. Deep learning has been much more successful in terms of, you know, what they can do. But now
link |
00:19:39.360
that's an interesting question, whether it's approaching its limits. What do you think?
link |
00:19:44.720
I think absolutely. So I just talked to Yann LeCun. He mentioned, you know...
link |
00:19:50.320
I know him. So he thinks that the limits, we're not going to hit the limits with neural networks,
link |
00:19:57.920
that ultimately this kind of system one pattern matching will start to look like system
link |
00:20:03.920
two without significant transformation of the architecture. So I'm more with the majority
link |
00:20:12.160
of the people who think that, yes, neural networks will hit a limit in their capability.
link |
00:20:17.360
He, on the one hand, I have heard him say, essentially, that, you know, what
link |
00:20:23.840
they have accomplished is not a big deal, that they have just touched the surface, that basically, you know,
link |
00:20:29.360
they can't do unsupervised learning in an effective way. But you're telling me that he thinks
link |
00:20:37.120
that the current, within the current architecture, you can do causality and reasoning.
link |
00:20:42.400
So he's very much a pragmatist in a sense of saying that we're very far away, that there's
link |
00:20:47.600
still... I think there's this idea he describes, that we can only see one or two mountain peaks ahead,
link |
00:20:56.160
and there might be either a few more after or thousands more after. Yeah. So that kind of
link |
00:21:01.600
idea. I heard that metaphor. Right. But nevertheless, he doesn't see the final answer as fundamentally different
link |
00:21:12.560
from the one that we currently have. So neural networks being a huge part of that.
link |
00:21:18.720
Yeah. I mean, that's very likely because pattern matching is so much of what's going on. And you
link |
00:21:27.040
can think of neural networks as processing information sequentially. Yeah. I mean, you know,
link |
00:21:31.680
there is an important aspect to it. For example, you get systems that translate and
link |
00:21:40.480
they do a very good job, but they really don't know what they're talking about. And for that,
link |
00:21:47.920
I'm really quite surprised. For that, you would need, you would need an AI that has sensation,
link |
00:21:55.840
an AI that is in touch with the world. Yeah. And self awareness and maybe even something
link |
00:22:02.080
resembling consciousness, those kinds of ideas. Certainly awareness, you know, awareness of what's going
link |
00:22:07.600
on, so that the words have meaning, or are in touch with some perception or some action.
link |
00:22:15.680
Yeah. So that's a big thing for Yann. And what he refers to is grounding in the physical space.
link |
00:22:23.920
So we're talking about the same thing. Yeah. So, but how do you ground it?
link |
00:22:29.360
I mean, without grounding, you get a machine that doesn't know
link |
00:22:34.960
what it's talking about, because it is talking about the world ultimately.
link |
00:22:39.600
The question, the open question is what it means to ground. I mean, we're very human centric in
link |
00:22:46.000
our thinking, but what does it mean for a machine to understand what it means to be in this world?
link |
00:22:52.480
Does it need to have a body? Does it need to have a finiteness like we humans have?
link |
00:22:58.240
All of these elements, it's, it's a very, it's an open question.
link |
00:23:02.240
You know, I'm not sure about having a body, but having a perceptual system, having a body would
link |
00:23:06.880
be very helpful too. I mean, if you think about mimicking a human... But having perception,
link |
00:23:15.360
that seems to be essential so that you can build, you can accumulate knowledge about the world.
link |
00:23:22.640
So if you can imagine a human completely paralyzed, and there's a lot that the human
link |
00:23:30.240
brain could learn, you know, with a paralyzed body. So if we got a machine that could do that,
link |
00:23:37.440
that would be a big deal. And then the flip side of that, something you see in children,
link |
00:23:44.000
and something that in the machine learning world is called active learning. Maybe it is also
link |
00:23:48.880
being able to play with the world. How important, for developing system one or system
link |
00:23:57.520
two, do you think it is to play with the world, to be able to interact with it?
link |
00:24:00.960
Certainly a lot. A lot of what you learn is to anticipate
link |
00:24:07.040
the outcomes of your actions. I mean, you can see that how babies learn it,
link |
00:24:11.280
you know, with their hands, how they, how they learn, you know, to connect,
link |
00:24:17.520
you know, the movements of their hands with something that clearly is something that happens
link |
00:24:21.280
in the brain. And, and, and the ability of the brain to learn new patterns. So, you know,
link |
00:24:28.320
it's the kind of thing that you get with artificial limbs, that you connect it and
link |
00:24:33.680
then people learn to operate the artificial limb, you know, really impressively quickly,
link |
00:24:40.240
at least from, from what I hear. So we have a system that is ready to learn the world through
link |
00:24:47.360
action. At the risk of going into way too mysterious a land, what do you think it takes
link |
00:24:55.360
to build a system like that? Obviously, we're very far from understanding how the brain works, but
link |
00:25:03.680
how difficult is it to build this mind of ours? You know, I mean, I think that Yann LeCun's answer,
link |
00:25:11.520
that we don't know how many mountains there are. I think that's a very good answer.
link |
00:25:16.560
I think that, you know, if you, if you look at what Ray Kurzweil is saying, that strikes me as
link |
00:25:23.360
off the wall. But I think people are much more realistic than that. Actually, Demis
link |
00:25:30.640
Hassabis is, and Yann is, and so the people who are actually doing the work are fairly realistic, I think.
link |
00:25:39.680
To maybe phrase it another way, from a perspective not of building it, but from understanding it.
link |
00:25:44.960
How complicated are human beings in the, in the following sense? You know, I work with
link |
00:25:52.640
autonomous vehicles and pedestrians. So we tried to model pedestrians. How difficult is it to model
link |
00:25:59.760
a human being, their perception of the world, the two systems they operate under sufficiently
link |
00:26:06.880
to be able to predict whether the pedestrian is going to cross the road or not? I'm, you know,
link |
00:26:12.000
I'm fairly optimistic about that, actually, because what we're talking about is a huge
link |
00:26:20.400
amount of information that every vehicle has, and that feeds into one system, into one gigantic
link |
00:26:28.000
system. And so anything that any vehicle learns becomes part of what the whole system knows.
link |
00:26:34.240
And with a system multiplier like that, there is a lot that you can do. So human beings are very
link |
00:26:42.960
complicated, but, you know, the system is going to make mistakes, and humans make mistakes too.
link |
00:26:50.160
I think that they'll be able to, I think they are able to anticipate pedestrians, otherwise
link |
00:26:56.960
a lot would happen. They're able to, you know, they're able to get into a roundabout and into
link |
00:27:05.600
traffic. So they must be able both to expect and to anticipate how people will react when
link |
00:27:14.160
they're sneaking in. And there's a lot of learning that's involved in that.
link |
00:27:18.800
Currently, the pedestrians are treated as things that cannot be hit, and they're not treated as
link |
00:27:29.520
agents with whom you interact in a game theoretic way. So, I mean, it's not, it's a totally open
link |
00:27:38.160
problem. And every time somebody tries to solve it, it seems to be harder than we think. And nobody's
link |
00:27:43.680
really tried to seriously solve the problem of that dance, because I'm not sure if you've thought
link |
00:27:49.120
about the problem of pedestrians, but you're really putting your life in the hands of the driver.
link |
00:27:55.600
You know, there is a part of the dance that would be quite complicated. But for example,
link |
00:28:01.920
when I cross the street and there is a vehicle approaching, I look the driver in the eye. And
link |
00:28:07.520
I think many people do that. And, you know, that's a signal that I'm sending. And I would be sending
link |
00:28:14.880
that signal to an autonomous vehicle, and it had better understand it, because it means I'm crossing.
link |
00:28:21.920
So, and there's another thing you do, actually. So I'll tell you what you do, because we've watched,
link |
00:28:27.840
I've watched hundreds of hours of video on this. When you step into the street, you do that before
link |
00:28:33.360
you step into the street. And when you step into the street, you actually look away.
link |
00:28:36.880
Look away. Yeah. Now, what is that? What that's saying is, I mean, you're trusting that the car,
link |
00:28:45.120
who hasn't slowed down yet, will slow down. Yeah. And you're telling him, yeah, I'm committed.
link |
00:28:51.680
I mean, this is like in a game of chicken. So I'm committed. And if I'm committed,
link |
00:28:56.400
I'm looking away. So therefore, you just have to stop. So the question is whether a machine that
link |
00:29:03.280
observes that needs to understand mortality. Here, I'm not sure that it's got to understand so much
link |
00:29:12.960
as it's got to anticipate. So, and here, but you know, you're surprising me, because
link |
00:29:22.720
here I would think that maybe you can anticipate without understanding, because I think this is
link |
00:29:28.960
clearly what's happening in playing go or in playing chess. There's a lot of anticipation
link |
00:29:34.080
and there is zero understanding. So I thought that you didn't need a model of the human
link |
00:29:44.160
and a model of the human mind to avoid hitting pedestrians. But you are suggesting that
link |
00:29:50.800
you do. Yeah, you do. And then it's a lot harder. So this is... and I have a follow up
link |
00:29:58.480
question to see where your intuition lies. It seems that almost every robot human
link |
00:30:04.080
collaboration system is a lot harder than people realize. So do you think it's possible for robots
link |
00:30:12.320
and humans to collaborate successfully? We talked a little bit about semi autonomous vehicles,
link |
00:30:19.680
like in the Tesla autopilot, but just in tasks in general. If you think we talked about current
link |
00:30:27.920
neural networks being kind of system one, do you think those same systems can borrow humans for
link |
00:30:37.200
system two type tasks and collaborate successfully? Well, I think that in any system
link |
00:30:44.800
where humans and the machine interact, that the human will be superfluous within a fairly
link |
00:30:51.280
short time. That is, if the machine has advanced enough so that it can really help the human,
link |
00:30:58.720
then it may not need the human for a long time. Now, it would be very interesting if
link |
00:31:05.280
there are problems that for some reason the machine cannot solve, but that people
link |
00:31:10.800
could solve, then you would have to build into the machine an ability to recognize
link |
00:31:17.520
that it is in that kind of problematic situation and to call the human. That cannot be easy
link |
00:31:26.720
without understanding. That is, it must be very difficult to program a recognition that you are
link |
00:31:34.960
in a problematic situation without understanding the problem. That is very true. In order to
link |
00:31:42.960
understand the full scope of situations that are problematic, you almost need to be smart enough
link |
00:31:50.400
to solve all those problems. It is not clear to me how much the machine will need the human.
link |
00:32:00.000
I think the example of chess is very instructive. There was a time at which
link |
00:32:03.840
Kasparov was saying that human machine combinations will beat everybody. Even Stockfish doesn't
link |
00:32:11.040
need people, and AlphaZero certainly doesn't need people. The question is, just like you said,
link |
00:32:18.000
how many problems are like chess and how many problems are not like chess?
link |
00:32:24.800
Every problem probably in the end is like chess. The question is, how long is that transition
link |
00:32:29.360
period? I mean, that's a question I would ask you. In terms of an autonomous vehicle, just
link |
00:32:37.200
driving is probably a lot more complicated than Go to solve. Yes. And that's surprising
link |
00:32:42.960
because it's open. No, I mean, that's not surprising to me because there is a hierarchical
link |
00:32:54.400
aspect to this, which is recognizing a situation and then within the situation bringing up the
link |
00:33:02.640
relevant knowledge. And for that hierarchical type of system to work, you need a more complicated
link |
00:33:13.840
system than we currently have. A lot of people, because as human beings, this is probably
link |
00:33:19.920
one of the cognitive biases, think of driving as pretty simple, because they think of their own
link |
00:33:27.200
experience. This is actually a big problem for AI researchers or people thinking about AI because
link |
00:33:34.160
they evaluate how hard a particular problem is based on very limited knowledge, basically on
link |
00:33:42.800
how hard it is for them to do the task. And then they take that for granted. Maybe you can speak to
link |
00:33:48.880
that because most people tell me driving is trivial and humans, in fact, are terrible at
link |
00:33:56.320
driving. And I look at humans, and humans are actually incredible at driving,
link |
00:34:02.000
and driving is really terribly difficult. So is that just another element of the effects that
link |
00:34:08.640
you've described in your work on the psychology side? No, I mean, I haven't really, you know,
link |
00:34:17.360
I would say that my research has contributed nothing to understanding the ecology and to
link |
00:34:24.160
understanding the structure of situations and the complexity of problems. So, it's very
link |
00:34:33.360
clear that Go, it's endlessly complicated, but it's very constrained. And in the real
link |
00:34:43.600
world, there are far fewer constraints and many more potential surprises.
link |
00:34:50.720
So that's obvious to you, but it's not always obvious to people, right? So when you think about,
link |
00:34:55.520
well, I mean, you know, people thought that reasoning was hard and perceiving was easy. But
link |
00:35:03.200
you know, they quickly learned that actually modeling vision was tremendously complicated
link |
00:35:09.840
and that modeling reasoning, even proving theorems, was relatively straightforward.
link |
00:35:15.760
To push back on that a little bit on the quickly part: it took several decades to learn
link |
00:35:22.560
that and most people still haven't learned that. I mean, our intuition, of course, AI researchers
link |
00:35:28.400
have, but if you drift a little bit outside the specific AI field, the intuition is still perceptible.
link |
00:35:35.280
Yeah, that's true. I mean, intuitions, the intuitions of the public haven't changed
link |
00:35:41.760
radically. And they are, as you said, evaluating the complexity of problems
link |
00:35:48.320
by how difficult it is for them to solve the problems. And that has very little to do with
link |
00:35:55.520
the complexity of solving them in AI. How do you think, from the perspective of an AI researcher,
link |
00:36:01.600
do we deal with the intuitions of the public? So in trying to think, I mean, arguably,
link |
00:36:11.520
the combination of hype, investment, and the public's intuition is what led to the AI winters.
link |
00:36:18.560
I'm sure the same could be applied to tech broadly: the intuition of the public leads to media
link |
00:36:26.880
hype, leads to companies investing in the tech, and then the tech doesn't make the companies money,
link |
00:36:34.560
and then there's a crash. Is there a way to educate people sort of to fight the,
link |
00:36:40.720
let's call it system one thinking? In general, no. I think that's the simple answer.
link |
00:36:48.640
And it's going to take a long time before the understanding of what those systems can do
link |
00:37:00.000
becomes public knowledge. And then the expectations, there are several aspects
link |
00:37:12.240
that are going to be very complicated. The fact that you have a device that cannot explain itself
link |
00:37:24.880
is a major, major difficulty. And we're already seeing that. I mean, this is really something
link |
00:37:33.120
that is happening. It's happening in the judicial system. You have systems that are clearly
link |
00:37:41.920
better at predicting parole violations than judges, but they can't explain the reasoning.
link |
00:37:50.640
And so people don't want to trust them. We seem to in system one even use cues
link |
00:38:01.680
to make judgments about our environment. So this explainability point,
link |
00:38:06.800
do you think humans can explain stuff? No, but I mean, there is a very interesting
link |
00:38:16.480
aspect of that. Humans think they can explain themselves. So when you say something, and I
link |
00:38:25.040
ask you why do you believe that, then reasons will occur to you. But actually, my own belief
link |
00:38:32.640
is that in most cases, the reasons have very little to do with why you believe what you believe.
link |
00:38:38.720
So that the reasons are a story that comes to your mind when you need to explain yourself.
link |
00:38:47.680
But people traffic in those explanations. I mean, the human interaction depends on those shared
link |
00:38:54.080
fictions and the stories that people tell themselves. You just made me actually realize,
link |
00:39:00.240
and we'll talk about stories in a second, that not to be cynical about it, but perhaps
link |
00:39:08.160
there's a whole movement of people trying to do explainable AI. And really, we don't necessarily
link |
00:39:15.760
need to explain. AI doesn't need to explain itself. It just needs to tell a convincing story.
link |
00:39:21.440
Yeah, absolutely. The story doesn't necessarily need to reflect the truth. It just needs to be
link |
00:39:30.080
convincing. There's something to that. You can say exactly the same thing in a way that
link |
00:39:36.080
sounds cynical or doesn't sound cynical. But the objective of having an explanation
link |
00:39:44.320
is to tell a story that will be acceptable to people. And for it to be acceptable and to
link |
00:39:53.120
be robustly acceptable, it has to have some elements of truth. But the objective is for
link |
00:40:02.640
people to accept it. It's quite brilliant, actually. But so on the stories that we tell,
link |
00:40:10.400
sorry to ask you the question that most people know the answer to, but you talk about two selves
link |
00:40:18.400
in terms of how life is lived: the experiencing self and the remembering self. Can you describe
link |
00:40:25.040
the distinction between the two? Well, sure. I mean, there is an aspect of life that occasionally,
link |
00:40:32.400
you know, most of the time we just live, and we have experiences and they're better and they are
link |
00:40:37.360
worse and it goes on over time. And mostly we forget everything that happens, or we forget most
link |
00:40:43.360
of what happens. Then occasionally, you, when something ends or at different points, you evaluate
link |
00:40:55.040
the past and you form a memory. And the memory is schematic. It's not that you can roll a film
link |
00:41:01.360
of an interaction; you construct, in effect, the elements of a story about an episode.
link |
00:41:12.000
So there is the experience and there is the story that is created about the experience. And that's
link |
00:41:18.240
what I call the remembering self. So I had the image of two selves. So there is a self that lives,
link |
00:41:25.040
and there is a self that evaluates life. Now, the paradox and the deep paradox in that is that
link |
00:41:34.640
we have one system or one self that does the living, but the other system, the remembering
link |
00:41:42.000
self is all we get to keep. And basically, decision making and everything that we do
link |
00:41:50.080
is governed by our memories, not by what actually happened. It's governed by the story that we
link |
00:41:57.200
told ourselves or by the story that we're keeping. So that's the distinction.
link |
00:42:03.840
I mean, there's a lot of brilliant ideas about the pursuit of happiness that come out of that.
link |
00:42:08.960
What are the properties of happiness which emerge from the remembering self?
link |
00:42:13.520
There are properties of how we construct stories that are really important. So
link |
00:42:20.480
that I studied a few, but a couple are really very striking. And one is that in stories,
link |
00:42:31.280
time doesn't matter. There's a sequence of events or there are highlights or not.
link |
00:42:37.360
And how long it took, they lived happily ever after or three years later, something.
link |
00:42:47.040
Time really doesn't matter. In stories, events matter, but time doesn't. That leads to a very
link |
00:42:58.480
interesting set of problems because time is all we got to live. Time is the currency of life.
link |
00:43:07.920
And yet, time is not represented basically in evaluated memories. So that creates a lot of
link |
00:43:16.400
paradoxes that I've thought about. Yeah, they're fascinating. But if you were to
link |
00:43:20.960
give advice on how one lives a happy life based on such properties, what's the optimal?
link |
00:43:32.880
You know, I gave up. I abandoned happiness research because I couldn't solve that problem. I
link |
00:43:38.960
couldn't see. And in the first place, it's very clear that if you do talk in terms of those two
link |
00:43:47.280
selves, then what makes the remembering self happy and what makes the experiencing self happy
link |
00:43:53.280
are different things. And I asked the question of, suppose you're planning a vacation and you're
link |
00:44:01.600
just told that at the end of the vacation, you'll get an amnesic drug, so you'll remember nothing. And
link |
00:44:07.920
they'll also destroy all your photos. So there'll be nothing. Would you still go to the same vacation?
link |
00:44:14.800
And it's, it turns out we go to vacations in large part to construct memories,
link |
00:44:24.800
not to have experiences, but to construct memories. And it turns out that the vacation
link |
00:44:30.560
that you would want for yourself if you knew you will not remember is probably not the same
link |
00:44:36.400
vacation that you will want for yourself if you will remember. So I have no solution to these
link |
00:44:44.080
problems, but clearly those are big issues. And you've talked about issues. You've talked about
link |
00:44:49.840
sort of how many minutes or hours you spend thinking about the vacation. It's an interesting way to think about
link |
00:44:55.280
it, because that's how you really experience the vacation, outside of being in it. But there's
link |
00:45:02.080
also a modern, I don't know if you think about this or interact with it. There's a modern way to
link |
00:45:08.320
magnify the remembering self, which is by posting on Instagram, on Twitter, on social networks.
link |
00:45:17.120
A lot of people live life for the picture that you take that you post somewhere. And now thousands
link |
00:45:24.240
of people share it, and potentially millions. And then you can relive it even much
link |
00:45:28.800
more than just those minutes. Do you think about that magnification much? You know, I'm too old
link |
00:45:35.200
for social networks. I, you know, I've never seen Instagram. So I cannot really speak
link |
00:45:43.200
intelligently about those things. I'm just too old. But it's interesting to watch the
link |
00:45:48.240
exact effects you described. I think it will make a very big difference. I mean, and it will make,
link |
00:45:53.200
it will also make a difference. And, I don't know whether... It's clear that in some ways
link |
00:46:01.520
the devices that serve us supplant function. So you don't have to remember phone numbers.
link |
00:46:12.080
You really don't have to know facts. I mean, the number of conversations
link |
00:46:17.760
where somebody says, well, let's look it up. So it's, in a way, it's made conversations...
link |
00:46:26.160
Well, it means that it's much less important to know things. Now, it used to be very important
link |
00:46:33.680
to know things. This is changing. So the requirements that we have for ourselves
link |
00:46:44.400
and for other people are changing because of all those supports and because, and I have no idea
link |
00:46:51.680
what Instagram does. Well, I'll tell you. I wish I knew. I mean, I wish I could just have,
link |
00:46:58.880
my remembering self could enjoy this conversation, but I'll get to enjoy it even more
link |
00:47:05.040
by watching it and then talking to others. It'll be about 100,000 people, as scary as that is to say,
link |
00:47:11.680
who will listen to or watch this, right? It changes things. It changes the experience of the world.
link |
00:47:18.000
And then you seek out experiences which could be shared in that way. And, I haven't seen it, but
link |
00:47:24.480
it's the same effects that you described. And I don't think the psychology of that
link |
00:47:29.680
magnification has been described yet because it's in your world.
link |
00:47:33.040
You know, the sharing, there was a, there was a time when people read books.
link |
00:47:39.520
And, and, and you could assume that your friends had read the same books that you read. So there
link |
00:47:51.120
was kind of invisible sharing. There was a lot of sharing going on. And there was a lot of assumed
link |
00:47:58.560
common knowledge. And, you know, that was built in. I mean, it was obvious that you had read the
link |
00:48:04.480
New York Times. It was obvious that you'd read the reviews. I mean, so a lot was taken for granted
link |
00:48:11.920
that was shared. And, you know, when there were, when there were three television channels,
link |
00:48:19.200
it was obvious that you'd seen one of them, probably the same one. So sharing has always been
link |
00:48:28.400
there. It was always there. It was just different. At the risk of inviting mockery from
link |
00:48:35.520
you, let me say that I'm also a fan of Sartre and Camus and existentialist philosophers.
link |
00:48:43.920
And I'm joking, of course, about mockery, but from the perspective of the two selves,
link |
00:48:50.560
what do you think of the existentialist philosophy of life? So trying to really emphasize the
link |
00:48:57.600
experiencing self as the proper way to, or the best way to live life?
link |
00:49:05.840
I don't know enough philosophy to answer that, but it's not, you know, the emphasis on
link |
00:49:13.920
experience is also the emphasis in Buddhism. So, you've just got to experience things
link |
00:49:23.360
and not to evaluate, and not to pass judgment, and not to keep score.
link |
00:49:32.160
So when you look at the grand picture of experience, do you think there's something to that,
link |
00:49:38.560
that one of the ways to achieve contentment and maybe even happiness is letting go of any of
link |
00:49:46.960
the procedures of the remembering self? Well, yeah, I mean, I think, you know, if
link |
00:49:54.800
one could imagine a life in which people don't score themselves, it, it feels as if that would
link |
00:50:02.720
be a better life, as if the self scoring and, you know, the how am I doing kind of question
link |
00:50:09.920
is not, is not a very happy thing to have. But I got out of that field because I couldn't solve
link |
00:50:22.240
that problem. And, and that was because my intuition was that the experiencing self, that's
link |
00:50:29.520
reality. But then it turns out that what people want for themselves is not experiences, they want
link |
00:50:36.480
memories and they want a good story about their life. And so you cannot have a theory of happiness
link |
00:50:42.640
that doesn't correspond to what people want for themselves. And when I, when I realized that this,
link |
00:50:49.280
this was where things were going, I really sort of left the field of research.
link |
00:50:55.120
Do you think there's something instructive about this emphasis of reliving memories
link |
00:51:00.480
in building AI systems? So currently, artificial intelligence systems are more like experiencing
link |
00:51:09.280
self in that they react to the environment. There's some pattern formation like learning,
link |
00:51:16.160
and so on. But they really don't construct memories, except in reinforcement learning, every once in
link |
00:51:23.280
a while that you replay over and over. Yeah. But you know, that would in principle would not be
link |
00:51:28.960
Do you think that's useful? Do you think it's a feature or a bug of human beings that we,
link |
00:51:35.680
that we look back? Oh, I think that's definitely a feature. That's not a bug. I mean, you have to
link |
00:51:42.720
look back in order to look forward. So without, without looking back, you couldn't, you couldn't
link |
00:51:50.320
really intelligently look forward. You're looking for the echoes of the same kind of experience in
link |
00:51:55.680
order to predict what the future holds. Yeah. Though Victor Franco in his book, Man's Search
link |
00:52:02.800
for Meaning, I'm not sure if you've read, describes his experience at the concentration,
link |
00:52:07.920
concentration camps during World War II as a way to describe that finding, identifying a purpose
link |
00:52:15.920
in life, a positive purpose in life can save one from suffering. First of all, do you connect
link |
00:52:22.400
with the philosophy that he describes there? Not really. I mean, so I can, I can really see
link |
00:52:34.080
that somebody who has that feeling of purpose and meaning and so on, that that could sustain you.
link |
00:52:42.720
I in general don't have that feeling. And I'm pretty sure that if I were in a concentration
link |
00:52:49.600
camp, I'd give up and die. You know, so... He's a survivor. Yeah. And, you know, he
link |
00:52:57.520
survived with that. And I'm not sure how essential to survival that sense is, but I do know
link |
00:53:06.880
when I think about myself that I would have given up at, oh, this isn't going anywhere.
link |
00:53:13.200
And there is a sort of character that manages to survive in conditions like
link |
00:53:21.200
that. And then because they survive, they tell stories and it sounds as if they survive because
link |
00:53:27.520
of what they were doing. We have no idea. They survived because of the kind of people that they
link |
00:53:32.800
are, and the kind of people who survive would tell themselves stories
link |
00:53:37.920
of a particular kind. So I'm not. So you don't think seeking purpose is a significant
link |
00:53:44.960
driver in our being? I mean, it's, it's a very interesting question because when you ask people
link |
00:53:52.000
whether it's very important to have meaning in their life, they say, oh, yes, that's the most
link |
00:53:55.760
important thing. But when you ask people, what kind of a day did you have? And, and, you know,
link |
00:54:03.600
what were the experiences that you remember? You don't get much meaning. You get social
link |
00:54:09.680
experiences. And some people say that, for example, you know,
link |
00:54:21.920
in taking care of children, the fact that they are your children and you're taking care of them
link |
00:54:26.240
makes a very big difference. I think that's entirely true. But it's more because of a story
link |
00:54:37.520
that we're telling ourselves, which is a very different story when we're taking care of our
link |
00:54:41.840
children or when we're taking care of other things. Jumping around a little bit: in doing a
link |
00:54:47.280
lot of experiments, let me ask you a question. Most of the work I do, for example, is in
link |
00:54:52.800
the real world, but most of the clean, good science that you can do is in the lab. So given that
link |
00:54:59.840
distinction, do you think we can understand the fundamentals of human behavior through controlled
link |
00:55:08.800
experiments in the lab? If we talk about pupil diameter, for example, it's much easier to do
link |
00:55:17.120
when you can control lighting conditions. Yeah. So when we look at driving, lighting variation
link |
00:55:25.600
destroys almost completely your ability to use pupil diameter. But in the lab, as I mentioned, for
link |
00:55:33.920
semi autonomous or autonomous vehicles, in driving simulators, we don't capture true,
link |
00:55:41.360
honest human behavior in that particular domain. So what's your intuition? How much of human
link |
00:55:48.960
behavior can we study in this controlled environment of the lab? A lot, but you'd have to verify it,
link |
00:55:56.560
you know, that your conclusions are basically limited to the situation, to the experimental
link |
00:56:04.000
situation. Then you have to make the big inductive leap to the real world. And that's the
link |
00:56:13.200
flair. That's where the difference is, I think, between the good psychologists and others that are
link |
00:56:20.880
mediocre, in the sense that your experiment captures something that's important and something
link |
00:56:29.520
that's real, while others are just running experiments. So what is it like, the birth of an idea, its
link |
00:56:36.800
development in your mind, to something that leads to an experiment? Is that similar to maybe
link |
00:56:43.120
what Einstein or a good physicist does? Is it your intuition? You basically use your intuition to
link |
00:56:48.160
build it up? Yeah, but I mean, you know, it's very skilled intuition. Right. I mean, I just had
link |
00:56:54.080
that experience, actually. I had an idea that turns out to be a very good idea, a couple of days ago.
link |
00:57:01.280
And you have a sense of that building up. So I'm working with a collaborator, and he
link |
00:57:09.200
essentially was saying, you know, what are you doing? What's going on? And I was
link |
00:57:15.520
really, I couldn't exactly explain it. But I knew this is going somewhere. But, you know, I've been
link |
00:57:21.760
around that game for a very long time. And so you develop that anticipation that, yes,
link |
00:57:30.160
this is worth following up, there's something here. That's part of the skill. Is that something you can
link |
00:57:35.760
reduce to words in describing a process, in the form of advice to others? No,
link |
00:57:43.920
follow your heart, essentially. I mean, you know, it's it's like trying to explain what it's like
link |
00:57:49.680
to drive. It's not... You've got to break it apart, and then you lose, and then you lose
link |
00:57:55.680
the experience. You mentioned collaboration. You've written about your collaboration with
link |
00:58:02.720
Amos Tversky. This is you writing: the 12 or 13 years in which most of our work was joint
link |
00:58:10.160
were years of interpersonal and intellectual bliss. Everything was interesting. Almost
link |
00:58:16.320
everything was funny. And there was a current joy of seeing an idea take shape. So many times in
link |
00:58:22.080
those years, we shared the magical experience of one of us saying something, which the other one
link |
00:58:27.440
would understand more deeply than the speaker had done. Contrary to the old laws of information
link |
00:58:33.120
theory, it was common for us to find that more information was received than had been sent.
link |
00:58:39.920
I have almost never had the experience with anyone else. If you have not had it, you don't know
link |
00:58:45.360
how marvelous collaboration can be. So let me ask a perhaps a silly question.
link |
00:58:54.240
How does one find and create such a collaboration? That may be like asking how one finds love.
link |
00:59:00.800
But yeah, you have to be lucky. And I think you have to have the character
link |
00:59:09.520
for that, because I've had many collaborations. I mean, none as exciting as with Amos
link |
00:59:15.440
Tversky. But I've had, and I'm having, very good ones. So it's a skill. I think I'm good at it.
link |
00:59:25.760
Not everybody is good at it. And then it's the luck of finding people who are also good at it.
link |
00:59:31.920
Is there advice in a form for a young scientist
link |
00:59:34.800
who also seeks to violate this law of information theory?
link |
00:59:48.400
I really think so much luck is involved. And you know, those
link |
00:59:56.560
really serious collaborations, at least in my experience, are a very personal experience.
link |
01:00:03.600
And I have to like the person I'm working with. Otherwise, you know, I mean, there is that kind
link |
01:00:09.440
of collaboration, which is like an exchange or commercial exchange of I'm giving this,
link |
01:00:17.120
you give me that. But the real ones are interpersonal. They're between people who like
link |
01:00:24.160
each other, and who like making each other think, and who like the way that the other person
link |
01:00:30.320
responds to your thoughts. You have to be lucky. Yeah, I mean, but I already noticed that even
link |
01:00:39.440
just me showing up here, you've quickly started digging into a particular problem I'm working on,
link |
01:00:46.000
and already new information started to emerge. Is that a process, just the process of curiosity,
link |
01:00:53.040
of talking to people about problems and seeing? I'm curious about anything to do with AI and
link |
01:00:58.800
robotics and, you know, so I knew you were dealing with that. So I was curious.
link |
01:01:04.960
Just follow your curiosity. Jumping around on the psychology front: the dramatic-sounding
link |
01:01:12.960
terminology of the replication crisis, but really it's just that, at times,
link |
01:01:20.960
the effect in studies is not fully generalizable. You are being
link |
01:01:29.600
polite. It's worse than that. So, I'm actually not fully familiar with
link |
01:01:36.640
how bad it is. So what do you think is the source? Where do you think it comes from? I think I know
link |
01:01:42.160
what's going on. Actually, I mean, I have a theory about what's going on. And what's going on
link |
01:01:49.040
is that there is, first of all, a very important distinction between two types of experiments.
link |
01:01:57.600
And one type is within subjects. So the same person has two experimental conditions.
link |
01:02:05.120
And the other type is between subjects, where some people are in this condition, other people in
link |
01:02:10.480
that condition, they're different worlds. And between subject experiments are much harder
link |
01:02:17.360
to predict and much harder to anticipate. They're also more expensive,
link |
01:02:26.880
because you need more people. And so between-subject experiments are where the problem
link |
01:02:34.080
is. It's not so much within-subject experiments, it's really between. And there is a very good
link |
01:02:41.280
reason why the intuitions of researchers about between subject experiments are wrong.
link |
01:02:50.320
And that's because when you are a researcher, you're in a within subject situation. That is,
link |
01:02:57.440
you are imagining the two conditions and you see the causality and you feel it. But in the
link |
01:03:04.720
between-subjects condition, they don't see it; they live in one condition, and the other one
link |
01:03:11.680
is just nowhere. So our intuitions are very weak about between subject experiments. And that,
link |
01:03:21.040
I think, is something that people haven't realized. And in addition, because of that, we have
link |
01:03:30.240
no idea about the power of experimental manipulations, because the same
link |
01:03:37.200
manipulation is much more powerful when you are in the two conditions than when you live in
link |
01:03:45.120
only one condition. And so the experimenters have very poor intuitions about between-subject experiments.
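As a rough illustration of this point (a sketch, not something discussed in the conversation): assuming a normally distributed outcome where stable individual differences are much larger than measurement noise, the same weak effect that is easy to detect within subjects can be nearly invisible between subjects with the same number of people.

```python
# Hypothetical simulation (illustrative only): the same weak effect is far
# easier to detect within subjects than between subjects when stable
# individual differences dominate measurement noise.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
n = 40            # subjects per condition, a traditional sample size
effect = 0.2      # a "weak" effect, in standard-deviation units
person_sd = 1.0   # stable differences between people
noise_sd = 0.1    # trial-to-trial measurement noise

def one_experiment():
    people = rng.normal(0, person_sd, n)                  # who the subjects are
    control = people + rng.normal(0, noise_sd, n)
    treated = people + effect + rng.normal(0, noise_sd, n)
    p_within = stats.ttest_rel(treated, control).pvalue   # same people in both conditions

    others = rng.normal(0, person_sd, n)                  # a different group of people
    treated_others = others + effect + rng.normal(0, noise_sd, n)
    p_between = stats.ttest_ind(treated_others, control).pvalue
    return p_within < 0.05, p_between < 0.05

hits = np.array([one_experiment() for _ in range(2000)])
print("within-subject detection rate: ", hits[:, 0].mean())   # close to 1.0 under these assumptions
print("between-subject detection rate:", hits[:, 1].mean())   # far lower, roughly 0.1-0.2
```

In the within-subject comparison each person serves as their own control, so the individual differences cancel out; in the between-subject comparison they remain as noise, which is one way to see why within-subject intuitions overestimate between-subject effects.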
link |
01:03:51.760
And there is something else, which is very important, I think, which is that
link |
01:04:00.240
almost all psychological hypotheses are true. That is, in the sense that, you know, directionally,
link |
01:04:09.200
if you have a hypothesis that A really causes B, it's not true that A causes the opposite of
link |
01:04:17.840
B; maybe A just has very little effect. But hypotheses are true mostly, except mostly they're
link |
01:04:26.160
very weak. They're much weaker than you think when you are imagining them. So the reason I'm
link |
01:04:35.760
excited about that is that I recently heard about some friends of mine who essentially
link |
01:04:47.600
funded 53 studies of behavioral change by 20 different teams of people with a very precise
link |
01:04:56.480
objective of changing the number of times that people go to the gym. And
link |
01:05:06.080
the success rate was zero; not one of the 53 studies worked. Now, what's interesting about that
link |
01:05:15.360
is those are the best people in the field. And they have no idea what's going on. So they're not
link |
01:05:22.080
calibrated. They think that it's going to be powerful because they can imagine it. But actually,
link |
01:05:28.480
it's just weak, because you're focusing on your manipulation and it feels powerful to you.
link |
01:05:36.880
There's a thing that I've written about that's called the focusing illusion. That is that when
link |
01:05:42.160
you think about something, it looks very important, more important than it really is.
link |
01:05:48.080
More important than it really is. But if you don't see that effect, in the 53 studies,
link |
01:05:53.360
doesn't that mean you just report that? So what's, I guess, the solution to that?
link |
01:05:58.960
Well, I mean, the solution is for people to trust their intuitions less or to try out their intuitions
link |
01:06:09.040
before. I mean, experiments have to be preregistered. And by the time you run an experiment,
link |
01:06:16.400
you have to be committed to it. And you have to run the experiment seriously enough.
link |
01:06:22.160
And in public. And so this is happening. The interesting thing is
link |
01:06:30.160
what happens before? And how do people prepare themselves, and how do they run pilot
link |
01:06:36.320
experiments? It's going to change the way psychology is done. And it's already happening.
link |
01:06:41.840
Do you have a hope for, and this might connect to that, the study sample size? Yeah.
link |
01:06:49.600
Do you have a hope for the internet? Oh, this is really happening. MTurk,
link |
01:06:56.880
everybody's running experiments on MTurk. And it's very cheap and very effective.
link |
01:07:03.280
So do you think that changes psychology, essentially? Because now you can
link |
01:07:08.080
run 10,000 subjects. Eventually it will. I mean, you know, I can't put my finger
link |
01:07:14.720
on how exactly, but that's been true in psychology: whenever an important new method
link |
01:07:23.360
came in, it changes the field. So MTurk is really a method, because it makes it very
link |
01:07:31.760
much easier to do some things. Undergrad students will ask me,
link |
01:07:38.800
you know, how big a neural network should be for a particular problem. So let me ask you an
link |
01:07:43.360
equivalent question: how many subjects should a study have for it to have a
link |
01:07:52.240
conclusive result? Well, it depends on the strength of the effect. So if you're studying
link |
01:07:58.400
visual perception, or the perception of color... many of the classic results in visual
link |
01:08:06.960
and color perception were done on three or four people. And I think one of them was
link |
01:08:11.200
colorblind, or at least partly colorblind. But in vision, you know, you don't need a lot
link |
01:08:23.760
of replications for some types of neurological experiments. When you're studying weaker phenomena,
link |
01:08:35.520
and especially when you're studying them between subjects, then you need a lot more subjects than
link |
01:08:41.200
people have been running. And that's one of the things that is happening in psychology
link |
01:08:47.760
now: the statistical power of experiments is increasing rapidly.
link |
01:08:53.920
Does the between-subject approach work as the number of subjects goes to infinity?
link |
01:08:59.360
Well, I mean, you know, goes to infinity is exaggerated. But the standard
link |
01:09:06.000
number of subjects for an experiment in psychology was 30 or 40. And for a weak effect,
link |
01:09:13.920
that's simply not enough; you may need a couple of hundred. I mean, it's that sort of order of magnitude.
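As a rough sanity check on those numbers (a sketch, not from the conversation; the exact figures depend on the effect size you assume), a standard power calculation for a between-subject comparison with a weak effect, say Cohen's d of 0.2, shows why 30 or 40 subjects per group is not enough and why the required sample size lands in the hundreds.

```python
# Hypothetical power calculation (illustrative only), using statsmodels.
from statsmodels.stats.power import TTestIndPower

power_calc = TTestIndPower()

# Power of the traditional ~40 subjects per group for a weak effect (d = 0.2):
print(power_calc.solve_power(effect_size=0.2, nobs1=40, alpha=0.05))
# roughly 0.14 -- such a study will usually miss a real but weak effect

# Subjects per group needed to reach 80% power for the same weak effect:
print(power_calc.solve_power(effect_size=0.2, power=0.8, alpha=0.05))
# roughly 390 per group
```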
link |
01:09:25.280
What are the major disagreements in theories and effects that you've observed
link |
01:09:35.120
throughout your career that still stand today? Well, you've worked on several fields. Yeah.
link |
01:09:40.640
But what is still out there as a major disagreement that pops into your mind?
link |
01:09:47.200
I've had one extreme experience of, you know, controversy with somebody who really doesn't
link |
01:09:54.800
like the work that Amos Tversky and I did. And he's been after us for 30 years or more,
link |
01:10:01.520
at least. Do you want to talk about it? Well, I mean, his name is Gerd Gigerenzer. He's a well
link |
01:10:06.800
known German psychologist. And that's the one controversy which, for me, has been unpleasant, and
link |
01:10:17.520
no, I don't particularly want to talk about it. But are there open questions, even in
link |
01:10:23.200
your own mind? Every once in a while, you know, we talked about semi-autonomous vehicles; in my
link |
01:10:29.520
own mind, I see what the data says, but I'm also constantly torn. Do you have things where you
link |
01:10:36.400
or your studies have found something, but you're also intellectually torn about what it means?
link |
01:10:41.440
And there have been maybe disagreements within your own mind about a particular thing?
link |
01:10:47.440
I mean, you know, one of the things that is interesting is how difficult it is for people
link |
01:10:52.720
to change their mind. Essentially, you know, once they're committed, people just don't change their
link |
01:11:01.280
mind about anything that matters. And that is surprising, but it's true about scientists.
link |
01:11:07.600
So the controversy that I described, you know, that's been going on like 30 years,
link |
01:11:13.120
and it's never going to be resolved. And you build a system and you live within that system,
link |
01:11:20.240
and other systems of ideas look foreign to you. And there is very little contact and very little
link |
01:11:29.680
mutual influence. That happens a fair amount. Do you have hopeful advice or a message on that?
link |
01:11:39.280
Thinking about science, thinking about politics, thinking about things that have impact on this
link |
01:11:45.840
world. How can we change our mind? I think that, I mean, on things that matter,
link |
01:11:53.360
you know, which are political or religious, people just don't change their mind.
link |
01:12:02.320
By and large, there's very little that you can do about it.
link |
01:12:07.280
What does happen is that leaders change their mind. So, for example,
link |
01:12:16.240
the public, the American public doesn't really believe in climate change,
link |
01:12:20.720
doesn't take it very seriously. But if some religious leaders decided this is a major
link |
01:12:27.760
threat to humanity, that would have a big effect. So we have the opinions that we have,
link |
01:12:35.280
not because we know why we have them, but because we trust some people and we don't
link |
01:12:40.080
trust other people. And so it's much less about evidence than it is about stories.
link |
01:12:48.160
So one way to change your mind isn't at the individual level; it's that the leaders of
link |
01:12:55.040
the communities you look up to, their stories change, and therefore your mind changes with them.
link |
01:13:01.280
So there's a guy named Alan Turing who came up with the Turing test.
link |
01:13:07.520
What do you think is a good test of intelligence? Perhaps we're drifting
link |
01:13:12.480
into a topic that we're maybe philosophizing about, but what do you think is a good test
link |
01:13:20.160
for intelligence, for an artificial intelligence system?
link |
01:13:23.760
Well, the standard definition of, you know, of artificial general intelligence is that
link |
01:13:31.600
it can do anything that people can do and it can do them better.
link |
01:13:34.880
Yes.
link |
01:13:35.520
And what we are seeing is that in many domains, you have domain-specific,
link |
01:13:45.360
you know, devices or programs or software, and they beat people easily in a specified way.
link |
01:13:52.880
What we are very far from is that general ability, general purpose intelligence.
link |
01:13:59.040
So in machine learning, people are approaching something more general.
link |
01:14:07.360
I mean, AlphaZero was much more general than AlphaGo,
link |
01:14:15.840
but it's still extraordinarily narrow and specific in what it can do.
link |
01:14:21.840
So we're quite far from something that can in every domain think like a human
link |
01:14:28.800
except better.
link |
01:14:30.640
What aspect, so the Turing test has been criticized as natural language conversation
link |
01:14:36.160
that is too simplistic. It's easy to, quote unquote, pass under the constraints specified.
link |
01:14:43.360
What aspect of conversation would impress you if you heard it? Is it humor?
link |
01:14:51.120
What would impress the heck out of you if you saw it in conversation?
link |
01:14:55.440
Yeah, I mean, certainly wit would be impressive and humor would be more impressive than just
link |
01:15:06.080
factual conversation, which I think is easy. And allusions would be interesting, and
link |
01:15:16.240
metaphors would be interesting. I mean, but new metaphors, not practiced metaphors.
link |
01:15:23.680
So there is a lot that would be sort of impressive, that is completely natural in
link |
01:15:31.200
conversation, but that you really wouldn't expect.
link |
01:15:34.480
Does the possibility of creating a human level intelligence or super human level
link |
01:15:39.920
intelligence system excite you, scare you?
link |
01:15:44.160
Well, I mean, how does it make you feel?
link |
01:15:47.360
I find the whole thing fascinating. Absolutely fascinating.
link |
01:15:51.600
So exciting.
link |
01:15:52.320
I think and exciting. It's also terrifying, you know, but I'm not going to be around to see it.
link |
01:16:01.760
And so I'm curious about what is happening now, but also know that predictions about it are silly.
link |
01:16:11.840
We really have no idea what it will look like 30 years from now. No idea.
link |
01:16:16.640
Speaking of silly, bordering on the profound, may I ask the question of, in your view,
link |
01:16:26.160
what is the meaning of it all, the meaning of life?
link |
01:16:30.400
These descendant of great apes that we are, why, what drives us as a civilization, as a human being,
link |
01:16:38.400
as a force behind everything that you've observed and studied?
link |
01:16:42.080
Is there any answer or is it all just a beautiful mess?
link |
01:16:49.680
There is no answer that I can understand.
link |
01:16:54.320
And I'm not actively looking for one.
link |
01:17:00.080
Do you think an answer exists?
link |
01:17:02.000
No, there is no answer that we can understand.
link |
01:17:05.760
I'm not qualified to speak about what we cannot understand, but there is.
link |
01:17:10.080
I know that we cannot understand reality.
link |
01:17:16.880
I mean, there are a lot of things that we can do. I mean, gravitational waves.
link |
01:17:21.520
I mean, that's a big moment for humanity.
link |
01:17:24.160
And when you imagine that ape being able to go back to the Big Bang, that's that.
link |
01:17:33.360
But the why is bigger than us.
link |
01:17:36.800
The why is hopeless, really.
link |
01:17:40.720
Danny, thank you so much. It was an honor. Thank you for speaking today.
link |
01:17:43.360
Thank you.
link |
01:18:13.600
And now let me leave you with some words of wisdom from Daniel Kahneman.
link |
01:18:18.400
Intelligence is not only the ability to reason,
link |
01:18:21.760
it is also the ability to find relevant material in memory
link |
01:18:25.440
and to deploy attention when needed.
link |
01:18:27.680
Thank you for listening and hope to see you next time.