
Robin Hanson: Alien Civilizations, UFOs, and the Future of Humanity | Lex Fridman Podcast #292



link |
00:00:00.000
we can actually figure out where are the aliens out there in spacetime by being clever about the
link |
00:00:04.160
few things we can see, one of which is our current date. And so now that you have this living
link |
00:00:09.040
cosmology, we can tell the story that the universe starts out empty. And then at some point, things
link |
00:00:14.640
like us appear very primitive. And then some of those stop being quiet and expand. And then for a
link |
00:00:20.960
few billion years, they expand, and then they meet each other. And then for the next 100 billion
link |
00:00:25.440
years, they commune with each other. That is, the usual models of cosmology say that in roughly
link |
00:00:32.640
150 billion years, the expansion of the universe will happen so much that all you'll have left is
link |
00:00:37.920
some galaxy clusters that are sort of disconnected from each other. But before then,
link |
00:00:43.200
they will interact. There will be this community of all the grabby alien civilizations, and each
link |
00:00:48.160
one of them will hear about and even meet thousands of others. And we might hope to join them someday.
link |
00:00:55.040
And become part of that community.
link |
00:00:58.800
The following is a conversation with Robin Hanson, an economist at George Mason University,
link |
00:01:04.000
and one of the most fascinating, wild, fearless, and fun minds I've ever gotten a chance to accompany
link |
00:01:09.200
for a time in exploring questions of human nature, human civilization, and alien life out there
link |
00:01:16.400
in our impossibly big universe. He is the coauthor of a book titled, The Elephant in the Brain,
link |
00:01:22.880
Hidden Motives in Everyday Life, The Age of Em: Work, Love, and Life When Robots Rule the Earth,
link |
00:01:29.680
and a fascinating recent paper I recommend on, quote, grabby aliens titled, If Loud Aliens Explain
link |
00:01:37.600
Human Earliness, Quiet Aliens Are Also Rare. This is the Lex Fridman Podcast. To support it,
link |
00:01:45.120
please check out our sponsors in the description. And now, dear friends, here's Robin Hanson.
link |
00:01:50.960
Robin, you are working on a book about, quote, grabby aliens. This is a technical term,
link |
00:01:57.840
like the big bang. So what are grabby aliens?
link |
00:02:01.840
Grabby aliens expand fast into the universe, and they change stuff.
link |
00:02:09.200
That's the key concept. So if they are out there, we would notice. That's the key idea.
link |
00:02:14.880
So the question is, where are the grabby aliens? So Fermi's question is, where are the aliens?
link |
00:02:21.840
And we could vary that in two terms, right? Where are the quiet, hard to see aliens and where are
link |
00:02:27.200
the big, loud grabby aliens? So it's actually hard to say where all the quiet ones are, right?
link |
00:02:34.720
There could be a lot of them out there because they're not doing much. They're not making a
link |
00:02:38.560
big difference in the world. But the grabby aliens, by definition, are the ones you would see.
link |
00:02:42.960
We don't know exactly what they do where they went. But the idea is they're in some sort of
link |
00:02:48.960
competitive world where each part of them is trying to grab more stuff and do something with it.
link |
00:02:56.080
And almost surely whatever is the most competitive thing to do with all the stuff they grab
link |
00:03:03.200
isn't to leave it alone the way it started, right? So we humans, when we go around the earth
link |
00:03:08.480
and use stuff, we change it. We turn a forest into a farmland, turn a harbor into a city.
link |
00:03:14.720
So the idea is aliens would do something with it. And so we're not exactly sure what it would
link |
00:03:20.000
look like, but it would look different. So somewhere in the sky, we would see big spheres
link |
00:03:24.320
of different activity, where things had been changed because they had been there.
link |
00:03:28.960
Expanding spheres. Right.
link |
00:03:30.720
So as you expand, you aggressively interact and change the environment.
link |
00:03:34.480
So the word grabby versus loud, you're using them sometimes synonymously, sometimes not.
link |
00:03:40.560
Grabby, to me, is a little bit more aggressive. What does it mean to be loud? What does it mean
link |
00:03:48.000
to be grabby? What's the difference? And loud in what ways? A visual? Is it sound? Is it some other
link |
00:03:54.320
physical phenomenon like gravitational waves? Are you using this in kind of a broad
link |
00:03:59.840
philosophical sense or there's a specific thing that it means to be loud in this universe of ours?
link |
00:04:07.280
My coauthors and I put together a paper with a particular mathematical model.
link |
00:04:12.800
And so we use the term grabby aliens to describe that more particular model. And the idea is
link |
00:04:18.320
it's a more particular model of the general concept of loud. So loud would just be the general idea
link |
00:04:23.600
that they would be really obvious. So grabby is the technical term. Is it in the title of the
link |
00:04:28.160
paper? It's in the body. The title is actually about loud and quiet. You want to distinguish
link |
00:04:36.000
your particular model of things from the general category of things everybody else might talk
link |
00:04:39.760
about. So that's how we distinguish. The paper's title is, If Loud Aliens Explain Human
link |
00:04:44.400
Earliness, Quiet Aliens Are Also Rare. If life on earth, God, this is such a good abstract,
link |
00:04:50.400
if life on earth had to achieve n hard steps to reach humanity's level,
link |
00:04:57.120
then the chance of this event rose as time to the nth power. So we'll talk about power,
link |
00:05:02.080
we'll talk about linear increase. So what is the technical definition of grabby?
link |
00:05:10.080
How do you envision grabbiness? And why, in contrast, aren't humans grabby?
link |
00:05:18.000
So where's that line? Is it well definable? What is grabby? What is non grabby?
link |
00:05:22.880
We have a mathematical model of the distribution of advanced civilizations, i.e. aliens in space
link |
00:05:29.920
and time. That model has three parameters, and we can set each one of those parameters from data,
link |
00:05:37.280
and therefore we claim this is actually what we know about where they are in space time.
link |
00:05:42.880
So the key idea is they appear at some point in space time,
link |
00:05:46.160
and then after some short delay, they start expanding, and they expand at some speed.
link |
00:05:53.280
And the speed is one of those parameters. That's one of the three. And the other two parameters
link |
00:05:57.680
are about how they appear in time. That is, they appear at random places, and they appear in time
link |
00:06:03.920
according to a power law, and that power law has two parameters, and we can fit each of those
link |
00:06:08.880
parameters to data. And so then we can say, now we know. We know the distribution of advanced
link |
00:06:14.640
civilizations in space and time. So we are right now a new civilization, and we have not yet started
link |
00:06:20.560
to expand. But plausibly, we would start to do that within, say, 10 million years of the
link |
00:06:25.440
current moment. That's plenty of time. And 10 million years is a really short duration
link |
00:06:30.240
in the history of the universe. So we are at the moment a sort of random sample of the kind of
link |
00:06:36.720
times at which an advanced civilization might appear, because we may or may not become grabby,
link |
00:06:41.200
but if we do, we'll do it soon. And so our current date is a sample, and that gives us
link |
00:06:45.520
one of the other parameters. The second parameter is the constant in front of the power law, and
link |
00:06:50.000
that's derived from our current date. So power law, what is the n in the power law?
link |
00:06:58.880
That's the complicated thing to explain. Right. Advanced life appeared by going through a sequence
link |
00:07:05.120
of hard steps. So starting with very simple life, and here we are at the end of this process at
link |
00:07:11.280
a pretty advanced life. And so we had to go through some intermediate steps, such as sexual
link |
00:07:16.640
selection, photosynthesis, multicellular animals. And the idea is that each of those steps was hard.
link |
00:07:24.480
Evolution just took a long time searching in a big space of possibilities to find each of those
link |
00:07:29.440
steps. And the challenge was to achieve all of those steps by a deadline of when the planets
link |
00:07:36.480
would no longer host simple life. And so Earth has been really lucky compared to
link |
00:07:42.560
all the other billions of planets out there, in that we managed to achieve all these steps
link |
00:07:47.120
in the short time of the five billion years that Earth can support simple life.
link |
00:07:53.840
So not all steps, but a lot of them, because we don't know how many steps there are before
link |
00:07:57.600
you start the expansion. These are all the steps from the birth of life to the initiation of major
link |
00:08:04.560
expansion. Right. So we're pretty sure that it would happen really soon so that it couldn't be
link |
00:08:10.080
the same sort of a hard step as the last ones in terms of taking a long time. So
link |
00:08:14.160
when we look at the history of Earth, we look at the durations of the major things that have happened,
link |
00:08:19.680
and that suggests that there's roughly, say, six hard steps that happened, say between three and 12,
link |
00:08:27.680
and that we have just achieved the last one that would take a long time.
link |
00:08:32.240
Which is?
link |
00:08:33.920
Well, we don't know. But whatever it is, we've just achieved the last one.
link |
00:08:38.400
Are we talking about humans or aliens here? So let's talk about some of these steps. So
link |
00:08:42.960
Earth is really special in some way. We don't exactly know the level of specialness. We don't
link |
00:08:48.000
really know which steps were the hardest or not, because we just have a sample of one.
link |
00:08:52.960
But you're saying that there's three to 12 steps that we have to go through
link |
00:08:56.960
to get to where we are, that are hard steps, hard to find by something that
link |
00:09:01.280
took a long time and is unlikely. There's a lot of ways to fail. There's a lot more
link |
00:09:08.320
ways to fail than to succeed. The first step would be sort of the very simplest form of life of any
link |
00:09:14.080
sort. And then we don't know whether that first sort is the first sort that we see in the historical
link |
00:09:21.280
record or not. But then some other steps are, say, the development of photosynthesis,
link |
00:09:26.080
the development of sexual reproduction. There's the development of eukaryotic cells,
link |
00:09:32.000
which are a certain kind of complicated cell that seems to have only appeared once.
link |
00:09:36.640
And then there's multicellularity, that is, multiple cells coming together into large organisms like us.
link |
00:09:41.920
And in this statistical model of trying to fit all these steps into a finite window,
link |
00:09:47.920
the model actually predicts that these steps could be of varying difficulties. That is,
link |
00:09:52.080
they could each take different amounts of time on average. But if you're lucky enough that they
link |
00:09:56.960
all appear in a very short time, then the durations between them will be roughly equal.
link |
00:10:02.400
And the time remaining left over in the rest of the window will also be the same length.
link |
00:10:07.200
So we at the moment have roughly a billion years left on earth until complex life like us would no
link |
00:10:13.360
longer be possible. Life appeared roughly 400 million years after the very first time life
link |
00:10:18.720
was possible at the very beginning. So those two numbers right there give you the rough estimate
link |
00:10:23.600
of six hard steps. Just to build up an intuition here. So we're trying to create a simple mathematical
link |
00:10:29.920
model of how life emerges and expands in the universe. And there's a section in this paper,
link |
00:10:38.480
how many hard steps? Question mark. Right. The two most plausibly diagnostic earth durations seem
link |
00:10:45.200
to be the one remaining after now before earth becomes uninhabitable for complex life. So you
link |
00:10:50.640
estimate how long earth lasts, how many hard steps. There's windows for doing different hard
link |
00:10:58.880
steps. And you can, sort of like queuing theory, mathematically estimate the solution
link |
00:11:08.880
or the passing of the hard steps, the taking of the hard steps, with a sort of coldly mathematical
link |
00:11:14.880
look. If pre-expansionary life requires a number of steps, what is the probability of taking
link |
00:11:24.320
those steps on an earth that lasts a billion years or two billion years or five billion years
link |
00:11:29.360
or 10 billion years? And you say solving for E using the observed durations of 1.1 and 0.4
link |
00:11:37.440
then gives E values of 3.9 and 12.5, range 5.7 to 26, suggesting a middle estimate of at least six.
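To make the arithmetic behind those quoted numbers concrete, here is a minimal sketch, assuming the standard hard-steps result that the E steps and the final leftover gap split the habitable window roughly equally; the ~5.4 Gyr window below is my assumption, chosen to be consistent with the quoted figures rather than taken from the paper:

```python
# Minimal sketch of the quoted estimate, not the paper's full calculation.
# If E hard steps all succeed within a window of length T, the E steps and
# the leftover gap have roughly equal expected lengths, ~T/(E+1) each,
# so an observed duration d implies E ~= T/d - 1.

T = 5.4  # assumed habitable window for Earth, in billions of years (Gyr)

for d, label in [(1.1, "remaining before Earth is uninhabitable"),
                 (0.4, "from first habitability to first life")]:
    print(f"{label}: d = {d} Gyr -> E = {T / d - 1:.1f}")
# Prints E = 3.9 and E = 12.5, the two values quoted above.
```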
link |
00:11:46.000
That's where you said six hard steps. Right. Just to get to where we are. Right. We started at the
link |
00:11:53.440
bottom. Now we're here. And that took six steps on average. The key point is on average, these things
link |
00:12:00.320
on any one random planet would take trillions of years, just a really long time. And so we're
link |
00:12:07.680
really lucky that they all happened really fast in a short time before our window closed. And the
link |
00:12:13.760
chance of that happening in that short window goes as that time period to the power of the
link |
00:12:19.360
number of steps. And so that was where the power we talked about before came from. And so that means
link |
00:12:25.360
in the history of the universe, we should overall roughly expect advanced life to appear as a power
link |
00:12:30.800
law in time. So that very early on, there was very little chance of anything appearing. And then
link |
00:12:36.320
later on, as things appear, other things are appearing somewhat closer to them in time because
link |
00:12:41.440
they're all going as this power law. What is the power law? Can we, for people who are not
link |
00:12:47.040
mathematically inclined, can you describe what a power law is? So say the function x is linear and x squared
link |
00:12:54.320
is quadratic. So it's the power of two. If we make x to the three, that's cubic or the power of three.
link |
00:13:01.280
And so x to the sixth is the power of six. And so we'd say life appears in the universe
link |
00:13:08.560
on a planet like Earth in proportion to the time that it's been ready for life
link |
00:13:15.520
to appear. And that over the universe in general, it'll appear at roughly a power law like that.
link |
00:13:23.520
What is the x, what is n? Is it the number of hard steps?
link |
00:13:27.440
Yes, the number of hard steps. So that's the idea.
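To see that t-to-the-n scaling concretely: if each hard step takes an exponentially distributed time with a very long mean, the chance that all n steps finish within a short window t grows as t to the n, so doubling the window multiplies the chance by 2 to the n. A small self-contained sketch with illustrative numbers (not the paper's):

```python
import math

def chance_all_steps_done(t, n, mean, terms=40):
    # P(n sequential exponential steps, each with the given mean, all finish
    # by time t) equals P(Poisson(t/mean) >= n); summing the tail directly
    # avoids floating-point cancellation when the chance is tiny.
    x = t / mean
    return sum(math.exp(-x) * x ** k / math.factorial(k)
               for k in range(n, n + terms))

n, mean = 6, 1000.0  # six hard steps, each expected to take ~1000 time units
p5 = chance_all_steps_done(5, n, mean)
p10 = chance_all_steps_done(10, n, mean)
print(p10 / p5)  # ~64, i.e. 2**6: the chance grows as t to the power n
```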
link |
00:13:29.600
So it's like, if you're gambling, and you're doubling up every time, this is the probability
link |
00:13:34.800
of you just keeping on winning. So it gets very unlikely very quickly. And so we're the result of this
link |
00:13:44.000
unlikely chain of successes. It's actually a lot like cancer. So the dominant model of cancer in
link |
00:13:49.760
an organism like each of us is that we have all these cells. And in order to become cancerous,
link |
00:13:54.960
a single cell has to go through a number of mutations. And these are very unlikely mutations.
link |
00:14:00.160
And so any one cell is very unlikely to have all these mutations happen by the time your
link |
00:14:04.960
life span is over. But we have enough cells in our body that the chance of any one cell producing
link |
00:14:10.640
cancer by the end of your life is actually pretty high, more like 40%. And so the chance of cancer
link |
00:14:16.320
appearing in your lifetime also goes as a power law, this power of the number of mutations that's
link |
00:14:20.960
required for any one cell in your body to become cancerous.
link |
00:14:23.920
The longer you live, the more likely you are to have cancer cells.
link |
00:14:28.320
And the power is also roughly six. That is, the chance of you getting cancer goes roughly as the
link |
00:14:34.160
sixth power of the time since you were born.
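The cancer version of the same arithmetic can be sketched the same way: a per-cell chance growing as age to the sixth power, aggregated over a huge number of cells. The parameters below are hypothetical placeholders, tuned only so that the lifetime risk lands near the ~40% figure mentioned; they are not real biology:

```python
import math

N_CELLS = 3e13          # rough order of magnitude for cells in a human body
P_CELL_AT_80 = 1.7e-14  # hypothetical per-cell chance of all 6 mutations by age 80

def cancer_risk(age, ref_age=80.0, n_mutations=6):
    # Per-cell chance grows as age**6; aggregate over all cells:
    # risk = 1 - (1 - p_cell)**N_CELLS, computed stably via log1p/expm1.
    p_cell = P_CELL_AT_80 * (age / ref_age) ** n_mutations
    return -math.expm1(N_CELLS * math.log1p(-p_cell))

for age in (20, 40, 60, 80):
    print(age, f"{cancer_risk(age):.2%}")
# Risk climbs roughly as age**6 while it is small, reaching ~40% by age 80.
```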
link |
00:14:37.120
It is perhaps not lost on people that you're comparing power laws of the survival or the
link |
00:14:45.840
arrival of the human species to cancerous cells.
link |
00:14:49.440
The same mathematical model, but of course we might have a different value assumption about
link |
00:14:55.200
the two outcomes. But of course, from the point of view of cancer,
link |
00:14:57.840
it's a win-win. We'll both get to thrive, I suppose.
link |
00:15:09.120
It is interesting to take the point of view of all kinds of lifeforms on earth,
link |
00:15:13.280
of viruses, of bacteria. They have a very different view.
link |
00:15:18.320
It's like the Instagram channel, Nature is Metal. The ethic under which nature operates doesn't
link |
00:15:25.840
often coincide, correlate with human morals. It seems cold and machine like in the selection
link |
00:15:36.880
process that it performs. I am an analyst. I'm a scholar, an intellectual. And I feel I should
link |
00:15:44.080
carefully distinguish predicting what's likely to happen and then evaluating or judging what I
link |
00:15:50.720
think would be better to happen. And it's a little dangerous to mix those up too closely because then
link |
00:15:56.160
we can have wishful thinking. And so I try typically to just analyze what seems likely to happen
link |
00:16:02.320
regardless of whether I like it or that we do anything about it. And then once you see a rough
link |
00:16:07.680
picture of what's likely to happen if we do nothing, then we can ask, well, what might we
link |
00:16:12.160
prefer? And ask where could the levers be to move it at least a little toward what we might prefer.
link |
00:16:17.200
That's useful. But often doing that just analysis of what's likely to happen if
link |
00:16:22.880
we do nothing offends many people. They find that dehumanizing or cold or metal, as you say,
link |
00:16:30.640
to just say, well, this is what's likely to happen. And it's not your favorite, sorry, but
link |
00:16:37.680
maybe we can do something, but maybe we can't do that much.
link |
00:16:40.160
This is very interesting that the cold analysis, whether it's geopolitics, whether it's medicine,
link |
00:16:50.400
whether it's economics, sometimes misses some very specific aspect of
link |
00:16:59.440
human condition. Like, for example, when you look at a doctor and the act of a doctor helping
link |
00:17:07.280
a single patient, if you do the analysis of that doctor's time and cost of the medicine or the
link |
00:17:14.240
surgery or the transportation of the patient, this is the Paul Farmer question. Is it worth
link |
00:17:20.880
spending 10, 20, $30,000 on this one patient? When you look at all the people that are suffering
link |
00:17:26.880
in the world, that money could be spent so much better. And yet, there's something about human
link |
00:17:32.480
nature that wants to help the person in front of you. And that is actually the right thing to do,
link |
00:17:40.480
despite the analysis. And sometimes when you do the analysis, there's something about the human
link |
00:17:47.440
mind that allows you to not take that leap, that irrational leap to act in this way,
link |
00:17:54.880
that the analysis explains it away. For example, the US government, the DOT, Department of
link |
00:18:03.360
Transportation, puts a value of, I think, like $9 million on a human life. And the moment you
link |
00:18:09.760
put that number on a human life, you can start thinking, well, okay, I can start making decisions
link |
00:18:14.640
about this or that with a sort of cold economic perspective. And then you might lose, you might
link |
00:18:23.280
deviate from a deeper truth of what it means to be human somehow. You have to dance, because then
link |
00:18:30.640
if you put too much weight on the anecdotal evidence on these kinds of human emotions,
link |
00:18:36.880
then you're going to lose, you could also probably more likely deviate from truth.
link |
00:18:42.880
But there's something about that cold analysis. Like, I've been listening to a lot of people
link |
00:18:47.120
coldly analyze wars, war in Yemen, war in Syria, Israel, Palestine, war in Ukraine.
link |
00:18:56.240
And there's something lost when you do a cold analysis of why something happened.
link |
00:19:00.560
When you talk about energy, talking about sort of conflict, competition over resources.
link |
00:19:07.520
When you talk about geopolitics, sort of models of geopolitics and why a certain war happened,
link |
00:19:14.400
you lose something about the suffering that happens. I don't know. It's an interesting
link |
00:19:19.680
thing because you're both, you're exceptionally good at models in all domains, literally.
link |
00:19:28.400
But also there's a humanity to you. So it's an interesting dance. I don't know if you can
link |
00:19:32.640
comment on that dance. Sure. It's definitely true, as you say, that for many people, if you are accurate
link |
00:19:40.800
in your judgment of, say, for a medical patient, what's the chance that this treatment might help?
link |
00:19:47.600
And what's the cost? And compare those to each other. And you might say,
link |
00:19:53.840
this looks like a lot of cost for a small medical gain. And at that point, knowing that fact that
link |
00:20:01.600
might take the wing, the air out of your sails, you might not be willing to do the thing that
link |
00:20:08.720
maybe you feel is right anyway, which is still to pay for it. And then somebody knowing that might
link |
00:20:16.000
want to keep that news from you, not tell you about the low chance of success or the high cost
link |
00:20:20.800
in order to save you this tension, this awkward moment where you might fail to do what they and
link |
00:20:27.600
you think is right. But I think the higher calling, the higher standard to hold you to,
link |
00:20:34.800
which many people can be held to, is to say, I will look at things accurately, I will know the
link |
00:20:40.640
truth, and then I will also do the right thing with it. I will be at peace with my judgment
link |
00:20:46.880
about what the right thing is in terms of the truth. I don't need to be lied to in order to
link |
00:20:51.920
figure out what the right thing to do is. And I think if you do think you need to be lied to in
link |
00:20:56.640
order to figure out what the right thing to do is, you're at a great disadvantage because
link |
00:21:01.280
then people will be lying to you, you will be lying to yourself and you won't be as
link |
00:21:07.200
effective at achieving whatever good you are trying to achieve.
link |
00:21:11.360
But getting the data, getting the facts is step one, not the final step. Absolutely.
link |
00:21:16.800
So it's a, I would say having a good model, getting the good data is step one and it's a burden
link |
00:21:23.520
because you can't just use that data to arrive at sort of the easy, convenient thing. You have
link |
00:21:33.520
to really deeply think about what is the right thing. So the dark aspect of
link |
00:21:40.080
data, of models, is you can use it to excuse away actions that aren't ethical. You can use data
link |
00:21:50.160
to basically excuse away anything. But not looking at data lets you fool yourself, to pretend and
link |
00:21:56.880
think that you're doing good when you're not. Exactly. But it is a burden. It doesn't excuse you
link |
00:22:02.960
from still being human and deeply thinking about what is right, that very kind of gray area,
link |
00:22:09.440
that very subjective area. That's part of the human condition. But let us return for a time to
link |
00:22:16.960
aliens. So you started to define sort of the model, the parameters of grabbiness.
link |
00:22:25.120
Right, or as we approach grabbiness. So what happens?
link |
00:22:29.280
So again, there are three parameters. Yes. There's the speed at which they expand.
link |
00:22:34.320
There's the rate at which they appear in time and that rate has a constant and a power. So we've
link |
00:22:40.160
talked about the history of life on earth suggests that power is around six, but maybe three to 12.
link |
00:22:44.640
And we can say that constant comes from our current date, sort of sets the overall rate.
link |
00:22:50.480
And the speed, which is the last parameter, comes from the fact that when we look in the sky,
link |
00:22:55.200
we don't see them. So the model predicts very strongly that if they were expanding slowly,
link |
00:22:59.840
say 1% of the speed of light, our sky would be full of vast spheres that were full of activity.
link |
00:23:06.560
That is, at a random time when a civilization is first appearing, if it looks out into its sky,
link |
00:23:12.320
it would see many other grabby alien civilizations in the sky and they would be much bigger than
link |
00:23:16.320
the full moon. There'd be huge spheres in the sky and they would be visibly different. We don't
link |
00:23:20.720
see them. Can we pause for a second? Okay. There's a bunch of hard steps that earth had to pass
link |
00:23:27.520
to arrive at this place where we are currently, where we're starting to launch rockets out into space.
link |
00:23:33.040
We're kind of starting to expand. A bit. Right. Very slowly. Okay. But this is like the birth.
link |
00:23:38.720
If you look at the entirety of the history of earth, we're now at this precipice of expansion.
link |
00:23:46.640
We could. We might not choose to, but if we do, we will do it in the next 10 million years.
link |
00:23:51.760
10 million. Wow. Time flies when you're having fun. I was giving more time on the cosmological
link |
00:23:58.080
scale. So that is, it might be only a thousand. But the point is, even if it's up to 10 million,
link |
00:24:02.160
that hardly makes any difference to the model. So I might as well give you 10 million.
link |
00:24:05.200
This makes me feel, I was so stressed about planning what I'm going to do today.
link |
00:24:10.400
Right. You got plenty of time. Plenty of time. I just need to be generating some offspring quickly
link |
00:24:16.800
here. Okay. So in this moment, this 10 million year gap or window when we start expanding,
link |
00:24:28.000
and you're saying, okay, so this is an interesting moment where there's a bunch of other alien
link |
00:24:33.040
civilizations that might, at some history of the universe arrived at this moment, were here.
link |
00:24:38.000
They passed all the hard steps. There's a model for how likely it is that that happens.
link |
00:24:44.000
And then they start expanding. And you think of an expansion as almost like a sphere.
link |
00:24:48.720
Right. So when you say speed, we're talking about the speed of the radius growth.
link |
00:24:53.760
Exactly. The surface, how fast the surface grows. Okay. And so you're saying that there
link |
00:24:58.160
is some speed for that expansion, average speed, and then we can play with that parameter. And
link |
00:25:05.280
if that speed is super slow, then maybe that explains why we haven't seen anything. If it's
link |
00:25:10.720
super fast, well, if the slow would create the puzzle, it's low predicts we would see them,
link |
00:25:15.520
but we don't see them. Okay. And so the way to explain that is that they're fast. So the idea is,
link |
00:25:20.560
if they're moving really fast, then we don't see them till they're almost here.
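The "fast or visible" trade-off can be made concrete with one line of geometry. In a toy flat-space picture that ignores cosmic expansion (my simplification, not the paper's full cosmological calculation): a sphere born at distance d arrives here after d/v, but its first light arrives after only d/c, so it is visible in transit for a fraction 1 - v/c of its approach:

```python
# Toy flat-space sketch: a grabby sphere born at distance d reaches us after
# d/v, but its first light reaches us after d/c, so the fraction of its
# approach during which we can see it coming is 1 - v/c, independent of d.
c = 1.0
for v in (0.01, 0.1, 1 / 3, 0.9, 0.99):
    print(f"v = {v:.2f}c: visible for {1 - v / c:.0%} of its approach")
# Slow expanders would be on display for almost their entire approach
# (a sky full of spheres); near-lightspeed ones are barely seen before arrival.
```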
link |
00:25:23.600
Okay. This is counterintuitive. All right. Hold on a second. So I think this works best when I
link |
00:25:29.520
say a bunch of dumb things. Okay. And then you elucidate the full complexity and the beauty
link |
00:25:38.080
of the dumbness. Okay. So there's these spheres out there in the universe that are made visible
link |
00:25:45.760
because they're sort of using a lot of energy. So they're generating a lot of light stuff. They're
link |
00:25:50.320
changing things. They're changing things. And change would be visible a long way off.
link |
00:25:55.920
Yes. They would take apart stars, rearrange them, restructure galaxies. They would just
link |
00:26:00.480
do all kinds of big, huge stuff. Okay. If they're expanding slowly, we would see a lot of them
link |
00:26:08.240
because the universe is old, old enough to where we would see that. We're assuming we're
link |
00:26:13.440
just typical, you know, maybe at the 50th percentile of them. So like half of them have appeared so
link |
00:26:18.320
far. The other half will still appear later. And the math of our best estimate is that they appear
link |
00:26:26.800
roughly once per million galaxies. And we would meet them in roughly a billion years
link |
00:26:33.120
if we expanded out to meet them. So we're looking at a grabby aliens model, 3D sim, right?
link |
00:26:39.840
What's this? That's the actual name of the video. By the time we get to 13.8 billion
link |
00:26:46.560
years, the fun begins. Okay. So this is, this is, we're watching a three dimensional sphere
link |
00:26:55.120
rotating. I presume that's the universe, and then grabby aliens are expanding and filling that
link |
00:27:00.240
universe. Exactly, with all kinds of fun. Pretty soon it's all full. So that's how
link |
00:27:07.680
the grabby aliens come in contact, first of all, with other aliens and then
link |
00:27:13.680
with us humans. The following is a simulation of the grabby aliens model of alien civilizations.
link |
00:27:20.240
Civilizations are born that expand outwards at constant speed. A spherical region of space is
link |
00:27:26.240
shown. By the time we get to 13.8 billion years, this sphere will be about 3000 times as wide as
link |
00:27:33.840
the distance from the Milky Way to Andromeda. Okay. This is fun. It's huge. Okay. It's huge.
link |
00:27:40.240
All right. So why don't we see, we're one little tiny, tiny, tiny, tiny dot in that giant,
link |
00:27:49.840
giant sphere. Right. Why don't we see any of the grabby aliens?
link |
00:27:55.200
It depends on how fast they expand. So you could see that if they expanded at the speed of light,
link |
00:28:00.480
you wouldn't see them until they were here. So like out there, if somebody is destroying the
link |
00:28:04.960
universe with a vacuum decay, there's this doomsday scenario where somebody somewhere could change the
link |
00:28:13.280
vacuum of the universe and that would expand at the speed of light and basically destroy
link |
00:28:16.560
everything it hit. But you'd never see that until it got here because it's expanding at the speed of
link |
00:28:20.480
light. If you're expanding really slowly, then you see it from a long way off. So the fact we don't
link |
00:28:25.520
see anything in the sky tells us they're expanding fast, say over a third the speed of light. And
link |
00:28:30.800
that's really, really fast. But that's what you have to believe if you look out and you don't see
link |
00:28:37.040
anything. Now you might say, well, maybe I just don't want to believe this whole model. Why should
link |
00:28:40.800
I believe this whole model at all? And our best evidence why you should believe this model is our
link |
00:28:46.320
early date. We are right now at almost 14 billion years into the universe on a planet around a star
link |
00:28:55.120
that's roughly five billion years old. But the average star out there will last roughly five
link |
00:29:02.080
trillion years. That is a thousand times longer. And remember that power law, it says that the chance
link |
00:29:09.520
of advanced life appearing on a planet goes as the sixth power of the time. So if a planet
link |
00:29:14.960
lasts a thousand times longer, then the chance of it appearing on that planet, if everything would
link |
00:29:20.720
stay empty at least, is a thousand to the sixth power, or 10 to the 18. So there's an enormous, overwhelming
link |
00:29:28.400
chance that if the universe would just stay sitting empty and waiting for advanced life to appear,
link |
00:29:32.960
when it would appear, it would be way at the end of all these planet lifetimes. That is, on long-lived
link |
00:29:40.000
planets near the end of their lifetimes, trillions of years into the future. But we're really early
link |
00:29:45.200
compared to that. And our explanation is at the moment, as you saw in the video, the universe
link |
00:29:49.760
is filling up. In roughly a billion years, it'll all be full. And at that point, it's too late for
link |
00:29:54.320
advanced life to show up. So you had to show up now before that deadline. Okay, can we break
link |
00:29:59.280
that apart a little bit? Okay. Or linger on some of the things you said. So with the power law,
link |
00:30:04.480
the things we've done on earth, the model you have says that it's very unlikely. Like we're lucky
link |
00:30:11.680
SOBs. Is that mathematically correct to say? We're crazy early. That is, when early
link |
00:30:19.440
means early in the history of the universe. Okay, so given this model,
link |
00:30:27.200
how do we make sense of that? If we're super, can we just be the lucky ones?
link |
00:30:31.600
Well, 10 to the 18 lucky, you know, how lucky do you feel? So, you know,
link |
00:30:37.840
that's pretty lucky, right? You know, 10 to the 18 is a billion billion. So then if you were just
link |
00:30:44.000
being honest and humble, what does that mean? It means one of the assumptions
link |
00:30:49.920
that implied we're crazy early must be wrong. Yeah, that's what it means. So the key assumption
link |
00:30:54.640
we suggest is that the universe would stay empty. So most life would appear like 1000 times
link |
00:31:02.000
later than now, if everything would stay empty waiting for it to appear. So what is not empty?
link |
00:31:08.160
So the grabby aliens are filling the universe right now; roughly at this moment, they've filled
link |
00:31:11.840
half of the universe, and they've changed it. And when they fill everything, it's too late for
link |
00:31:16.320
stuff like us to appear. But wait, hold on a second. Did anyone help us get lucky? If it's so
link |
00:31:23.600
difficult, how did we do it? So it's like cancer, right? There's all these cells, each of which
link |
00:31:29.920
randomly does or doesn't get cancer. And eventually some cell gets cancer. And, you know, we were one
link |
00:31:35.840
of those. But hold on a second. Okay. But we got it early. Early compared to the prediction
link |
00:31:44.000
with an assumption that's wrong. So that's how we do a lot of, you know, theoretical analysis.
link |
00:31:49.360
You have a model that makes a prediction that's wrong, then that helps you reject that model.
link |
00:31:53.120
Okay, let's try to understand exactly where the wrongness is. So the assumption is that the universe
link |
00:31:58.080
is empty, stays empty, and waits until this advanced life appears in trillions of years.
link |
00:32:05.680
That is, if the universe would just stay empty, if there was just, you know, nobody else out there,
link |
00:32:10.160
then when you should expect advanced life to appear, if you're the only one in the universe,
link |
00:32:15.200
when should you expect to appear? You should expect to appear trillions of years in the future.
link |
00:32:19.520
I see. Right. So this is a very sort of nuanced mathematical assumption. I don't think we can
link |
00:32:25.840
intuit it cleanly with words. But if you assume that you're just waiting, the universe stays empty
link |
00:32:34.400
and you're waiting for one life civilization to pop up, then it should happen very
link |
00:32:42.800
late, much later than now. And if you look at Earth, the way things happen on Earth,
link |
00:32:49.120
it happened much, much, much, much, much earlier than it was supposed to according to this model,
link |
00:32:53.440
if you take the initial assumption. Therefore, you can say, well, the initial assumption
link |
00:32:57.920
of the universe staying empty is very unlikely. Right. Okay. And the other alternative
link |
00:33:03.920
theory is the universe is filling up and will fill up soon. And so we are typical for the origin
link |
00:33:09.600
date of things that can appear before the deadline. Before the... okay, it's filling up. So why don't we
link |
00:33:14.720
see anything if it's filling up? Because they're expanding really fast. Close to the speed of light.
link |
00:33:19.840
Exactly. So we will only see it when it's here. Almost here. Okay. What are the ways in which
link |
00:33:27.120
we might see a quickly expanding sphere? This is both exciting and terrifying. It is terrifying. It's
link |
00:33:33.600
like watching a truck, like driving at you at a hundred miles an hour. And right. So we would see
link |
00:33:40.480
spheres in the sky, at least one sphere in the sky growing very rapidly. And like very rapidly.
link |
00:33:48.240
Right? Yes. Very rapidly. So there's, you know, different timescales here,
link |
00:33:54.080
because we were just talking about 10 million years. This would be, you might see it 10 million
link |
00:33:59.120
years in advance coming. I mean, you still might have a long warning. Or again, the universe is
link |
00:34:04.080
14 billion years old. The typical origin times of these things are spread over several billion
link |
00:34:09.600
years. So the chance of one originating, you know, very close to you in time is very low.
link |
00:34:14.800
So they still might take millions of years from the time you see it to the time it gets here.
link |
00:34:21.360
So for millions of years, you would be terrified of this sphere coming at you.
link |
00:34:25.440
But, but, but coming at you very fast. So if they're traveling close to the speed of light,
link |
00:34:29.200
but they're coming from a long way away. So remember, the rate at which they appear is one
link |
00:34:34.000
per million galaxies. Right. So they're roughly a hundred galaxies away.
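The spacing and warning-time arithmetic, as a sketch; the galaxy spacing figure below is my rough assumption, since the transcript only fixes "one per million galaxies" and "roughly a hundred galaxies away", and the expansion speeds are illustrative:

```python
civs_per_galaxy = 1e-6
spacing_galaxies = (1 / civs_per_galaxy) ** (1 / 3)  # ~100 galaxies away
GALAXY_SPACING_MLY = 3.0  # assumed typical galaxy separation, ~3 million ly
d = spacing_galaxies * GALAXY_SPACING_MLY            # ~300 million ly

# With c = 1 light-year per year, distances in Mly give times in Myr.
# Warning time = (arrival time d/v) - (first-light time d/c).
for v in (1 / 3, 0.5, 0.9):
    warning_myr = d / v - d
    print(f"v = {v:.2f}c: ~{warning_myr:.0f} million years of warning")
# Even at half lightspeed, hundreds of millions of years pass between first
# seeing the sphere and its arrival, matching "millions of years" above.
```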
link |
00:34:39.040
I see. So the delta between the speed of light and their actual travel speed is very important.
link |
00:34:46.560
Right. So even if they're going at, say, half the speed of light,
link |
00:34:49.360
we'll have a long time then. Yeah. But what if they're traveling exactly at the speed of light?
link |
00:34:54.640
Then we wouldn't have much warning, but that's less likely. Well,
link |
00:34:58.560
we can't exclude it. And they could also be somehow traveling faster than the speed of light.
link |
00:35:03.760
But I think we can exclude that, because if they could go faster than the speed of light, then
link |
00:35:08.720
they would just already be everywhere. So in a universe where you can travel
link |
00:35:12.720
faster than the speed of light, you can go backwards in space time. So any time you appeared
link |
00:35:16.800
anywhere in space time, you could just fill up everything. Yeah. And so anybody in the future,
link |
00:35:22.160
whoever appeared, they would have been here by now. Can you exclude the possibility that
link |
00:35:26.400
those kinds of aliens are already here? Well, you have to have a different discussion of that.
link |
00:35:32.960
Right. So let's actually leave that. Let's leave that discussion aside just to
link |
00:35:36.720
linger and understand the grabby alien expansion, which is beautiful and fascinating. Okay.
link |
00:35:44.000
So there's these giant expanding spheres of alien civilizations. Now,
link |
00:35:53.680
when those spheres collide mathematically,
link |
00:35:56.800
it's very likely that we're not the first collision of grabby alien civilizations,
link |
00:36:07.200
I suppose is one way to say it. So there's like the first time the spheres touch each other,
link |
00:36:12.560
recognize each other. Right. They meet. They recognize each other first before they meet.
link |
00:36:19.840
They see each other coming. They see each other coming. And then so there's a bunch of them,
link |
00:36:23.600
there's a combinatorial thing where they start seeing each other coming. And then there's a
link |
00:36:27.520
third neighbor, it's like, what the hell? And then there's a fourth one. Okay. So what does that,
link |
00:36:31.760
you think, look like? What lessons from human nature, the only data we have,
link |
00:36:39.920
can you draw? So the story of the history of the universe here is what I would call a
link |
00:36:44.480
living cosmology. So what I'm excited about in part by this model is that it lets us tell a story
link |
00:36:50.960
of cosmology where there are actors who have agendas. So most ancient peoples, they had cosmologies,
link |
00:36:57.760
the stories they told about where the universe came from and where it's going and what's happening
link |
00:37:01.120
out there. And their stories, they like to have agents and actors, gods or something out there
link |
00:37:04.960
doing things. And lately, our favorite cosmology is dead, kind of boring. We're the only activity
link |
00:37:12.160
we know about or see and everything else just looks dead and empty. But this is now telling us,
link |
00:37:17.920
no, that's not quite right. At the moment, the universe is filling up. And in a few billion
link |
00:37:22.640
years, it'll be all full. And from then on, the history of the universe will be the universe
link |
00:37:27.840
full of aliens. Yeah. So that's a, it's a really good reminder, a really good way to think about
link |
00:37:33.680
cosmologies. We're surrounded by a vast darkness. And we don't know what's going on in that darkness
link |
00:37:40.720
until the light from whatever generates light arrives here. So we kind of, yeah,
link |
00:37:47.120
we look up at the sky, okay, there's stars. Oh, they're pretty. But you don't think about
link |
00:37:53.120
the giant expanding spheres of aliens. Right. Because you don't see them. But now we're
link |
00:37:58.960
dating, looking at the clock. If you're clever, the clock tells you. So I like the analogy with
link |
00:38:03.200
the ancient Greeks. So you might think that an ancient Greek, you know, staring at the universe
link |
00:38:08.080
couldn't possibly tell how far away the sun was or how far away the moon is or how big the earth is
link |
00:38:13.200
All you can see is just big things in the sky. You can't tell. But they were clever enough,
link |
00:38:17.760
actually, to be able to figure out the size of the earth and the distance to the moon and the sun
link |
00:38:21.920
and the size of the moon and sun. That is, they could figure those things out, actually, by being
link |
00:38:27.200
clever enough. And so similarly, we can actually figure out where are the aliens out there in
link |
00:38:30.960
space time by being clever about the few things we can see, one of which is our current date.
link |
00:38:35.680
And so now that you have this living cosmology, we can tell the story that the universe starts
link |
00:38:40.880
out empty. And then at some point, things like us appear very primitive. And then some of those
link |
00:38:46.960
stop being quiet and expand. And then for a few billion years, they expand, and then they meet
link |
00:38:51.440
each other. And then for the next 100 billion years, they commune with each other. That is,
link |
00:38:57.200
the usual models of cosmology say that in roughly 150 billion years, the expansion of the universe
link |
00:39:04.240
will happen so much that all you'll have left is some galaxy clusters that are sort of disconnected
link |
00:39:09.760
from each other. But before then, for the next 100 billion years, they will interact.
link |
00:39:17.440
There will be this community of all the grabby alien civilizations, and each one of them will
link |
00:39:21.760
hear about and even meet thousands of others. And we might hope to join them someday and become
link |
00:39:28.640
part of that community. That's an interesting thing to aspire to.
link |
00:39:32.000
Yes. Interesting is an interesting word. Is the universe of alien civilizations defined by war
link |
00:39:40.880
as much or more than war defined human history? I would say it's defined by competition. And
link |
00:39:52.160
then the question is how much competition implies war. So up until recently, competition
link |
00:40:01.200
defined life on earth. Competition between species and organisms and among humans,
link |
00:40:08.480
competitions among individuals and communities. And that competition often took the form of war
link |
00:40:13.520
in the last 10,000 years. Many people now are hoping or even expecting to sort of suppress
link |
00:40:21.440
and end competition in human affairs. They regulate business competition, they prevent
link |
00:40:28.320
military competition. And that's a future I think a lot of people will like to continue and
link |
00:40:34.960
strengthen. People will like to have something close to world government or world governance or
link |
00:40:39.360
at least a world community. And they will like to suppress war and many forms of business and
link |
00:40:44.640
personal competition over the coming centuries. And they may like that so much that they prevent
link |
00:40:51.760
interstellar colonization, which would become the end of that era. That is interstellar colonization
link |
00:40:56.720
would just return severe competition to human or our descendant affairs. And many civilizations may
link |
00:41:03.280
prefer that and ours may prefer that. But if they choose to allow interstellar colonization,
link |
00:41:08.960
they will have chosen to allow competition to return with great force. That is, there's really
link |
00:41:13.680
not much of a way to centrally govern a rapidly expanding sphere of civilization. And so I think
link |
00:41:20.240
that's one of the most solid things we can predict about grabby aliens: they have accepted
link |
00:41:25.040
competition. And they have internal competition. And therefore, they have the potential for competition
link |
00:41:31.680
when they meet each other at the borders. But whether that's military competition
link |
00:41:36.160
is more of an open question. So military meaning physically destructive, right.
link |
00:41:46.080
So there's a lot to say there. So one idea that you kind of proposed is progress might be
link |
00:41:54.080
maximized through competition, through some kind of healthy competition, some definition of healthy.
link |
00:42:02.240
So like constructive, not destructive competition. So grabby alien civilizations
link |
00:42:10.800
would likely be defined by competition, because they can expand faster, because competition
link |
00:42:16.320
allows innovation and sort of the battle of ideas. The way I would take the logic is to say,
link |
00:42:21.680
you know, competition just happens if you can't coordinate to stop it. And you probably can't
link |
00:42:28.720
coordinate to stop it in an expanding interstellar way. So competition is a fundamental force
link |
00:42:36.640
in the universe. It has been so far. And it would be within an expanding grabby alien civilization.
link |
00:42:43.200
But we today have the chance, many people think and hope, of greatly controlling and limiting
link |
00:42:49.360
competition within our civilization for a while. And that's an interesting choice,
link |
00:42:55.680
whether to allow competition to sort of regain its full force, or whether to suppress
link |
00:43:02.320
and manage it. Well, one of the open questions that has been raised in the past less than 100 years
link |
00:43:12.560
is whether our desire to lessen the destructive nature of competition or the destructive kind
link |
00:43:20.720
of competition will be outpaced by the destructive power of our weapons.
link |
00:43:28.720
Sort of if nuclear weapons and weapons of that kind become more destructive than our desire
link |
00:43:38.400
for peace, then all it takes is one asshole at the party to ruin the party.
link |
00:43:45.040
It takes one asshole to make a delay, but not that much of a delay on the cosmological
link |
00:43:51.040
scales we're talking about. So even a vast nuclear war, if it happened here right now on Earth,
link |
00:43:59.520
it would not kill all humans. It certainly wouldn't kill all life. And so human civilization
link |
00:44:07.040
would return within 100,000 years. So all the history of atrocities,
link |
00:44:14.240
and if you look at the Black Plague, which is not a human-caused atrocity, or whatever.
link |
00:44:26.400
There are a lot of military atrocities in history.
link |
00:44:29.200
Absolutely. In the 20th century. Those challenge us to think about human nature, but
link |
00:44:36.720
the cosmic scale of time and space, they do not stop the human spirit, essentially. Humanity
link |
00:44:45.440
goes on. Through all the atrocities, it goes on, most likely. So even a nuclear war isn't
link |
00:44:52.480
enough to destroy us or to stop our potential from expanding. But we could institute a regime
link |
00:45:01.280
of global governance that limited competition, including military and business competition
link |
00:45:06.160
of sorts, and that could prevent our expansion. Of course, to play devil's advocate,
link |
00:45:12.160
global governance is centralized power, power corrupts, and absolute power corrupts absolutely.
link |
00:45:25.360
One of the aspects of competition that's been very productive is not letting any one person,
link |
00:45:33.920
any one country, any one center of power become absolutely powerful. Because that's another
link |
00:45:41.120
lesson: it seems to corrupt. There's something about ego in the human mind that seems to be
link |
00:45:46.080
corrupted by power. So when you say global governance, that terrifies me more than the
link |
00:45:53.760
possibility of war. Because it's... I think people will be less terrified
link |
00:45:59.760
than you are right now. And let me try to paint the picture from their point of view. This isn't
link |
00:46:04.640
my point of view, but I think it's going to be a widely shared point of view.
link |
00:46:07.840
Yes. This is two devil's advocates arguing, two devils.
link |
00:46:11.200
Okay. So for the last half century and into the continuing future, we actually have had
link |
00:46:19.200
a strong elite global community that shares a lot of values and beliefs and has created
link |
00:46:26.960
a lot of convergence in global policy. So if you look at electromagnetic spectrum or
link |
00:46:32.320
medical experiments, or pandemic policy, or nuclear power energy, or regulating airplanes,
link |
00:46:39.840
or just in a wide range of areas, in fact, the world has very similar regulations and rules
link |
00:46:46.080
everywhere. And it's not a coincidence because they are part of a world community where people
link |
00:46:51.360
get together at places like Davos, et cetera, where world elites want to be respected by other
link |
00:46:57.680
world elites, and they have a convergence of opinion, and that produces something like
link |
00:47:04.960
global governance, but without a global center. This is what human mobs or communities have
link |
00:47:10.880
done for a long time. That is, humans can coordinate together on shared behavior without a center
link |
00:47:15.920
by having gossip and reputation within a community of elites. And that is what we have been doing
link |
00:47:22.560
and are likely to do a lot more of. So for example, one of the things that's happening,
link |
00:47:27.840
say, with the war in Ukraine is that this world community of elites has decided that they disapprove
link |
00:47:33.600
of the Russian invasion, and they are coordinating to pull resources together from all around the
link |
00:47:38.640
world in order to oppose it. And they are proud of sharing that opinion, and they feel that
link |
00:47:46.080
they are morally justified in their stance there. And it's this kind of event that actually brings
link |
00:47:53.920
world elite communities together, where they come together and they push a particular policy
link |
00:47:59.440
and position that they share and that they achieve successes. And the same sort of passion
link |
00:48:04.160
animates global elites with respect to, say, global warming or global poverty and other
link |
00:48:09.120
sorts of things. And they are, in fact, making progress on those sorts of things through shared
link |
00:48:14.800
global community of elites. And in some sense, they are slowly walking toward global governance,
link |
00:48:22.240
slowly strengthening various world institutions of governance, but cautiously, carefully watching
link |
00:48:28.320
out for the possibility of a single power that might corrupt it. I think a lot of people over
link |
00:48:34.240
the coming centuries will look at that history and like it.
link |
00:48:36.720
It's an interesting thought. And thank you for playing that devil's advocate there. But I think
link |
00:48:47.200
the elites too easily lose touch with the morals, with the best of human nature, and power corrupts.
link |
00:48:57.040
Sure. But their view is the one that determines what happens. Their view may still end up there,
link |
00:49:04.160
even if you or I might criticize it from that point of view. So from a perspective of minimizing
link |
00:49:09.280
human suffering, elites can use topics of the war in Ukraine and climate change and all of those
link |
00:49:18.320
things to sell an idea to the world, with disregard to the amount of suffering
link |
00:49:29.360
their actual actions cause. So you can tell all kinds of narratives. That's the way propaganda
link |
00:49:35.600
works. Hitler really sold the idea that everything Germany was doing was justified: that it was the victim
link |
00:49:43.120
defending itself against the cruelty of the world, and that it was actually trying to bring
link |
00:49:48.560
about a better world. So every power center thinks they're doing good. And so this is
link |
00:49:55.360
the positive of competition, of not having a single power center. This kind of
link |
00:50:02.960
gathering of elites makes me very, very, very nervous. The dinners, the meetings in the closed
link |
00:50:12.400
rooms. I don't know. But remember, we talked about separating our cold analysis of
link |
00:50:19.920
what's likely or possible from what we prefer. And so this is exactly a time for
link |
00:50:24.720
that. We might say, I would recommend we don't go this route of strong world governance,
link |
00:50:30.800
because I would say it'll preclude this possibility of becoming grabby aliens, of filling
link |
00:50:37.200
the next nearest million galaxies for the next billion years with vast amounts of activity
link |
00:50:43.600
and interest and value of life out there. That's the thing we would lose by deciding that we
link |
00:50:50.240
wouldn't expand, that we would stay here and keep our comfortable shared governance.
link |
00:50:55.840
So wait, you think that global governance makes it more likely or less likely that we
link |
00:51:07.360
expand out into the universe? Less. So okay, this is the key. This is the key point. Right. Right.
link |
00:51:13.760
So screw the elites. Wait, do we want to expand? So again, I want to separate my neutral analysis
link |
00:51:23.280
from my evaluation and say, first of all, I have an analysis that tells us this is a key choice
link |
00:51:29.200
that we will face, and that it's a key choice other aliens have faced out there. And it could be that
link |
00:51:33.760
only one in 10 or one in 100 civilizations chooses to expand and the rest of them stay quiet. And
link |
00:51:39.360
that's how it goes out there. And we face that choice too. And it'll happen sometime in the next
link |
00:51:46.000
10 million years, maybe the next 1000. But the key thing to notice from our point of view is that
link |
00:51:52.000
even though you might like our global governance, you might like the fact that we've come together.
link |
00:51:56.000
We no longer have massive wars and we no longer have destructive competition.
link |
00:52:01.440
And that we could continue that. The cost of continuing that would be to prevent
link |
00:52:06.560
interstellar colonization. That is, once you allow interstellar colonization,
link |
00:52:10.160
then you've lost control of those colonies. And whatever they change into, they could come back
link |
00:52:15.200
here and compete with you back here as a result of having lost control. And I think if people value
link |
00:52:21.760
that global governance and global community and regulation and all the things it can do enough,
link |
00:52:28.000
they would then want to prevent interstellar colonization.
link |
00:52:31.440
I want to have a conversation with those people. I believe that both for humanity,
link |
00:52:37.680
for the good of humanity, for what I believe is good in humanity and for expansion, exploration,
link |
00:52:44.880
innovation, distributing the centers of power is very beneficial. So this whole meeting of elites
link |
00:52:51.280
and I've been very fortunate to meet quite a large number of elites. They made me nervous
link |
00:52:59.040
because it's easy to lose touch with reality. I'm nervous about that myself,
link |
00:53:09.040
to make sure that you never lose touch as you get older, wiser,
link |
00:53:17.760
you know how you generally get disrespectful of kids, kids these days.
link |
00:53:21.840
No, the kids are... Okay, but I think we should hear a stronger case for their position. So I'm
link |
00:53:26.960
going to play for the elites. Yes. Well, for the limiting of expansion and for the regulation
link |
00:53:34.880
of behavior. Can I linger on that? Sure. So you're saying those two are connected.
link |
00:53:41.600
So the human civilization and alien civilizations come to a crossroads. They have to decide,
link |
00:53:48.640
do we want to expand or not? And connected to that, do we want to give a lot of power to a
link |
00:53:55.040
central elite? Do we want to distribute the power centers, which is naturally connected to the
link |
00:54:04.160
expansion? When you expand, you distribute the power. If, say over the next thousand years,
link |
00:54:11.200
we fill up the solar system, right? We go out from Earth and we colonize Mars and we change a lot of
link |
00:54:16.240
things. Within a solar system, still everything is within reach. That is, if there's a rebellious
link |
00:54:21.600
colony around Neptune, you can throw rocks at it and smash it and then teach them discipline.
link |
00:54:27.760
How did that work for the British Empire? Central control over the solar system is feasible.
link |
00:54:33.040
But once you let it escape the solar system, it's no longer feasible. But if you have a solar
link |
00:54:37.440
system that doesn't have a central control, maybe broken into a thousand different political units
link |
00:54:41.600
in the solar system, then if any one part of that allows interstellar colonization,
link |
00:54:47.200
it happens. That is, interstellar colonization happens when only one party chooses to do it and
link |
00:54:53.200
is able to do it. And then it's out there. So we can just say, in a world of competition,
link |
00:54:58.640
if interstellar colonization is possible, it will happen and then competition will continue.
link |
00:55:02.640
And that will ensure the continuation of competition into the indefinite future.
link |
00:55:07.520
And competition, we don't know, but competition can take violent forms or productive forms.
link |
00:55:13.280
And the case I was going to make is that I think one of the things that most scares people about
link |
00:55:17.520
competition is not just that it creates Holocausts and death on massive scales,
link |
00:55:22.720
it's that it's likely to change who we are and what we value.
link |
00:55:28.480
Yes. So this is the other thing with power. As we grow, as human civilization grows,
link |
00:55:37.120
becomes multiplanetary, potentially multi solar system, how does that change us, do you think?
link |
00:55:44.320
I think the more you think about it, the more you realize it can change us a lot.
link |
00:55:48.160
So first of all, it's pretty dark, by the way. Well, it's just honest.
link |
00:55:53.440
Right. Well, I was trying to get you there. I think the first thing you should say,
link |
00:55:55.920
if you look at history, just human history over the last 10,000 years,
link |
00:55:59.760
if you really understood what people were like a long time ago, you'd realize they were really
link |
00:56:04.160
quite different. Ancient cultures created people who were really quite different. Most historical
link |
00:56:09.520
fiction lies to you about that. It often offers you modern characters in an ancient world.
link |
00:56:14.640
But if you actually study history, you will see just how different they were and how
link |
00:56:18.640
differently they thought. And they've changed a lot many times and they've changed a lot
link |
00:56:24.640
across time. So I think the most obvious prediction about the future is,
link |
00:56:28.080
even if you only have the mechanisms of change we've seen in the past,
link |
00:56:31.360
you should still expect a lot of change in the future. But we have a lot bigger mechanisms
link |
00:56:35.920
for change in the future than we had in the past. So I have this book called The Age of Em,
link |
00:56:42.480
Work, Love, and Life when Robots Rule the Earth. And it's about what happens if brain
link |
00:56:46.720
emulations become possible. So a brain emulation is where you take an actual human brain and you
link |
00:56:51.520
scan it in fine spatial and chemical detail to create a computer simulation of that brain.
link |
00:56:56.800
And then those computer simulations of brains are basically citizens in a new world. They work
link |
00:57:02.720
and they vote and they fall in love and they get mad and they lie to each other. And this is a whole
link |
00:57:07.680
new world. And my book is about analyzing how that world is different than our world. Basically
link |
00:57:13.120
using competition as my key lever of analysis. That is, if that world remains competitive,
link |
00:57:17.920
then I can figure out how they change in that world, what they do differently than we do.
link |
00:57:21.920
And it's very different. And it's different in ways that are shocking sometimes to many people
link |
00:57:29.600
and ways some people don't like. I think it's an okay world, but I have to admit,
link |
00:57:33.760
it's quite different. And that's just one technology. If we add dozens more technologies
link |
00:57:41.040
and changes into the future, we should just expect it's possible to become very different than who
link |
00:57:46.800
we are. In the space of all possible minds, our minds are a particular architecture, a particular
link |
00:57:52.480
structure, a particular set of habits, and they are only one piece in a vast space of possibilities.
link |
00:57:59.040
The space of possible minds is really huge. So yeah, let's linger on the space of possible minds
link |
00:58:05.520
for a moment, just to sort of humble ourselves about how peculiar our peculiarities are. Like the fact that
link |
00:58:17.200
we like a particular kind of sex and the fact that we eat food through one hole
link |
00:58:24.400
and poop through another hole. And that seems to be a fundamental aspect of life. It's very
link |
00:58:30.400
important to us. And that life is finite in a certain kind of way. We have a meat vehicle.
link |
00:58:38.720
So death is very important to us. I wonder which aspects are fundamental or would be common
link |
00:58:45.040
throughout human history and also throughout history of life on earth and throughout other
link |
00:58:52.320
kinds of lives. Like what is really useful? You mentioned competition seems to be one
link |
00:58:56.960
fundamental thing. I've tried to do analysis of where our distant descendants might go in terms
link |
00:59:02.720
of what are robust features we could predict about our descendants. So again, I have this analysis
link |
00:59:07.680
of sort of the next generation, so the next era after ours. If you think of human history as having
link |
00:59:13.040
three eras so far, there was the forager era, the farmer era, and the industry era, then my
link |
00:59:18.400
attempt in The Age of Em is to analyze the next era after that. And it's very different, but of course
link |
00:59:22.640
there could be more and more eras after that. So analyzing a particular scenario and thinking
link |
00:59:28.080
it through is one way to try to see how different the future could be, but that doesn't give you
link |
00:59:32.720
some sort of sense of what's typical. But I have tried to analyze what's typical. And so I have
link |
00:59:39.920
two predictions I think I can make pretty solidly. One thing is that we know at the moment that humans
link |
00:59:46.640
discount the future rapidly. So we discount the future in terms of caring about consequences,
link |
00:59:53.120
roughly a factor of two per generation. And there's a solid evolutionary analysis why
link |
00:59:58.080
sexual creatures would do that, because basically your descendants only share half of your genes
link |
01:00:03.040
and your descendants are a generation away. So we care about our grandchildren
link |
01:00:08.480
basically a factor of four less, because they're two generations later. So this actually explains
link |
01:00:14.880
typical interest rates in the economy. That is, interest rates are greatly influenced by our
link |
01:00:19.440
discount rates. And we basically discount the future by a factor of two per generation.
link |
01:00:25.920
But that's a side effect of the way our preferences evolved as sexually selected creatures.
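As an aside, the factor-of-two arithmetic here is easy to check. A minimal sketch in Python, assuming a generation length of about 30 years (a number the conversation doesn't specify):

```python
# Implied annual discount rate if we care half as much per generation.
# The 30-year generation length is an assumption for illustration.
generation_years = 30
per_generation_discount = 2.0

# Solve (1 + r)^generation_years = 2 for the annual rate r.
annual_rate = per_generation_discount ** (1 / generation_years) - 1
print(f"implied annual discount rate: {annual_rate:.1%}")  # about 2.3% per year

# Grandchildren are two generations away and share 1/4 of your genes,
# hence the factor-of-four discount mentioned above.
print(f"grandchild discount factor: {per_generation_discount ** 2:.0f}x")
```

A rate of roughly 2 percent per year is indeed in the neighborhood of long-run real interest rates, which is the consistency being pointed at here.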
link |
01:00:33.520
We should expect that in the longer run, creatures will evolve who don't discount the future.
link |
01:00:39.120
They will care about the long run, and they will therefore not neglect it. So for example,
link |
01:00:43.920
for things like global warming or things like that, at the moment, many commenters are sad that
link |
01:00:49.440
basically ordinary people don't seem to care much, market prices don't seem to care much,
link |
01:00:52.880
and for most ordinary people it doesn't really impact them much, because humans don't care much about
link |
01:00:57.360
the long term future. And futurists find it hard to motivate people and to engage people about
link |
01:01:04.240
the long term future because they just don't care that much. But that's a side effect of this
link |
01:01:08.640
particular way that our preferences evolved about the future. And so in the future, they will neglect
link |
01:01:14.640
the future less. And that's an interesting thing that we can predict robustly. Eventually,
link |
01:01:20.080
maybe a few centuries, maybe longer, eventually our descendants will care about the future.
link |
01:01:25.760
Can you speak to the intuition behind that? Is it useful to think more about the future?
link |
01:01:31.600
Right. If evolution rewards creatures for having many descendants, then if you have
link |
01:01:38.480
decisions that influence how many descendants you have, then that would be good if you made
link |
01:01:42.880
those decisions. But in order to do that, you'll have to care about them. You'll have to care about
link |
01:01:46.400
that future. So to push back, that's if you're trying to maximize the number of descendants. But
link |
01:01:52.720
the nice thing about not caring too much about the long term future is you're more likely to take
link |
01:01:57.600
big risks or you're less risk averse. And it's possible that both evolution and just
link |
01:02:04.960
life in the universe rewards the risk takers. Well, we actually have analysis of the ideal
link |
01:02:13.840
risk preferences too. So there's a literature on ideal preferences that evolution should promote.
link |
01:02:21.360
And for example, there's a literature on competing investment funds and what the managers of those
link |
01:02:25.920
funds should care about in terms of risk, various kinds of risks, and in terms of discounting.
link |
01:02:30.880
And so managers of investment funds should basically have logarithmic risk aversion, i.e.,
link |
01:02:39.680
in shared, correlated risk, but be very risk neutral with respect to uncorrelated risk.
link |
01:02:46.000
So that's a feature that's predicted to happen about individual personal choices in biology
link |
01:02:53.440
and also for investment funds. So that's also something we can say about
link |
01:02:57.280
the long run. What's correlated and uncorrelated risk? If there's something that would affect
link |
01:03:03.440
all of your descendants, then if you take that risk, you might have more descendants,
link |
01:03:09.760
but you might have zero. And that's just really bad to have zero descendants. But an uncorrelated
link |
01:03:15.840
risk would be a risk that some of your descendants would suffer, but others wouldn't. And then
link |
01:03:20.240
you have a portfolio of descendants. And so that portfolio insures you against problems with any
link |
01:03:26.000
one of them.
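A minimal sketch of that correlated-versus-uncorrelated distinction, with made-up payoff numbers: a lineage facing fully shared risk compounds at the geometric mean of outcomes, which is exactly what logarithmic preferences maximize, while risk spread independently across many descendants averages out to the arithmetic mean, so risk neutrality pays.

```python
import numpy as np

rng = np.random.default_rng(1)

# Assumed illustrative gamble: each generation, descendant count is
# multiplied by 3.0 or by 0.2, each with probability 0.5.
outcomes = np.array([3.0, 0.2])
probs = np.array([0.5, 0.5])

# Uncorrelated risk: many descendants draw independently, so the lineage
# grows at the arithmetic mean. Risk neutrality is rewarded.
print("arithmetic mean multiplier:", outcomes @ probs)  # 1.6 per generation

# Correlated risk: every descendant shares one draw, so long-run growth
# is the geometric mean, exp(E[log x]), which log preferences maximize.
print("geometric mean multiplier :", np.exp(probs @ np.log(outcomes)))  # ~0.77

# 50 generations of fully correlated draws: the lineage almost surely
# shrinks despite the positive expected value each generation.
draws = rng.choice(outcomes, size=50, p=probs)
print("50-generation growth factor:", draws.prod())
```

The geometric mean below 1 is why a log-utility manager would refuse this bet whenever its risk is shared across the whole lineage or fund.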
link |
01:03:30.640
I like the idea of a portfolio of descendants. And we'll talk about portfolios with your idea of, you briefly mentioned, we'll return there with em. The Age of Em: Work, Love,
link |
01:03:37.200
and Life when Robots Rule the Earth. Em, by the way, is emulated minds. Em
link |
01:03:44.160
is short for emulations. And it's kind of an idea of how we
link |
01:03:49.280
might create artificial minds, artificial copies of minds, or human like intelligences.
link |
01:03:56.480
I have another dramatic prediction I can make about long term preferences.
link |
01:04:00.160
Yes.
link |
01:04:00.960
Which is, at the moment, we reproduce as the result of a hodgepodge of preferences that
link |
01:04:07.280
aren't very well integrated, but sort of in our ancestral environment induce us to reproduce.
link |
01:04:12.240
So we have preferences over being sleepy and hungry and thirsty and wanting to have sex and
link |
01:04:17.920
wanting to be excited, et cetera, right? And so in our ancestral environment,
link |
01:04:22.960
the packages of preferences that we evolved to have did induce us to have more descendants.
link |
01:04:29.280
That's why we're here. But those packages of preferences are not a robust way to
link |
01:04:34.960
produce more descendants. They were tied to our ancestral environment, which is no
link |
01:04:39.520
longer true. So that's one of the reasons we are now having a big fertility decline,
link |
01:04:43.520
because in our current environment, our ancestral preferences are not inducing us to have a lot
link |
01:04:48.080
of kids, which is from evolution's point of view, a big mistake. We can predict that in the longer
link |
01:04:54.400
run, there will arise creatures who just abstractly know that what they want is more descendants.
link |
01:05:00.800
Having that as your direct preference is a very robust way to have more descendants.
link |
01:05:05.760
First of all, your thinking is so clear. I love it. So mathematical. And thank you for thinking
link |
01:05:14.160
so clearly with me and bearing with my interruptions and going on tangents when we go there.
link |
01:05:20.480
So you're just clearly saying that successful long term civilizations will prefer to have
link |
01:05:27.840
descendants, more descendants. Not just prefer, but consciously and abstractly prefer,
link |
01:05:33.360
that is, it won't be the indirect consequence of other preferences. It will just be the thing they
link |
01:05:39.360
know they want. There'll be a president in the future that says, we must have more sex.
link |
01:05:44.640
We must have more descendants and do whatever it takes to do that. Whatever.
link |
01:05:48.880
We must go to the moon and do the other things. Not because they're easy, but because they're hard,
link |
01:05:53.600
but instead of the moon, let's have lots of sex. Okay. But there's a lot of ways to have descendants,
link |
01:05:58.000
right? Right. But so that's the whole point. When the world gets more complicated and there
link |
01:06:02.240
are many possible strategies, it's having that as your abstract preference that will
link |
01:06:06.640
force you to think through those possibilities and pick the one that's most effective.
link |
01:06:10.000
So just to clarify, descendants doesn't necessarily mean the narrow definition of
link |
01:06:15.440
descendants, meaning humans having sex and then having babies. Exactly.
link |
01:06:18.880
You can have artificial intelligence systems in whom you instill some capability of
link |
01:06:26.480
cognition and perhaps even consciousness. You can also create through genetics and biology,
link |
01:06:31.520
clones of yourself, or slightly modified clones, thousands of them. Right.
link |
01:06:38.320
So all kinds of descendants. It could be descendants in the space of ideas too.
link |
01:06:43.200
But somehow we no longer exist in this meat vehicle. It's now just like,
link |
01:06:48.480
whatever the definition of a life form is, you have descendants of those life forms.
link |
01:06:54.320
Yes. And they will be thoughtful about that. They will have thought about what counts as a
link |
01:06:58.800
descendant. And that will be important to them to have the right concept.
link |
01:07:02.240
So the 'they' there is very interesting, who the 'they' are.
link |
01:07:06.000
But the key thing is we're making predictions that I think are somewhat robust about what
link |
01:07:10.160
our distant descendants will be like. Another thing I think you would automatically accept
link |
01:07:13.920
is they will almost entirely be artificial. And I think that would be the obvious prediction
link |
01:07:17.840
about any aliens we would meet. That is, they would long since have given up reproducing biologically.
link |
01:07:23.520
Well, it could be organic or something. It might be squishy and made
link |
01:07:30.000
out of hydrocarbons. But it would be artificial in the sense of made in factories with designs on
link |
01:07:34.720
CAD things, right? Factories with scale economies. So the factories we have made on Earth today
link |
01:07:39.280
have much larger scale economies than the factories in our cells. So the factories in
link |
01:07:42.720
our cells are marvels, but they don't achieve very many scale economies. They're tiny
link |
01:07:47.120
little factories. But they're all factories. Yes. Factories on top of factories. So,
link |
01:07:51.280
the factories that are designed are different than sort of the factories that have
link |
01:07:56.080
evolved. I think the nature of the word design is very interesting to uncover there. But let
link |
01:08:02.640
me, in terms of aliens, let me go, let me analyze your Twitter like it's Shakespeare.
link |
01:08:09.440
Okay. There's a tweet that says: define 'hello' alien civilizations as ones that might
link |
01:08:16.720
in the next million years identify humans as intelligent and civilized, travel to Earth, and
link |
01:08:22.240
say hello by making their presence and advanced abilities known to us. The next 15 polls, this
link |
01:08:29.200
is a Twitter thread. The next 15 polls ask about such hello aliens. And what these polls ask is
link |
01:08:36.640
your Twitter followers what they think those aliens will be like, certain particular qualities.
link |
01:08:43.520
So poll number one is what percent of hello aliens evolved from biological species with
link |
01:08:49.680
two main genders? And you know, the popular vote is above 80%. So most of them have two genders.
link |
01:08:58.480
What do you think about that? I'll ask you about some of these because it's so interesting. It's
link |
01:09:01.600
such an interesting question. It is a fun set of questions. Yes. I like fun set of questions. So
link |
01:09:05.280
the genders, as we look through evolutionary history, what's the usefulness of that as opposed
link |
01:09:09.760
to having just one or like millions? So there's a question in evolution of life on Earth. There are
link |
01:09:16.960
very few species that have more than two genders. There are some, but they aren't very many. But
link |
01:09:22.400
there's an enormous number of species that do have two genders, much more than one. And so there's a
link |
01:09:27.680
literature on why did multiple genders evolve? And that's sort of what's the point of having males
link |
01:09:34.480
and females versus hermaphrodites. So most plants are hermaphrodites. That is, they would mate male
link |
01:09:42.400
female, but each plant can be either role. And then most animals have chosen to split into males
link |
01:09:49.120
and females. And then they're differentiating the two genders. And there's an interesting set
link |
01:09:54.720
of questions about why that happens. Because you can do selection. You basically have
link |
01:09:59.440
one gender compete for the affection of the other, and their sexual partnership creates
link |
01:10:06.880
the offspring. So there's sexual selection. Like at a party, it's nice to have dance
link |
01:10:13.200
partners. And then each one gets to choose based on certain characteristics. And that's an efficient
link |
01:10:18.800
mechanism for adapting to the environment, being successfully adapted to the environment.
link |
01:10:23.200
It does look like there's an advantage: if you have males, then the males can take higher
link |
01:10:29.760
variance. And so there can be stronger selection among the males in terms of weeding out genetic
link |
01:10:34.240
mutations, because the males have higher variance in their mating success.
link |
01:10:39.120
Sure. Okay. Question number two, what percent of hello aliens evolved from land animals as
link |
01:10:45.280
opposed to plants or ocean slash air organisms? By the way, I did recently see
link |
01:10:55.680
that only 10% of species on Earth are in the ocean. So there's a lot more variety on land.
link |
01:11:04.400
There is. It's interesting. So why is that? I don't even, I can't even intuit exactly why that
link |
01:11:10.320
would be. Maybe survival on land is harder. And so you get a lot... The story that I understand is
link |
01:11:16.160
it's about small niches. So speciation can be promoted by having multiple different species.
link |
01:11:23.280
So in the ocean, species are larger. That is, there are more creatures in each species because the
link |
01:11:29.520
ocean environments don't vary as much. So if you're good in one place, you're good in many other
link |
01:11:33.280
places. But on land, especially in rivers, rivers contain an enormous percentage of the
link |
01:11:38.560
kinds of species on land, you see, because they vary so much from place to place. And so
link |
01:11:46.800
a species can be good in one place and then other species can't really compete because
link |
01:11:51.040
they came from a different place where things are different. So it's a remarkable fact actually
link |
01:11:57.200
that speciation promotes evolution in the long run. That is, more evolution has happened on land
link |
01:12:02.640
because there have been more species on land, because each species has been smaller. And
link |
01:12:07.760
that's actually a warning about something called rot that I've thought a lot about, which is one
link |
01:12:12.480
of the problems with even a world government, which is large systems of software today just
link |
01:12:17.280
consistently rot and decay with time and have to be replaced. And that plausibly also is a problem
link |
01:12:22.880
for other large systems, including biological systems, legal systems, regulatory systems.
link |
01:12:27.680
And it seems like large species actually don't evolve as effectively as small ones do.
link |
01:12:33.920
And that's an important thing to notice. And that's actually different from ordinary
link |
01:12:40.880
sort of evolution in economies on earth in the last few centuries, say. On earth,
link |
01:12:48.720
the more technical evolution and economic growth happens in larger integrated cities and nations.
link |
01:12:54.320
But in biology, it's the other way around. More evolution happened in the fragmented species.
link |
01:12:58.720
Yeah. It's such a nuanced discussion, because you can also push back in terms of nations and
link |
01:13:04.880
at least companies. It's like large companies seem to evolve less effectively. They may
link |
01:13:13.280
have more resources, but they don't even have better resilience. When you look at the scale of
link |
01:13:20.800
decades and centuries, it seems like a lot of large companies die.
link |
01:13:24.480
But still large economies do better. Large cities grow better than small cities. Large
link |
01:13:30.640
integrated economies like the United States or the European Union do better than small fragmented
link |
01:13:34.800
ones. That's a very interesting long discussion. But so, most of the people, and obviously votes on
link |
01:13:43.040
Twitter represent the absolute objective truth of things. But an interesting question about oceans
link |
01:13:50.480
is that, okay, remember I told you about how most planets would last for trillions of years
link |
01:13:54.960
and so most life on them would appear later, right? So people have tried to explain why life appeared on Earth by saying,
link |
01:13:59.920
oh, all those planets are going to be unqualified for life because of various problems. That is,
link |
01:14:04.080
they're around smaller stars which last longer and smaller stars have some things like more
link |
01:14:08.320
solar flares, maybe more tidal locking. But almost all of these problems with longer lived
link |
01:14:13.920
planets aren't problems for ocean worlds. And a large fraction of planets out there are ocean
link |
01:14:19.520
worlds. So if life can appear on an ocean world, then that pretty much ensures that these planets
link |
01:14:27.840
that last a very long time could have advanced life, because there's a huge fraction
link |
01:14:32.400
of ocean worlds. So that's actually an open question. So when you say, sorry, when you say
link |
01:14:37.200
life appears, you're kind of saying life and intelligent life. So that's an open question,
link |
01:14:45.200
land versus ocean. And I suppose the question behind the Twitter poll is: a grabby alien civilization
link |
01:14:55.120
that comes to say hello, what's the chance that they first began their early steps,
link |
01:15:01.920
the difficult steps they took, on land? What do you think? Above 80% of
link |
01:15:09.600
people on Twitter think it's very likely on land. What do you think?
link |
01:15:14.880
I think people are discounting ocean worlds too much. That is, I think people tend to assume that
link |
01:15:20.480
whatever we did must be the only way it's possible. And I think people aren't giving enough credit for
link |
01:15:24.880
other possible paths. But dolphins... Waterworld, by the way, people criticize that movie. I love
link |
01:15:30.400
that movie. Kevin Costner can do me no wrong. Okay, next question. What percent of hello aliens
link |
01:15:36.640
once had a nuclear war with greater than 10 nukes fired in anger? So not out of incompetence
link |
01:15:45.280
or as an accident, but intentional firing of nukes. And less than 20% was the most popular vote.
link |
01:15:54.240
That just seems wrong to me. So I wonder... most people think once you get nukes,
link |
01:16:01.120
we're not going to fire them. They believe in the power of game theory. I think they're assuming that
link |
01:16:06.480
if you had a nuclear war, then that would just end civilization for good. I think that's the
link |
01:16:10.240
thinking. That's the main thing. And I think that's just wrong. I think you could rise again
link |
01:16:14.160
after a nuclear war. It might take 10,000 years or 100,000 years, but it could rise again.
link |
01:16:18.880
So what do you think about mutually assured destruction as a force to prevent people from
link |
01:16:24.240
firing nuclear weapons? That's a question that, to a terrifying degree, has been raised anew
link |
01:16:30.800
with what's going on now. Clearly it has had an effect. The question is just how strong an effect for how
link |
01:16:36.800
long? Clearly we have not gone wild with nuclear war and clearly the devastation that you would get
link |
01:16:44.240
if you initiated nuclear war is part of the reasons people have been reluctant to start a war. The
link |
01:16:48.080
question is just how reliably will that ensure the absence of a war? Yeah, the night is still
link |
01:16:53.920
young. Exactly. It's been 70 years or whatever it's been. But what do you think? Do you think
link |
01:17:02.480
we'll see nuclear war in the century? I don't know in the century, but it's the sort of thing
link |
01:17:10.400
that's likely to happen eventually. That's a very loose statement. Okay, I understand. Now this is
link |
01:17:16.080
where I pull you out of your mathematical model and ask a human question. Do you think this
link |
01:17:22.160
particular question... I think we've been lucky that it hasn't happened so far. But what is the
link |
01:17:26.000
nature of nuclear war? Let's think about this. There's dictators. There's democracies.
link |
01:17:37.280
Miscommunication. How do wars start? World War I, World War II. So the biggest datum here is that
link |
01:17:43.120
we've had an enormous decline in major war over the last century. So that has to be taken into
link |
01:17:48.240
account. War is a process that has a very long tail. That is, there are rare, very large wars.
link |
01:17:58.240
So the average war is much worse than the median war because of this long tail. And that makes it
link |
01:18:05.280
hard to identify trends over time. So the median war has clearly gone way down in the last century
link |
01:18:11.200
the median rate of war. But it could be that's because the tail has gotten thicker. And in fact,
link |
01:18:15.760
the average war is just as bad, with most of the damage in the few big wars. So that's the thing
link |
01:18:20.400
we're not so sure about. There's no strong data on wars which, because of the destructive
link |
01:18:29.440
nature of the weapons, kill hundreds of millions of people. There's no data on this. But we can
link |
01:18:36.480
start intuiting... But we can see that the power law... We can do a power law fit to the rate of
link |
01:18:40.800
wars, and it's a power law with a thick tail. So it's one of those things where you should expect
link |
01:18:45.840
most of the damage to be in the few biggest ones. So that's also true for pandemics and
link |
01:18:49.920
a few other things. For pandemics, most of the damage is in the few biggest ones. So the median
link |
01:18:54.560
pandemic so far is less than the average that you should expect in the future.
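To make that median-versus-mean point concrete, here is a small simulation sketch; the tail exponent and the cutoff are assumed illustrative values, not figures fitted to real war or pandemic data.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical severity model: event deaths follow a power law
# (Pareto) with a thick tail. alpha and x_min are assumptions.
alpha = 1.3      # tail exponent; smaller means a thicker tail
x_min = 1_000    # minimum deaths for an event to count

wars = x_min * (1 + rng.pareto(alpha, size=100_000))

print(f"median event deaths: {np.median(wars):,.0f}")
print(f"mean event deaths  : {np.mean(wars):,.0f}")  # far above the median

# Share of all deaths caused by the biggest 1% of events.
top_1_percent = np.sort(wars)[-1_000:]
print(f"share from top 1%  : {top_1_percent.sum() / wars.sum():.0%}")
```

With a tail this thick, the mean sits far above the median and the top 1% of events account for most of the total damage, which is the pattern described above for both wars and pandemics.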
link |
01:18:57.760
But that fitting of data is very questionable because everything you said is correct. The
link |
01:19:06.560
question is like, what can we infer about the future of civilization threatening pandemics
link |
01:19:14.320
or nuclear war from studying the history of the 20th century? So you can't just fit it to the
link |
01:19:23.280
data, the rate of wars and the destructive nature. That's not how nuclear war will happen.
link |
01:19:28.720
Nuclear war happens with two assholes or idiots that have access to a button.
link |
01:19:35.120
Small wars happen that way too. No, I understand that. But that's,
link |
01:19:38.480
it's very important. Small wars aside, it's very important to understand the dynamics,
link |
01:19:42.800
the human dynamics and the geopolitics of the way nuclear war happens in order to predict how we
link |
01:19:49.040
can minimize the chance of... But it is a common and useful intellectual strategy
link |
01:19:55.360
to take something that could be really big but is often very small, and fit the distribution of
link |
01:20:00.880
the data on the small things, of which you have a lot, and then ask: do I believe the big things
link |
01:20:04.720
are really that different? Right? I see. So sometimes it's reasonable to say like,
link |
01:20:08.400
say with tornadoes or even pandemics or something. The underlying process might not be
link |
01:20:13.760
that different. But this is possibly one that is different. It might not be. The fact that mutual
link |
01:20:19.920
assured destruction seems to work to some degree shows you that to some degree it's different
link |
01:20:26.080
than the small wars. So it's a really important question to understand: what are humans capable of?
link |
01:20:39.120
One human, like how many humans on earth? If I give them a button now, say you pressing this
link |
01:20:45.600
button will kill everyone on earth. Everyone, right? How many humans will press that button?
link |
01:20:50.880
I want to know those numbers, like day to day, minute to minute. How many people have that much
link |
01:20:57.600
irresponsibility, evil, incompetence, ignorance, whatever word you want to assign. There's a lot
link |
01:21:04.880
of dynamics to the psychology that leads you to press that button. But how many? My intuition is
link |
01:21:10.080
the number, the more destructive that press of a button, the fewer humans you find and that
link |
01:21:16.480
number gets very close to zero very quickly, especially among people who have access to such a button.
link |
01:21:22.720
But that's perhaps more a hope than a reality. Unfortunately, we don't have good data on this,
link |
01:21:30.960
which is like, how destructive are humans willing to be?
link |
01:21:35.520
So I think for part of this you just have to ask about the time scales you're looking
link |
01:21:39.680
at, right? Right. So if you say, if you look at the history of war, we've had a lot of wars
link |
01:21:44.560
pretty consistently over many centuries. So if I ask, if you ask, will we have a nuclear war in
link |
01:21:49.920
the next 50 years, I might say, well, probably not. If I say 500 or 5000 years, if the same sort
link |
01:21:56.320
of risks are underlying and they just continue, then you have to add that up over time and think
link |
01:22:00.880
the risk is getting a lot larger the longer a timescale we're looking at.
link |
01:22:04.400
But okay, let's generalize nuclear war because what I was more referring to is something that
link |
01:22:09.920
kills more than 20% of humans on earth and injures or makes the other 80% suffer horribly,
link |
01:22:28.480
survive but suffer. That's what I was referring to. So when you look at 500 years from now,
link |
01:22:32.640
that might not be nuclear war, that might be something else, right? That's that kind of,
link |
01:22:36.640
has that destructive effect. And I don't know, these feel like novel questions in the history
link |
01:22:45.280
of humanity. I just don't know. I think since nuclear weapons, there have been engineered
link |
01:22:53.120
pandemics, for example, robotics, nanobots. It just seems like a real new possibility that
link |
01:23:03.680
we have to contend with, and we don't have good models for, from my perspective.
link |
01:23:08.160
So if you look on, say, the last 1000 years or 10,000 years, we could say we've seen a certain
link |
01:23:13.280
rate at which people are willing to make big destruction in terms of war.
link |
01:23:19.520
If you're willing to project that data forward, then I think if you want to ask over periods of
link |
01:23:23.920
thousands or tens of thousands of years, you would have a reasonable data set. So the key question
link |
01:23:28.320
is what's changed lately? Yes. Okay. And so a big question of which I've given a lot of thought to,
link |
01:23:35.280
what are the major changes that seem to have happened in culture and human attitudes over
link |
01:23:39.840
the last few centuries and what's our best explanation for those so that we can project
link |
01:23:43.520
them forward into the future? And I have a story about that, which is the story that we have been
link |
01:23:50.240
drifting back toward forager attitudes in the last few centuries as we get rich. So the idea is we
link |
01:23:56.960
spent a million years being a forager and that was a very sort of standard lifestyle that we know
link |
01:24:03.120
a lot about. Foragers sort of live in small bands, they make decisions cooperatively, they share food,
link |
01:24:09.680
they don't have much property, etc. And humans liked that. And then 10,000 years ago, farming
link |
01:24:17.200
became possible, but it was only possible because we were plastic enough to really change our culture.
link |
01:24:21.920
Farming styles and cultures are very different. They have slavery, they have war, they have
link |
01:24:26.160
property, they have inequality, they have kings, they stay in one place instead of wandering,
link |
01:24:31.440
they don't have as much diversity of experience or food, they have more disease. This farming life
link |
01:24:36.960
is just very different. But humans were able to sort of introduce conformity and religion and all
link |
01:24:42.320
sorts of things to become just a very different kind of creature as farmers. Farmers are just
link |
01:24:45.920
really different than foragers in terms of their values in their lives. But the pressures that made
link |
01:24:51.280
foragers into farmers were in part mediated by poverty. Farmers were poor and if they deviated
link |
01:24:57.200
from the farming norms that people around them supported, they were quite at risk of starving
link |
01:25:01.520
to death. And then in the last few centuries, we've gotten rich. And as we've gotten rich,
link |
01:25:08.800
the social pressures that turned foragers into farmers have become less persuasive to us.
link |
01:25:15.600
So for example, a farming young woman who was told, if you have a child out of wedlock,
link |
01:25:19.600
you and your child may starve, that was a credible threat. She would see actual examples around her
link |
01:25:25.200
to make that believable threat. Today, if you say to a young woman, you shouldn't have a child out
link |
01:25:30.320
of wedlock, she will see other young women around her doing okay that way. We're all rich enough
link |
01:25:34.720
to be able to afford that sort of a thing. And therefore, she's more inclined often to go with
link |
01:25:40.000
her inclinations or sort of more natural inclinations about such things rather than to be
link |
01:25:44.800
pressured to follow the official farming norms that say you shouldn't do that sort of thing. And
link |
01:25:49.920
all through our lives, we have been drifting back toward forager attitudes because we've been getting
link |
01:25:55.520
rich. And so, aside from at work, which is an exception, elsewhere I think this explains
link |
01:26:02.080
trends toward less slavery, more democracy, less religion, less fertility, more promiscuity,
link |
01:26:08.080
more travel, more art, more leisure, fewer work hours, all of these trends are basically explained
link |
01:26:15.680
by becoming more forager like. And much science fiction celebrates this. Star Trek or the Culture
link |
01:26:21.680
novels, people like this image that we are moving toward this world where basically like foragers
link |
01:26:26.640
were peaceful, we share, we make decisions collectively, we have a lot of free time,
link |
01:26:31.200
we are into art. So forager is a word, and it's a loaded word because it's connected to
link |
01:26:42.960
what life was actually like at that time. As you mentioned, we sometimes don't do a
link |
01:26:49.280
good job of telling accurately what life was like back then. But you're saying if it's not exactly
link |
01:26:55.200
like foragers, it rhymes in some fundamental way. You also said peaceful. Is it obvious that a forager
link |
01:27:02.400
with a nuclear weapon would be peaceful? I don't know if that's 100% obvious. So we know, again,
link |
01:27:10.400
we know a fair bit about what foragers' lives were like. The main sort of violence they had would be
link |
01:27:15.760
sexual jealousy. They were relatively promiscuous. And so there'd be a lot of jealousy. But they did
link |
01:27:20.400
not have organized wars with each other. That is, they were at peace with their neighboring forager
link |
01:27:25.280
bands. They didn't have property in land or even in people. They didn't really have marriage.
link |
01:27:31.040
And so they were in fact peaceful. When you think about large scale wars, they don't start
link |
01:27:37.920
large scale wars. They didn't have coordinated large scale wars in the ways chimpanzees do.
link |
01:27:41.600
Chimpanzees do have wars between one tribe of chimpanzees and others, but human foragers
link |
01:27:46.080
do not. Farmers returned to that, of course, the more chimpanzee like styles. Well, that's a hopeful
link |
01:27:51.600
message. If we could return real quick to the Hello Aliens Twitter thread, one of them is really
link |
01:27:59.840
interesting about language. What percent of Hello Aliens would be able to talk to us in our language?
link |
01:28:05.280
This is the question of communication. It actually gets to the nature of language.
link |
01:28:10.080
It also gets to the nature of how advanced you expect them to be. So I think some people see
link |
01:28:19.040
that we have advanced over the last thousands of years, and we aren't reaching any sort of limit.
link |
01:28:25.200
And so they tend to assume it could go on forever. And I actually tend to think that within, say,
link |
01:28:30.880
10 million years, we will sort of max out on technology. We will sort of learn everything
link |
01:28:36.400
that's feasible to know for the most part. And then obstacles to understanding would more be
link |
01:28:42.720
about cultural differences, like ways in which different places had just chosen to do things
link |
01:28:47.600
differently. And so then the question is, is it even possible to communicate across some cultural
link |
01:28:55.680
differences? And I might think, I could imagine some maybe advanced aliens who've become so weird
link |
01:29:01.520
and different from each other, they can't communicate with each other. But we're probably
link |
01:29:05.040
pretty simple compared to them. So I would think, sure, if they wanted to, they could communicate
link |
01:29:11.440
with us. So it's the simplicity of the recipient. Just to push back, let's explore the
link |
01:29:19.200
possibility where that's not the case. Can we communicate with ants? I find that this idea
link |
01:29:29.680
that we're not very good at communicating in general. Oh, you're saying, all right, I see.
link |
01:29:36.080
You're saying once you get orders of magnitude better at communicating.
link |
01:29:39.840
Once they had maxed out on all, you know, communication technology in general, and they
link |
01:29:43.600
just understood in general how to communicate with lots of things, and had done that for millions
link |
01:29:48.000
of years. But you have to be able to, this is so interesting, as somebody who cares a lot about
link |
01:29:52.400
empathy and imagining how other people feel. Communication requires empathy, meaning
link |
01:30:01.440
you have to truly understand how the other person, the other organism sees the world.
link |
01:30:08.720
It's like a four dimensional species talking to a two dimensional species. It's not as trivial,
link |
01:30:15.200
to me at least, as it might first seem. So let me reverse my position a little,
link |
01:30:20.880
because I'll say, well, the whole Hello Aliens question really combines two different scenarios
link |
01:30:28.240
that we're slipping over. So one scenario would be that the Hello Aliens would be like grabby
link |
01:30:34.560
Aliens. They would be just fully advanced. They would have been expanding for millions of years.
link |
01:30:38.480
They would have a very advanced civilization. And then they would finally be arriving here,
link |
01:30:42.960
you know, after a billion years perhaps of expanding, in which case they're going to be
link |
01:30:46.640
crazy advanced, at some maximal level. But the Hello Aliens question is also about aliens we might meet soon,
link |
01:30:54.000
which might be sort of UFO aliens, and UFO aliens probably are not grabby aliens.
link |
01:31:01.440
How do you get here if you're not a grabby alien? Well, they would have to be able to travel.
link |
01:31:07.440
Oh, but they would not be expansive. So if it's a road trip, it doesn't count as
link |
01:31:14.160
grabby. So we're talking about expanding the colony, the comfortable colony.
link |
01:31:19.520
The question is, if UFOs, some of them are aliens, what kind of aliens would they be?
link |
01:31:26.480
This is sort of the key question you have to ask in order to try to interpret that scenario.
link |
01:31:32.240
The key fact we would know is that they are here right now, but the universe around us is not full
link |
01:31:38.960
of an alien civilization. So that says right off the bat that they chose not to allow
link |
01:31:47.840
massive expansion of a grabby civilization. Is it possible that they chose it, but we just don't
link |
01:31:53.920
see them yet? These are the stragglers, the journeymen. So the timing coincidence is,
link |
01:32:00.320
it's almost surely if they are here now, they are much older than us. They are many millions of
link |
01:32:05.280
years older than us. And so they could have filled the galaxy in that last millions of years if they
link |
01:32:11.360
had wanted to. That is, they couldn't just be right at the edge; that's very unlikely. Most
link |
01:32:17.280
likely they would have been around waiting for us for a long time. They could have come here any
link |
01:32:21.120
time in the last millions of years, and they've been waiting around for this, or they just chose
link |
01:32:25.520
to come recently. But the timing coincidence would be crazy unlikely that they just happened to be
link |
01:32:31.200
able to get here, say in the last 100 years. They would no doubt have been able to get here
link |
01:32:36.560
far earlier than that. Again, we don't know. So this is about fringe things like UFO sightings on Earth.
link |
01:32:41.440
We don't know if this kind of increase in sightings has anything to do with actual visitation.
link |
01:32:46.480
I'm just talking about the timing. They arose at some point in space time.
link |
01:32:52.000
And it's very unlikely that that was just at a point that they could just barely get here recently.
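The arithmetic behind this timing-coincidence point is simple. With assumed round numbers (the conversation doesn't pin these down):

```python
# If the visitors could first have reached Earth at a uniformly random
# point within, say, a 300-million-year head start (an assumption),
# the chance that moment falls within the last century is tiny.
head_start_years = 300e6   # assumed age advantage of the older civilization
window_years = 100         # roughly the era of modern UFO reports

print(f"P(just barely arriving now) ~ {window_years / head_start_years:.1e}")
# ~3e-07: almost surely they could have come here long before now.
```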
link |
01:32:57.600
Almost surely they could have been here much earlier. And throughout the stretch of several
link |
01:33:03.680
billion years that Earth existed, they could have been here often. Exactly. So they could have
link |
01:33:08.240
therefore filled the galaxy a long time ago if they had wanted to. Let's push back on that.
link |
01:33:13.760
The question to me is, isn't it possible that the expansion of a civilization is much harder than the
link |
01:33:21.120
travel? The sphere of the reachable is different than the sphere of the colonized. So isn't it
link |
01:33:33.360
possible that the sphere of places where the stragglers go, the different people that journey
link |
01:33:39.440
out, the explorers, is much, much larger and grows much faster than the civilization? So in which
link |
01:33:47.120
case they would visit us. There's a lot of visitors, the grad students of the civilization.
link |
01:33:52.880
They're exploring, they're collecting the data, but we're not yet going to see them.
link |
01:33:58.560
And by yet, I mean across millions of years. The time delay between when the first thing
link |
01:34:06.560
might arrive and then when colonists could arrive en masse and do a massive amount of work is cosmologically
link |
01:34:13.760
short. In human history, of course, sure, there might be a century between that,
link |
01:34:18.560
but a century is just a tiny amount of time on the scales we're talking about.
link |
01:34:23.040
So this is, in computer science, ant colony optimization. It's true for ants. So it's like
link |
01:34:28.880
when the first ant shows up, it's likely, and if there's anything of value, it's likely the other
link |
01:34:34.400
ants will follow quickly. Yeah. Relatively short. It's also true that traveling over very long
link |
01:34:40.480
distances, probably one of the main ways to make that feasible is that you land somewhere,
link |
01:34:46.240
you colonize a bit, you create new resources that can then allow you to go farther.
link |
01:34:50.480
Many short hops as opposed to a giant long journey.
link |
01:34:53.200
Exactly. Those hops require that you are able to start a colonization of sorts
link |
01:34:58.240
along those hops, right? You have to be able to stop somewhere, make it into a waystation
link |
01:35:02.640
such that you can then support your moving farther.
link |
01:35:05.520
So what do you think of, there's been a lot of UFO sightings. What do you think about
link |
01:35:11.280
those UFO sightings? And what do you think if any of them are of extraterrestrial origin
link |
01:35:20.400
and we don't see giant civilizations out in the sky, how do you make sense of that then?
link |
01:35:27.440
I want to do some clearing of throats, which people like to do on this topic.
link |
01:35:31.200
Right? They want to make sure you understand they're saying this and not that.
link |
01:35:35.280
Right? So I would say the analysis needs both a prior and a likelihood.
link |
01:35:42.720
So the prior is what are the scenarios that are all plausible in terms of what we know about the
link |
01:35:48.880
universe? And then the likelihood is the particular actual sightings, like how hard are those to
link |
01:35:54.880
explain through various means? I will establish myself as somewhat of an expert on the prior. I
link |
01:36:01.200
would say my studies and the things I've studied make me an expert and I should stand up and have
link |
01:36:06.080
an opinion on that and be able to explain it. The likelihood, however, is not my area of expertise.
link |
01:36:11.760
That is, I'm not a pilot, I don't do atmospheric studies; I haven't studied in detail
link |
01:36:18.000
the various kinds of atmospheric phenomena or whatever that might be used to explain the
link |
01:36:22.080
particular sightings. I can just say from my amateur stance, the sightings look damn puzzling.
link |
01:36:28.800
They do not look easy to dismiss. The attempts I've seen to easily dismiss them seem to me to
link |
01:36:33.760
fail. It seems like this is pretty puzzling, weird stuff that deserves an expert's attention.
link |
01:36:39.840
So, in terms of asking what the likelihood is: an analogy I would make
link |
01:36:44.560
is a murder trial. On average, if we say what's the chance any one person murdered another person
link |
01:36:50.960
as a prior probability, maybe one in a thousand people get murdered, maybe each person has a
link |
01:36:55.120
thousand people around them who could plausibly have done it. So the prior probability of a murder
link |
01:36:58.960
is one in a million. But we allow murder trials because often evidence is sufficient to overcome
link |
01:37:04.800
a one in a million prior, because the evidence is often strong enough, right? My guess, rough guess,
link |
01:37:11.440
for the UFOs as aliens scenario, for some of them, is that the prior is roughly one in a thousand,
link |
01:37:16.560
much higher than the usual murder trial, plenty high enough that strong physical evidence could
link |
01:37:23.920
put you over the top to think it's more likely than not. But I'm not an expert on that physical
link |
01:37:28.720
evidence. I'm going to leave that part to someone else. I'm going to say the prior is pretty high.
link |
01:37:33.440
This isn't a crazy scenario. So then I can elaborate on where my prior comes from.
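The structure of that murder-trial analogy is just Bayes' rule in odds form. A minimal sketch, where the likelihood ratios are invented for illustration:

```python
def posterior(prior: float, likelihood_ratio: float) -> float:
    """Posterior probability from a prior and a likelihood ratio."""
    prior_odds = prior / (1 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1 + posterior_odds)

# Murder trial: a one-in-a-million prior overcome by strong evidence
# (assume the evidence is 10 million times likelier under guilt).
print(f"murder trial: {posterior(1e-6, 10_000_000):.2f}")   # ~0.91

# UFOs as aliens: the rough one-in-a-thousand prior discussed here.
# Evidence only ~2,000 times likelier under the alien hypothesis
# (an invented figure) would already push past 50%.
print(f"ufo scenario: {posterior(1e-3, 2_000):.2f}")        # ~0.67
```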
link |
01:37:38.000
What scenario could make most sense of this data? My scenario to make sense of it has
link |
01:37:44.160
two main parts. First is panspermia siblings. So panspermia is the
link |
01:37:52.000
hypothesized process by which life might have arrived on Earth from elsewhere.
link |
01:37:57.920
And a plausible time for that. I mean, it would have to happen very early in Earth history because
link |
01:38:01.760
we see life early in history. And a plausible time could have been during the stellar nursery
link |
01:38:06.640
where the sun was born with many other stars in the same close proximity with lots of rocks
link |
01:38:13.360
flying around able to move things from one place to another. A rock with life on it, from some
link |
01:38:21.520
planet with life, came into that stellar nursery. It plausibly could have seeded many
link |
01:38:27.120
planets in that stellar nursery all at the same time. They're all born at the same time in the
link |
01:38:31.040
same place pretty close to each other. Lots of rocks flying around. So a panspermia scenario
link |
01:38:36.960
would then create siblings, i.e., there would be, say, a few thousand other planets out there.
link |
01:38:44.560
So after the nursery forms, it drifts, it separates. They drift apart. And so out there in the galaxy,
link |
01:38:49.920
there would now be a bunch of other stars all formed at the same time. And we can actually spot
link |
01:38:53.760
them in terms of their spectrum. And they would have then started on the same path of life as we
link |
01:39:00.080
did with that life being seeded. But they would move at different rates. And most likely, most of
link |
01:39:07.200
them would never reach an advanced level before the deadline. But maybe one other did. And maybe it
link |
01:39:14.000
did before us. So if they did, they could know all of this and they could go searching for their
link |
01:39:20.080
siblings. That is, they could look in the sky for the other stars with the spectrum that
link |
01:39:24.320
matches the spectrum that came from this nursery. They could identify their sibling stars in the
link |
01:39:29.680
galaxy, the thousand of them. And those would be of special interest to them because they would
link |
01:39:34.160
think, well, life might be on those. And they could go looking for them.
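As a toy illustration of that sibling search: stars born in the same nursery share a chemical fingerprint, so you can look for stars whose abundances match the Sun's within some tolerance. All catalog values below are invented, and real surveys would use many more elements:

```python
import numpy as np

# Invented abundance vectors, e.g. [Fe/H], [Mg/Fe], [Ba/Fe].
sun = np.array([0.00, 0.05, -0.02])
catalog = {
    "star A": np.array([0.01, 0.04, -0.01]),
    "star B": np.array([-0.40, 0.20, 0.10]),   # different birth cluster
    "star C": np.array([0.02, 0.06, -0.03]),
}

tolerance = 0.05  # assumed matching threshold
siblings = [name for name, abundances in catalog.items()
            if np.max(np.abs(abundances - sun)) < tolerance]
print("candidate sibling stars:", siblings)  # ['star A', 'star C']
```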
link |
01:39:39.360
That's just such a brilliant mathematical, philosophical, physical, biological idea
link |
01:39:48.960
of panspermia siblings, because we all kind of started a similar time in this local pocket
link |
01:39:56.400
of the universe. And so that changes a lot of the math.
link |
01:40:02.640
So that would create this correlation between when advanced life might appear,
link |
01:40:06.080
no longer just random independent places in spacetime. There'd be this cluster, perhaps.
link |
01:40:10.800
And that allows interaction between elements of the cluster. Yes.
link |
01:40:14.960
Non grabby alien civilizations, kind of primitive alien civilizations like us
link |
01:40:22.240
with others. And they might be a little bit ahead. That's so fascinating.
link |
01:40:26.240
Well, they would probably be a lot ahead. So the puzzle is, if they happen before us,
link |
01:40:33.600
they probably happen hundreds of millions of years before us.
link |
01:40:36.400
But less than a billion.
link |
01:40:38.080
Less than a billion, but still plenty of time that they could have become grabby and filled
link |
01:40:43.680
the galaxy and gone beyond. So the fact is they chose not to become grabby. That would have to
link |
01:40:49.520
be the interpretation. If we have panspermia. So plenty of time to become grabby, you said.
link |
01:40:54.160
So yes, they should be. And they chose not to.
link |
01:40:57.920
Are we sure about this? So again, 100 million years is enough.
link |
01:41:01.840
100 million. So I told you before that I said within 10 million years, our descendants will
link |
01:41:07.280
become grabby or not. And they'll have that choice. Okay.
link |
01:41:10.880
And so they were clearly more than 10 million years earlier than us. So they chose not to.
link |
01:41:16.240
But still go on vacation, look around. So just not grabby.
link |
01:41:20.400
If they chose not to expand, that's going to have to be a rule they set to not allow any
link |
01:41:25.280
part of themselves to do it. Like if they let any little ship fly away with the ability to
link |
01:41:31.600
create a colony, the game's over, then the universe becomes grabby from their origin
link |
01:41:38.000
with this one colony. So in order to prevent their civilization being grabby, they have to
link |
01:41:42.560
have a rule they enforce pretty strongly that no part of them can ever try to do that.
link |
01:41:46.960
Through a global authoritarian regime or through something that's internal to that,
link |
01:41:52.800
meaning it's part of the nature of life that it doesn't want to expand, like a political officer in
link |
01:41:59.120
the brain or whatever. Yes, there's something in human nature that prevents you from wanting to, or
link |
01:42:06.480
like alien nature that as you get more advanced, you become lazier and lazier in terms of exploration
link |
01:42:13.360
and expansion. So I would say they would have to have enforced a rule against expanding. And that
link |
01:42:19.680
rule would probably make them reluctant to let people travel very far. Any vacation trip
link |
01:42:26.080
far away could risk an expansion from this vacation trip. So they would probably have a
link |
01:42:30.080
pretty tight lid on just allowing any travel out from their origin in order to enforce this rule.
link |
01:42:36.720
But then we also know, well, they would have chosen to come here. So clearly,
link |
01:42:41.120
they made an exception from their general rule to say, okay, but an expedition to Earth,
link |
01:42:46.880
that should be allowed. It could be an intentional exception or an incompetent exception. But if
link |
01:42:53.200
incompetent, then they couldn't maintain this over 100 million years, this policy of not allowing
link |
01:42:58.160
any expansion. So we have to see that they have succeeded; they didn't just have a policy to
link |
01:43:02.400
try, they succeeded over 100 million years in preventing the expansion. That's substantial
link |
01:43:09.120
competence. Let me think about this. So you don't think there could be a barrier in 100 million
link |
01:43:14.160
years, like a technological barrier to becoming expansionary?
link |
01:43:24.240
Imagine the Europeans had tried to prevent anybody from leaving Europe to go to the New
link |
01:43:30.320
World. And imagine what it would have taken to make that happen over 100 million years.
link |
01:43:36.240
Yeah, it's impossible. They would have to have very strict guards at the borders.
link |
01:43:41.840
I just don't know. They're saying, no, you can't go.
link |
01:43:43.920
But just to clarify, you're not suggesting that's actually possible.
link |
01:43:48.320
I am suggesting it's possible. I don't know, maybe it's my silly human brain.
link |
01:43:55.840
Maybe it's a brain that values freedom, but no matter how much force,
link |
01:44:01.600
no matter how much censorship or control and so on, I just don't know how you can keep people from
link |
01:44:09.360
exploring into the mysterious. You're thinking of people; we're talking aliens.
link |
01:44:13.360
So remember, there's a vast space of different possible social creatures they could have evolved
link |
01:44:17.040
from, different cultures they could be in, different kinds of threats. I mean, there are many things
link |
01:44:22.160
that you talked about that most of us would feel very reluctant to do. This isn't one of those.
link |
01:44:26.560
Okay, so if the UFO sightings represent alien visitors, how the heck are they getting here under
link |
01:44:34.240
the panspermia siblings scenario? So panspermia siblings is one part of the scenario, that's
link |
01:44:40.160
where they came from. And from that, we can conclude they had this rule against expansion,
link |
01:44:44.400
and they've successfully enforced that. That also creates a plausible agenda for why they would be
link |
01:44:50.080
here, that is, to enforce that rule on us. That is, if we go out and expand, then we have defeated
link |
01:44:56.000
the purpose of this rule they set up. So they would be here to convince us to not expand.
link |
01:45:03.600
Convince in quotes. Right, through various mechanisms. So obviously, one thing we conclude
link |
01:45:08.480
is they didn't just destroy us. That would have been completely possible. So the fact that they're
link |
01:45:13.120
here and we are not destroyed means that they chose not to destroy us. They have some degree of
link |
01:45:18.880
empathy or whatever their morals are that would make them reluctant to just destroy us. They would
link |
01:45:24.720
rather persuade us than destroy their brethren. And so there's a difference in arrival
link |
01:45:31.120
and observation. They may have been observing for a very long time. Exactly. And they arrive not to try
link |
01:45:37.120
but to ensure that we don't become grabby. Because we can see
link |
01:45:48.240
that they did not become grabby, they must have enforced a rule against that. And they are therefore
link |
01:45:53.040
here to do the same. That's a plausible interpretation of why they would risk this expedition when they
link |
01:45:57.120
clearly don't risk very many expeditions over this long period to allow this one exception.
link |
01:46:01.280
Because otherwise, if they don't, we may become grabby. And they could have just destroyed us,
link |
01:46:05.840
but they didn't. And they're closely monitoring the technological advancement of our civilization,
link |
01:46:10.800
like, nuclear weapons is one thing. All right, cool. That might have less to do with
link |
01:46:15.440
nuclear weapons and more with nuclear energy. Maybe they're monitoring fusion closely.
link |
01:46:20.800
Like, how clever are these apes getting? So no doubt they have a button that if we get
link |
01:46:26.320
too uppity or risky, they can push the button and ensure that we don't expand. But they'd rather do
link |
01:46:31.760
it some other way. So now, that explains why they're here and why they aren't out there.
link |
01:46:36.640
There's another thing that we need to explain. There's another key piece of data we need to explain
link |
01:46:39.680
about UFOs if we're going to have a hypothesis that explains them. And this is something many
link |
01:46:43.840
people have noticed, which is they had two extreme options they could have chosen and
link |
01:46:49.840
didn't choose. They could have either just remained completely invisible; clearly an advanced
link |
01:46:54.640
civilization could have been completely invisible. There's no reason they need to fly around and
link |
01:46:58.320
be noticed. They could just be in orbit in dark satellites that are completely invisible to us
link |
01:47:03.120
watching whatever they want to watch. That would be well within their abilities. That's one thing
link |
01:47:07.120
they could have done. The other thing they could do is just show up and land on the White House lawn,
link |
01:47:11.600
as they say, and shake hands and make themselves really obvious. They could have done either of
link |
01:47:16.000
those and they didn't do either of those. That's the next thing you need to explain about UFOs
link |
01:47:20.800
as aliens. Why would they take this intermediate approach of hanging out near the edge of visibility
link |
01:47:25.360
with somewhat impressive mechanisms, but not walking up and introducing themselves,
link |
01:47:29.600
nor just being completely invisible? Okay. A lot of questions there. So one,
link |
01:47:35.120
do you think it's obvious where the White House lawn is? Well, it's obvious where there are
link |
01:47:40.480
concentrations of humans that you could go up to and introduce yourself. But are humans the most interesting
link |
01:47:44.080
thing about Earth? Yeah. Are you sure about this? Because if they're worried about an expansion,
link |
01:47:51.360
then they would be worried about a civilization that could be capable of expansion. Obviously,
link |
01:47:54.880
humans are the civilization on Earth. That's by far the closest to being able to expand.
link |
01:47:59.600
I just don't know if aliens obviously see humans, like the individual humans,
link |
01:48:11.360
like the organic meat vehicles, as the center of focus for observing life on a planet.
link |
01:48:19.520
They're supposed to be really smart and advanced. This shouldn't be that hard for them.
link |
01:48:23.600
But I think we're actually the dumb ones because we think humans are the important things,
link |
01:48:28.080
but it could be our ideas. It could be something about our technologies.
link |
01:48:32.720
But that's mediated with us. It's correlated with us.
link |
01:48:34.560
No, we make it seem like it's mediated by us humans. But the focus for alien civilizations might be
link |
01:48:44.560
the AI systems or the technologies themselves. That might be the organism.
link |
01:48:48.560
Okay. So the human is the food, the source of the organism that's under observation.
link |
01:48:59.520
But if what they wanted to have close contact with was something close to humans,
link |
01:49:03.360
then they would be contacting those. And we would just incidentally see them, but we would still see them.
link |
01:49:08.000
But isn't it possible, taking their perspective, isn't it possible that they would want to interact
link |
01:49:15.040
with some fundamental aspect that they're interested in without interfering with it?
link |
01:49:20.960
And that's actually a very, no matter how advanced you are, it's very difficult to do.
link |
01:49:25.280
But that's puzzling. The prototypical UFO observation is a shiny, big object in the sky
link |
01:49:35.520
that has very rapid acceleration and no apparent surfaces for using air to manipulate at speed.
link |
01:49:48.080
The question is why that? Again, for example, if they just wanted to talk to our computer
link |
01:49:53.280
systems, they could move some sort of a little probe that connects to a wire and reads and
link |
01:49:59.760
sends bits there. They don't need a shiny thing flying in the sky.
link |
01:50:02.640
But I think they would be looking for the right way to communicate,
link |
01:50:08.880
the right language to communicate. Everything you just said, looking at the computer systems,
link |
01:50:14.160
I mean, that's not a trivial thing. Coming up with a signal that us humans would not freak out too
link |
01:50:21.440
much about, but also understand, might not be that trivial. How would you talk to things?
link |
01:50:25.600
Well, so the not-freak-out part is another interesting constraint. So again, I said,
link |
01:50:29.360
like the two obvious strategies are just to remain completely invisible and watch,
link |
01:50:32.960
which would be quite feasible, or to just directly interact, that is, come out and be really very
link |
01:50:38.160
direct, right? I mean, there's big things that you can see around. There's big cities,
link |
01:50:42.240
there's aircraft carriers, there's lots of, if you wanted to just find a big thing and come
link |
01:50:46.480
right up to it and like tap it on the shoulder or whatever, that would be quite feasible,
link |
01:50:50.240
and they're not doing that. So my hypothesis is, one of the other questions there was,
link |
01:50:58.320
do they have a status hierarchy? And I think most animals on earth, who are social animals,
link |
01:51:03.120
have status hierarchies. And they would reasonably presume that we have a status hierarchy.
link |
01:51:09.520
And take me to your leader. Well, I would say their strategy is to be impressive and sort of
link |
01:51:16.000
get us to see them at the top of our status hierarchy. That's how, for example,
link |
01:51:23.520
we domesticate dogs, right? We convince dogs we're the leader of their pack, right? And we
link |
01:51:29.280
domesticate many animals that way. We just swap in at the top of their status hierarchy,
link |
01:51:34.000
and we say, we're your top status animal, so you should do what we say, you should follow our lead.
link |
01:51:39.600
So the idea would be, they are going to get us to do what they want by being top status.
link |
01:51:48.480
You know, all through history, kings and emperors, etc, have tried to impress their citizens and
link |
01:51:52.800
other people by having the bigger palace, the bigger parade, the bigger crown, and diamonds,
link |
01:51:57.200
right, whatever, maybe building a bigger pyramid, etc. It's a very well established trend,
link |
01:52:02.320
to just be high status by being more impressive than the rest.
link |
01:52:05.760
To push back, when there's a power differential of several orders of magnitude,
link |
01:52:11.520
an asymmetry of power, I feel like that status hierarchy no longer applies. It's like mimetic
link |
01:52:16.560
theory. Most emperors are several orders of magnitude more powerful than any
link |
01:52:21.280
member of their empire. Let's increase that by even more. So like, if I'm interacting with ants,
link |
01:52:29.600
I no longer feel like I need to establish my power with ants. I actually want to
link |
01:52:35.440
lessen, I want to lower myself to the ants. I want to become the lowest possible ant,
link |
01:52:42.240
so that they would welcome me. So I'm less concerned about them worshiping me. I'm more
link |
01:52:47.200
concerned about them welcoming me. Well, it is important that you be non threatening and that
link |
01:52:51.840
you be local. So I think, for example, if the aliens had done something really big in the sky,
link |
01:52:55.600
you know, 100 light years away, that would be there, not here. And that could seem threatening.
link |
01:53:01.200
So I think their strategy to be the high status would have to be to be visible, but to be here
link |
01:53:05.760
and non threatening. I just don't know if it's obvious how to do that. Like, take your own
link |
01:53:10.240
perspective: you see a planet with relatively intelligent, complex structures being formed,
link |
01:53:18.240
like, yeah, life forms; we could see this on Titan, or something like that, the moon,
link |
01:53:24.240
you know, right, Europa, you start to see not just primitive bacterial life, but multicellular
link |
01:53:30.880
life. And it seems to form some very complicated cellular colonies, structures that are
link |
01:53:37.760
dynamic, there's a lot of stuff going on, some gigantic cellular automata type of
link |
01:53:44.080
construct. How do you make yourself known to them in an impressive fashion without destroying it?
link |
01:53:54.400
Like, we know how to destroy potentially, right? So if you go touch stuff, you're likely to hurt
link |
01:54:00.480
it, right? There's a good risk of hurting something by getting too close, touching it, and
link |
01:54:04.160
interacting, right? Yeah, like landing on the White House lawn. Right. So the claim is that
link |
01:54:10.080
their current strategy of hanging out at the periphery of our vision and just being very
link |
01:54:14.720
clearly physically impressive with very clear physically impressive abilities is at least
link |
01:54:20.800
a plausible strategy they might use to impress us and convince us, sort of, that they're at the top of our
link |
01:54:26.080
status hierarchy. And I would say if they came closer, not only would they risk hurting us
link |
01:54:31.760
in ways that they couldn't really understand, but more plausibly, they would reveal things about
link |
01:54:36.400
themselves we would hate. So if you look at how we treat other civilizations on earth and other
link |
01:54:41.600
people, we are generally, you know, interested in foreigners and people from other distant lands.
link |
01:54:47.600
And we are generally interested in their varying customs, etc., until we find out that they
link |
01:54:52.480
do something that violates our moral norms. And then we hate them. And these are aliens for God's
link |
01:54:58.080
sakes, right? There's just going to be something about them that we hate. Maybe they eat babies,
link |
01:55:02.400
who knows what it is, but something they don't think is offensive, but that they think we might
link |
01:55:07.440
find offensive. And so they would be risking a lot by revealing a lot about themselves. We
link |
01:55:11.760
would find something we hated. Interesting. But do you resonate at all with mimetic theory where
link |
01:55:18.160
like, we only feel this way about things that are very close to us. So aliens are sufficiently
link |
01:55:22.880
different to where we'll be, like, terrified or fascinated, but not...
link |
01:55:27.680
Right. But if they want to be at the top of our status hierarchy to get us to follow them,
link |
01:55:31.360
they can't be too distant. They have to be close enough that we would see them that way.
link |
01:55:36.160
Pretend to be close enough, right. And not reveal much; that mystery, that old Clint Eastwood cowboy thing:
link |
01:55:43.040
say less. We're clever enough that we can figure out their agenda just from the fact
link |
01:55:47.760
that they're here. If we see that they're here, we can figure out, oh, they want us not to expand.
link |
01:55:51.520
And look, they are this huge power and they're very impressive. And a lot of us don't want to
link |
01:55:56.160
expand. So that could easily tip us over the edge. We already wanted to not expand. We already
link |
01:56:02.240
wanted to be able to regulate and have a central community. And here are these very advanced smart
link |
01:56:07.440
aliens who have survived for a hundred million years. And they're telling us not to expand either.
link |
01:56:14.720
This is brilliant. I love this so much. So returning to panspermia siblings,
link |
01:56:21.360
just to clarify one thing: in that framework, who originated it, who planted it?
link |
01:56:31.120
Would it be a grabby alien civilization that planted the siblings? Or no?
link |
01:56:35.760
The simple scenario is that life started on some other planet billions of years ago.
link |
01:56:41.840
Yes. And it went through part of the stages of evolution toward advanced life, but not all the way
link |
01:56:46.480
to advanced life. And then some rock hit it, grabbing a piece of it, and that rock
link |
01:56:52.080
drifted for maybe a million years until it happened upon the stellar nursery, where it then
link |
01:56:57.920
seeded many stars. And something about that life, without being super advanced, was nevertheless
link |
01:57:03.760
resilient to the harsh conditions of space. There's some graphs that I've been impressed by
link |
01:57:08.720
that show sort of the level of genetic information in various kinds of life over the history of Earth.
link |
01:57:14.160
And basically, we are now more complex than the earlier life, but the earlier life was
link |
01:57:19.680
still pretty damn complex. And so if you actually project this log graph back in history,
link |
01:57:24.640
it looks like it was many billions of years ago when you get down to zero. So plausibly,
link |
01:57:29.520
you could say there was just a lot of evolution that had to happen before you get
link |
01:57:32.720
to the simplest life we've ever seen; the earliest life in the history of life on Earth was still pretty damn
link |
01:57:36.400
complicated. And so that's always been this puzzle. How could life get to this enormously
link |
01:57:42.880
complicated level in the short period it seems to have at the beginning of Earth's history? It's only
link |
01:57:50.160
300 million years at most before it appeared, and it was already really complicated at that point.
link |
01:57:55.920
So panspermia allows you to explain that complexity by saying, well, it's been another
link |
01:58:01.040
five billion years on another planet going through lots of earlier stages where it was
link |
01:58:05.360
working its way up to the level of complexity you see at the beginning of Earth.
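(The extrapolation Hanson describes can be made concrete: fit a line to log-complexity versus time and see where it crosses zero. A minimal Python sketch follows; the data points are invented placeholders standing in for the published graphs, so only the shape of the argument, not the numbers, should be taken seriously.)

    import numpy as np

    # (age in billions of years ago, log10 of a genome-complexity measure) -- invented
    ages_bya = np.array([3.5, 2.0, 1.0, 0.5, 0.0])
    log_complexity = np.array([5.0, 6.5, 7.5, 8.0, 8.5])

    # Least-squares fit: log_complexity = slope * (-age) + intercept
    slope, intercept = np.polyfit(-ages_bya, log_complexity, 1)

    # Extrapolate back to zero complexity: the implied origin of the lineage.
    origin_bya = intercept / slope
    print(f"implied origin: {origin_bya:.1f} billion years ago")  # ~8.5 here,
    # i.e. well before Earth formed (~4.5 bya), which is the shape of the argument.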
link |
01:58:08.720
Well, we'll try to talk about other ideas of the origin of life. But let me return to UFO
link |
01:58:15.280
sightings. Are there other explanations possible, outside of panspermia siblings,
link |
01:58:20.480
that can explain no grabby aliens in the sky and yet alien arrival on Earth?
link |
01:58:28.640
Well, the other categories of explanations that most people will use are, first of all,
link |
01:58:33.680
just mistakes like you're confusing something ordinary for something mysterious, right?
link |
01:58:40.640
Or some sort of secret organization like our government is secretly messing with us and trying
link |
01:58:46.400
to do false flag ops or whatever. They're trying to convince the Russians or the Chinese
link |
01:58:51.920
that there might be aliens and scare them into not attacking or something, right?
link |
01:58:56.480
Because in the history of World War II, say, the US government did all these big fake operations
link |
01:59:01.520
where they were faking a lot of big things in order to mess with people. So that's a possibility.
link |
01:59:06.880
The government's been lying and faking things and paying people to lie about what they saw,
link |
01:59:12.880
etc. That's a plausible set of explanations for the range of sightings seen. And another
link |
01:59:19.440
explanation people offer is some other hidden organization on Earth. There's some secret
link |
01:59:24.240
organization somewhere that has much more advanced capabilities than anybody's given
link |
01:59:28.000
it credit for. For some reason, it's been kept secret. I mean, they all sound somewhat implausible,
link |
01:59:33.920
but again, we're looking for maybe one in a thousand sort of priors. The question is,
link |
01:59:39.600
could they be at that level of plausibility? Can we just linger on this? First of all, you've written,
link |
01:59:47.360
talked about, thought about so many different topics. You're an incredible mind. And I just
link |
01:59:54.400
thank you for sitting down today. I'm almost at a loss of which places to explore. But let me,
link |
02:00:00.160
on this topic, ask about conspiracy theories. Because you've written about institutions and
link |
02:00:06.640
authorities. What, this is a bit of a therapy session, but what do we make of conspiracy theories?
link |
02:00:19.280
The phrase itself is pushing you in a direction. So clearly, in history, we've had many large
link |
02:00:27.040
coordinated keepings of secrets, say the Manhattan Project. And there were hundreds of thousands of
link |
02:00:32.000
people working on that over many years, but they kept it a secret. Clearly, many large military
link |
02:00:37.840
operations have kept things secret over even decades with many thousands of people involved.
link |
02:00:45.280
So clearly, it's possible to keep some things secret over time periods. But the more people
link |
02:00:52.640
you involve and the more time you assume and the less centralized the organization or the
link |
02:00:58.880
less discipline they have, the harder it gets to believe. But we're just trying to calibrate,
link |
02:01:02.880
basically, in our minds, which kind of secrets can be kept by which groups over what time periods
link |
02:01:07.600
for what purposes, right? But let me, I don't have enough data. So I'm somebody, I hang out with
link |
02:01:15.680
people and I love people. I love all things really. And I just, I think that most people,
link |
02:01:22.880
even the assholes, have the capacity to be good and they're beautiful and I enjoy them.
link |
02:01:28.400
So the kind of data my brain, whatever the chemistry of my brain is that sees the beautiful
link |
02:01:33.280
things is maybe collecting a subset of data that doesn't allow me to intuit the competence that
link |
02:01:42.560
humans are able to achieve in constructing conspiracies. So for example, one thing
link |
02:01:50.720
that people often talk about is like intelligence agencies, this like broad thing they say,
link |
02:01:55.120
the CIA, the FSB, the British intelligence. I've been fortunate or unfortunate
link |
02:02:01.360
enough to never have gotten a chance, that I know of, to talk to any member of those intelligence agencies,
link |
02:02:08.320
nor like take a peek behind the curtain or the first curtain, I don't know how many levels
link |
02:02:15.280
of curtains there are. And so I can't intuit it. In my interactions with government, I was
link |
02:02:21.040
funded by DOD and DARPA and I've interacted, been to the Pentagon, like with all due respect
link |
02:02:28.160
to my friends, lovely friends in government and there are a lot of incredible people,
link |
02:02:33.920
but there is a very giant bureaucracy that sometimes suffocates the ingenuity of the human
link |
02:02:40.880
spirit, is one way I can put it. Meaning, it's difficult for me to imagine
link |
02:02:47.520
extreme competence at a scale of hundreds or thousands of human beings. Now that doesn't
link |
02:02:53.280
mean much; that's my very anecdotal data of the situation. And so I try to build up my intuition
link |
02:03:00.640
about centralized systems of government, how much conspiracy is possible, how much the
link |
02:03:09.040
intelligence agencies or some other source can generate sufficiently robust propaganda that
link |
02:03:16.560
controls the populace. If you look at World War II, as you mentioned, there were extremely
link |
02:03:23.040
powerful propaganda machines on the side of Nazi Germany, on the side of the Soviet Union,
link |
02:03:30.160
on the side of the United States and all these different mechanisms. Sometimes they control
link |
02:03:36.800
the free press through social pressures. Sometimes they control the press through the threat of
link |
02:03:44.480
violence as you do in authoritarian regimes. Sometimes it's like deliberately the dictator
link |
02:03:50.160
like writing the news, the headlines and literally announcing it. And something about human psychology
link |
02:03:59.040
forces you to embrace the narrative and believe the narrative and at scale that becomes reality
link |
02:04:07.520
when the initial spark was just a propaganda thought in a single individual's mind. So I don't,
link |
02:04:13.760
I can't necessarily intuit what's possible, but I'm skeptical of the power of human institutions
link |
02:04:23.280
to construct conspiracies that cause suffering at scale, especially in this modern age
link |
02:04:30.720
when information is becoming more and more accessible to the populace. Anyway, that's,
link |
02:04:35.600
I don't know if you can elucidate. Cause suffering at scale? But of course, say during wartime,
link |
02:04:40.560
the people who are managing the various conspiracies, like D-Day or the Manhattan Project,
link |
02:04:45.200
they thought that their conspiracy was avoiding harm rather than causing harm. So if you can get
link |
02:04:51.520
a lot of people to think that supporting the conspiracy is helpful, then a lot more might do
link |
02:04:58.080
that. And there's just a lot of things that people just don't want to see. So if you can make your
link |
02:05:03.840
conspiracy the sort of thing that people wouldn't want to talk about anyway, even if they knew about
link |
02:05:07.680
it, you're, you know, most of the way there. So I have learned, over the years, many things
link |
02:05:13.600
that most ordinary people should be interested in, but somehow don't know even though the data's
link |
02:05:18.080
been very widespread. So, you know, I have this book, The Elephant in the Brain, and one of the
link |
02:05:22.000
chapters there is on medicine. And basically, most people seem ignorant of the very basic fact that
link |
02:05:28.240
when we do randomized trials where we give some people more medicine than others, the people
link |
02:05:32.960
who get more medicine are not healthier. Just overall, in general: induce somebody
link |
02:05:38.800
to get more medicine by just giving them more budget to buy medicine, say, not a specific
link |
02:05:43.680
medicine, just the whole category. And you would think that would be something most people should
link |
02:05:48.160
know about medicine. You might even think that would be a conspiracy theory to think that would
link |
02:05:52.640
be hidden. But in fact, most people never learn that fact. So just to clarify, just a general
link |
02:05:59.200
high level statement, the more medicine you take, the less healthy you are.
link |
02:06:04.480
Randomized experiments don't find that. They do not find that more medicine makes you more
link |
02:06:09.440
healthy. There's just no connection. Oh, in randomized experiments, there's no relationship
link |
02:06:14.960
between more medicine and health. So it's not a negative relationship, it's just no relationship.
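(A toy simulation can show what "no relationship" looks like in such a randomized experiment. This is not the RAND Health Insurance Experiment data or any real study, just invented numbers under an assumed zero average treatment effect, to illustrate the kind of comparison those experiments make.)

    import random
    random.seed(0)

    TRUE_EFFECT = 0.0   # assumed average health effect of extra medicine (the null)
    N = 10_000          # people per arm

    def health_outcome(gets_extra_medicine: bool) -> float:
        baseline = random.gauss(70, 10)                   # individual variation
        effect = TRUE_EFFECT if gets_extra_medicine else 0.0
        return baseline + effect + random.gauss(0, 5)     # measurement noise

    treated = [health_outcome(True) for _ in range(N)]    # randomly given more budget
    control = [health_outcome(False) for _ in range(N)]

    diff = sum(treated) / N - sum(control) / N
    print(f"mean(treated) - mean(control) = {diff:+.2f}")  # hovers near zero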
link |
02:06:19.680
Right. And so the conspiracy theories would say that the businesses that sell you medicine don't
link |
02:06:27.120
want you to know that fact. And then you're saying that part of this is also that people just
link |
02:06:32.800
don't want to know. They just don't want to know. And so they don't learn this. So, you know,
link |
02:06:37.040
I've lived in the Washington area for several decades now reading the Washington Post regularly.
link |
02:06:41.600
Every week there was a special, you know, section on health and medicine. This fact was never mentioned
link |
02:06:47.040
in that section of the paper in all the 20 years I read that. So do you think there is some truth
link |
02:06:51.920
to this caricatured blue pill, red pill, where most people don't want to know the truth?
link |
02:06:58.000
Not most people. There are many things about which people don't want to know certain kinds of
link |
02:07:01.760
truths, that is, bad-looking truths, truths that are discouraging, truths that sort of take away
link |
02:07:07.840
the justification for things they feel passionate about. Do you think that's a bad aspect of human
link |
02:07:13.680
nature? Is that something we should try to overcome? Well, as we discussed, my first priority is to
link |
02:07:20.400
just tell people about it, to do the analysis and the cold facts of what's actually happening,
link |
02:07:24.560
and then to try to be careful about how we can improve. So our book, The Elephant in the Brain,
link |
02:07:28.800
coauthored with Kevin Simler, is about our hidden motives in everyday life. And our first
link |
02:07:34.080
priority there is just to explain to you what are the things that you are not looking at that you
link |
02:07:38.480
are reluctant to look at. And many people try to take that book as a self help book where they're
link |
02:07:43.360
trying to improve themselves and make sure they look at more things. And that often goes badly
link |
02:07:47.760
because it's harder to actually do that than you think. Yeah. And so we at least want you to know
link |
02:07:53.840
that this truth is available if you want to learn about it. It's the Nietzsche: if you gaze long
link |
02:07:58.640
into the abyss, the abyss gazes into you. Let's talk about this elephant in the brain. Amazing book.
link |
02:08:05.840
The elephant in the room is, quote, an important issue that people are reluctant to acknowledge
link |
02:08:10.720
or address; a social taboo. The elephant in the brain is an important but unacknowledged feature
link |
02:08:16.560
of how our minds work; an introspective taboo. You describe selfishness and self deception
link |
02:08:25.280
as the core, or some of the core, elephants, elephant offspring, in the brain:
link |
02:08:33.440
selfishness and self deception. All right. Can you explain why these are
link |
02:08:41.520
the taboos in our brain that we don't want to acknowledge to ourselves?
link |
02:08:47.760
Your conscious mind, the one that's listening to me that I'm talking to at the moment,
link |
02:08:52.880
you like to think of yourself as the president or king of your mind, ruling over all that you see
link |
02:08:58.960
issuing commands that are immediately obeyed. You are instead better understood as the press secretary
link |
02:09:06.240
of your brain. You don't make decisions, you justify them to an audience. That's what your
link |
02:09:12.800
conscious mind is for. You watch what you're doing and you try to come up with stories that
link |
02:09:20.160
explain what you're doing so that you can avoid accusations of violating norms. Humans, compared
link |
02:09:26.560
to most other animals, have norms and this allows us to manage larger groups with our
link |
02:09:31.520
morals and norms about what we should or shouldn't be doing. This is so important to us that we
link |
02:09:37.600
needed to be constantly watching what we were doing in order to make sure we had a good story
link |
02:09:42.400
to avoid norm violations. Many norms are about motives. If I hit you on purpose, that's a big
link |
02:09:48.080
violation; if I hit you accidentally, that's okay. I need to be able to explain why it was an accident
link |
02:09:52.960
and not on purpose. Where does that need come from? For your own self preservation?
link |
02:09:58.400
Right. So humans have norms and we have the norm that if we see anybody violating a norm,
link |
02:10:03.040
we need to tell other people and then coordinate to make them stop and punish them for violating.
link |
02:10:09.200
Such punishments are strong enough and severe enough that we each want to avoid
link |
02:10:14.400
being successfully accused of violating norms. So for example, hitting someone on purpose is a big
link |
02:10:21.200
clear norm violation. If we do it consistently, we may be thrown out of the group and that would
link |
02:10:25.040
mean we would die. So we need to be able to convince people we are not going around hitting
link |
02:10:30.560
people on purpose. If somebody happens to be at the other end of our fist and their face
link |
02:10:36.080
connects, that was an accident and we need to be able to explain that.
link |
02:10:41.600
And similarly for many other norms humans have, we are serious about these norms and we don't want
link |
02:10:47.280
people to violate them; if we find them violating, we're going to accuse them. But many norms have
link |
02:10:50.960
a motive component and so we are trying to explain ourselves and make sure we have a good motive
link |
02:10:56.800
story about everything we do, which is why we're constantly trying to explain what we're doing
link |
02:11:02.160
and that's what your conscious mind is doing. It is trying to make sure you've got a good motive
link |
02:11:07.040
story for everything you're doing and that's why you don't know why you really do things.
link |
02:11:11.040
What you know is what the good story is about why you've been doing things.
link |
02:11:15.360
And that's the self deception and you're saying that there is a machine, the actual dictator
link |
02:11:21.200
is selfish, and then you're just the press secretary who desperately doesn't want to get
link |
02:11:25.680
fired and is justifying all the decisions of the dictator. And that's the self deception.
link |
02:11:32.160
Right. Now most people actually are willing to believe that this is true in the abstract. So
link |
02:11:37.040
our book has been classified as psychology and it was reviewed by psychologists and the basic
link |
02:11:41.760
way that psychology referees and reviewers responded was to say, this is well known.
link |
02:11:47.040
Most people accept that there's a fair bit of self deception. But they don't want to accept it
link |
02:11:50.640
about themselves directly. Well, they don't want to accept it about the particular topics that we
link |
02:11:54.560
talk about. So people accept the idea in the abstract that they might be self deceived or
link |
02:11:59.200
that they might not be honest about various things. But that hasn't penetrated into the
link |
02:12:03.600
literatures where people are explaining particular things like why we go to school,
link |
02:12:07.280
why we go to the doctor, why we vote, etc. So our book is mainly about 10 areas of life and
link |
02:12:13.120
explaining about in each area what our actual motives there are. And people who study those
link |
02:12:19.520
things have not admitted that hidden motives are explaining those particular areas.
link |
02:12:25.440
They haven't taken the leap from theoretical psychology to actual public policy.
link |
02:12:30.080
Exactly.
link |
02:12:30.800
And economics and all that kind of stuff. Well, let me just linger on this
link |
02:12:33.920
and bring up my old friends, Sigmund Freud and Carl Jung. So how vast is this
link |
02:12:45.360
landscape of the unconscious mind, the power and the scope of the dictator? Is it only dark there?
link |
02:12:53.520
Is there some light? Is there some love?
link |
02:12:56.720
The vast majority of what's happening in your head, you're unaware of. So in a literal sense,
link |
02:13:02.080
the unconscious, the aspects of your mind that you're not conscious of is the overwhelming
link |
02:13:07.520
majority. But that's just true in a literal engineering sense. Your mind is doing lots
link |
02:13:12.800
of low level things and you just can't be consciously aware of all that low level stuff.
link |
02:13:16.640
But there's plenty of room there for lots of things you're not aware of.
link |
02:13:21.120
But can we try to shine a light on the things we're unaware of specifically? Now again, staying
link |
02:13:26.800
with the philosophical psychology side for a moment. Can you shine the light on the Jungian
link |
02:13:32.080
shadow? What's going on there? What is this machine like? What level of thoughts are happening there?
link |
02:13:40.320
Is it something that we can even interpret? If we somehow could visualize it, is it something
link |
02:13:46.400
that's human interpretable? Or is it just a chaos of monitoring different systems in the body,
link |
02:13:52.160
making sure you're happy, making sure you're fed, all those kinds of basic forces that form
link |
02:13:58.400
abstractions on top of each other and they're not introspective at all.
link |
02:14:01.760
We humans are social creatures. Plausibly, being social is the main reason we have these
link |
02:14:06.320
unusually large brains. Therefore, most of our brain is devoted to being social.
link |
02:14:11.920
And so the things we are very obsessed with and constantly paying attention to are,
link |
02:14:16.080
how do I look to others? What would others think of me if they knew these various things they might
link |
02:14:22.480
learn about me? So that's close to being fundamental to what it means to be human,
link |
02:14:26.480
is caring what others think. Right. To be trying to present a story that would look okay to
link |
02:14:32.640
others. But we're constantly thinking, what do other people think? So let me ask you
link |
02:14:37.520
this question then about you, Robin Hanson, who in many places, sometimes for fun, sometimes as a
link |
02:14:46.560
basic statement of principle, likes to disagree with what the majority of people think.
link |
02:14:54.880
So how do you explain, how are you deceiving yourself in this task? And how are you being
link |
02:15:02.560
self-deceived? Like, why is the dictator manipulating you inside your head to be so critical? Like,
link |
02:15:08.960
there's norms. Why do you want to stand out in this way? Why do you want to challenge the
link |
02:15:14.640
norms in this way? Almost by definition, I can't tell you what I'm deceiving myself about.
link |
02:15:19.520
But the more practical strategy that's quite feasible is to ask about what are typical things
link |
02:15:24.640
that most people deceive themselves about and then to own up to those particular things.
link |
02:15:29.280
Sure. What's a good one? So for example, I can very much acknowledge that I would like to be
link |
02:15:36.720
well thought of. Yes. That I would be seeking attention and glory and praise from my intellectual
link |
02:15:46.800
work and that that would be a major agenda driving my intellectual attempts. So if there
link |
02:15:54.640
were topics that other people would find less interesting, I might be less interested in those
link |
02:15:59.040
for that reason. For example, I might want to find topics where other people are interested,
link |
02:16:02.960
and I might want to go for the glory of finding a big insight rather than a small one.
link |
02:16:11.840
And maybe one that was especially surprising. That's also, of course, consistent with some
link |
02:16:17.520
more ideal concept of what an intellectual should be. But most intellectuals are relatively
link |
02:16:24.160
risk averse. They are in some local intellectual tradition and they are adding to that and they
link |
02:16:30.320
are conforming to the sort of usual assumptions and usual accepted beliefs and
link |
02:16:35.680
practices of a particular area so that they can be accepted in that area and treated as part of
link |
02:16:42.400
the community. But you might think for the purpose of the larger intellectual project of
link |
02:16:49.120
understanding the world better, people should be less eager to just add a little bit to some
link |
02:16:54.800
tradition and they should be looking for what's neglected between the major traditions and major
link |
02:16:58.960
questions. They should be looking for assumptions maybe we're making that are wrong. They should
link |
02:17:02.560
be looking at things that are very surprising, things that
link |
02:17:08.000
you would have thought a priori unlikely, such that once you are convinced of it, you find it to be
link |
02:17:12.480
very important and a big update. So you could say that one motivation I might have is to be less
link |
02:17:23.440
motivated to be sort of comfortably accepted into some particular intellectual community and more
link |
02:17:28.800
willing to just go for these more fundamental long shots that should be very important if you
link |
02:17:35.440
could find them. Which, if you can find them, would get you appreciated across a larger
link |
02:17:44.880
number of people across the longer time span of history. So like maybe the small local community
link |
02:17:52.720
will say you suck, you must conform, but the larger community will see the brilliance of you
link |
02:18:00.640
breaking out of the cage of the small conformity into a larger cage. There's always a bigger
link |
02:18:06.960
cage and then you'll be remembered by more. Yeah, also that explains your choice of colorful
link |
02:18:13.840
shirt that looks great against a black background, so you definitely stand out. Now of course,
link |
02:18:19.760
you could say, well, you could get all this attention by making false claims of dramatic
link |
02:18:25.280
improvement, and then wouldn't that be much easier than actually working through all the details
link |
02:18:30.080
to make true claims? Why not? Let me ask the press secretary, why not? So of course you
link |
02:18:36.640
spoke several times about how much you value truth and the pursuit of truth. That's a very
link |
02:18:40.960
nice narrative. Hitler and Stalin also talked about the value of truth. Do you worry when you
link |
02:18:48.400
introspect, as broadly as all humans might, that it becomes a drug? Being a martyr, being the person
link |
02:19:02.560
who points out that the emperor wears no clothes, even when the emperor is obviously dressed,
link |
02:19:10.160
just to be the person who points out that the emperor is wearing no clothes. Do you think about
link |
02:19:15.200
that? So I think the standards you hold yourself to are dependent on the audience you have in mind.
link |
02:19:26.160
So if you think of your audience as relatively easily fooled or relatively gullible, then you
link |
02:19:32.560
won't bother to generate more complicated, deep arguments and structures and evidence to persuade
link |
02:19:40.400
somebody who has higher standards because why bother? You can get away with something much
link |
02:19:45.920
easier. And of course, if you are a salesperson or you make money on sales, then you don't need
link |
02:19:51.600
to convince the top few percent of the most sharp customers. You can just go for the bottom 60% of
link |
02:19:57.440
the most gullible customers and make plenty of sales. So I think one of the main ways
link |
02:20:05.840
intellectuals vary is in who their audience is in their mind. Who are they trying to impress? Is it
link |
02:20:11.120
the people down the hall? Is it the people who are reading their Twitter feed? Is it their parents?
link |
02:20:16.240
Is it their high school teacher? Or is it Einstein and Freud and Socrates? So I think those of us
link |
02:20:25.760
who are especially arrogant, who especially think that we're really big shots or have a chance at being
link |
02:20:32.000
really big shots, are naturally going to pick the big shot audience that we can. We're going
link |
02:20:36.560
to be trying to impress Socrates and Einstein. Is that why you're hanging out with Tyler Cowen a
link |
02:20:40.960
lot and trying to convince him yourself? From the point of view of just making money or having
link |
02:20:47.920
sex or other sorts of things, this is misdirected energy. Trying to impress the very highest
link |
02:20:55.680
quality minds, that's such a small sample and they can't do that much for you anyway.
link |
02:20:59.360
Yeah. So I might well have had more ordinary success in life, be more popular, invited to
link |
02:21:05.920
more parties, make more money if I had targeted a lower tier set of intellectuals with the standards
link |
02:21:13.520
they have. But for some reason, I decided early on that Einstein was my audience or people like him
link |
02:21:20.560
and I was going to impress them. Yeah. I mean, you pick your set of motivations,
link |
02:21:26.080
you know, convincing, impressing Tyler Cowen is not going to help you get laid. Trust me,
link |
02:21:31.120
I tried. All right. What are some notable sort of effects of the elephant in the brain in everyday
link |
02:21:43.040
life? So you mentioned, when we tried to apply that to economics, to public policy,
link |
02:21:48.960
so when we think about medicine, education, all those kinds of things, what are some things that
link |
02:21:53.200
Well, the key thing is medicine is much less useful health-wise than you think. So, you know,
link |
02:21:59.280
if you were focused on your health, you would care a lot less about it. And if you were focused
link |
02:22:04.000
on other people's health, you would also care a lot less about it. But if medicine is, as we suggest,
link |
02:22:09.280
more about showing that you care and letting other people show that they care about you,
link |
02:22:13.120
then a lot of priority on medicine can make sense. So that was our very earliest discussion
link |
02:22:18.400
in the podcast. We were talking about whether you should give people a lot of medicine when
link |
02:22:22.800
it's not very effective. And then the answer then is, well, if that's the way that you show
link |
02:22:27.360
that you care about them and you really want them to know you care, then maybe that's what
link |
02:22:32.080
you need to do if you can't find a cheaper, more effective substitute. So if we actually just pause
link |
02:22:37.280
on that for a little bit, how do we start to untangle the full set of self deception happening
link |
02:22:44.720
in the space of medicine? So we have a method that we use in our book that is what I recommend
link |
02:22:49.840
for people to use in all these sorts of topics. The straightforward method is first, don't look
link |
02:22:54.480
at yourself. Look at other people. Look at broad patterns of behavior in other people. And then
link |
02:23:00.800
ask, what are the various theories we could have to explain these patterns of behavior? And then
link |
02:23:05.920
just do the simple matching, which theory better matches the behavior they have. And the last step
link |
02:23:11.680
is to assume that's true of you too. Don't assume you're an exception. If you happen to be an
link |
02:23:17.680
exception, that won't go so well. But nevertheless, on average, you aren't very well positioned to
link |
02:23:22.160
judge if you're an exception. So look at what other people do. Explain what other people do and
link |
02:23:27.440
assume that's you too. But also, in the case of medicine, there's several parties to consider.
link |
02:23:34.320
So there's the individual person that's receiving the medicine. There's the doctors that are prescribing
link |
02:23:38.880
the medicine. There's drug companies that are selling drugs. There are governments that have
link |
02:23:45.280
regulations; there are lobbyists. So you can build up a network of categories of humans in this.
link |
02:23:51.760
And they each play their role. So how do you sort of analyze the system at a
link |
02:24:00.400
system scale versus at the individual scale? So it turns out that in general, it's usually much
link |
02:24:07.040
easier to explain producer behavior than consumer behavior. That is, the drug companies or the
link |
02:24:13.200
doctors have relatively clear incentives to give the customers whatever they want. And
link |
02:24:19.280
certainly say governments in democratic countries have the incentive to give the voters what they
link |
02:24:24.240
want. So that focuses your attention on the patient and the voter in this equation and saying,
link |
02:24:31.760
what do they want? They would be driving the rest of the system. Whatever they want,
link |
02:24:37.040
the other parties are willing to give them in order to get paid. So now we're looking for
link |
02:24:42.720
puzzles in patient and voter behavior. What are they choosing and why do they choose that?
link |
02:24:51.120
And how much exactly? And then we can explain that potentially again, returning to the producer
link |
02:24:56.160
by the producer being incentivized to manipulate the decision making processes of the voter and
link |
02:25:01.360
the consumer. Well, now in almost every industry, producers are in general happy to lie and exaggerate
link |
02:25:08.000
in order to get more customers. This is true of auto repair as much as human body repair and
link |
02:25:12.400
medicine. So the differences between these industries can't be explained by the willingness of the
link |
02:25:17.760
producers to give customers what they want or to do various things. We have to again
link |
02:25:22.400
go to the customers. Why are customers treating body repair differently than auto repair?
link |
02:25:31.040
Yeah, and that potentially requires a lot of thinking, a lot of data collection and potentially
link |
02:25:36.400
looking at historical data too, because things don't just happen overnight. Over time there's
link |
02:25:41.040
trends. In principle it does, but it's actually a lot easier than you might think. I think
link |
02:25:45.600
the biggest limitation is just the willingness to consider alternative hypotheses. So many of the
link |
02:25:50.640
patterns that you need to rely on are actually pretty obvious, simple patterns. You just have to
link |
02:25:55.280
notice them and ask yourself, how can I explain those? Often you don't need to look at the most
link |
02:26:00.720
subtle, most difficult statistical evidence that might be out there. The simplest patterns are
link |
02:26:07.120
often enough. All right, so there's a fundamental statement about self deception in the book.
link |
02:26:12.640
There's the application of that, like we just did in medicine. Can you steelman the argument that
link |
02:26:20.240
many of the foundational ideas in the book are wrong?
link |
02:26:25.200
Meaning, there are two claims you just made, one of which is that it can be a lot simpler than it looks.
link |
02:26:31.920
Can you steelman the case that it's case by case? It's always super complicated. Like it's a complex
link |
02:26:38.240
system that's very difficult to have a simple model of. It's very difficult to dissect.
link |
02:26:43.120
And the other one is that the human brain isn't just about self deception. There's a lot of
link |
02:26:53.040
motivation at play and we are able to really introspect our own minds. And like
link |
02:26:58.640
what's on the surface of the conscious mind is actually quite a good representation of what's going on
link |
02:27:03.920
in the brain and you're not deceiving yourself. You're actually able to deeply think
link |
02:27:09.360
about where your mind stands and what you think about the world. And it's less about impressing
link |
02:27:14.000
people and more about being a free thinking individual.
link |
02:27:18.160
So when a child tries to explain why they don't have their homework assignment,
link |
02:27:24.320
they are sometimes inclined to say, the dog ate my homework. They almost never say the dragon ate
link |
02:27:30.960
my homework. The reason is the dragon is a completely implausible explanation. Almost always
link |
02:27:38.320
when we make excuses for things, we choose things that are at least in some degree plausible. It
link |
02:27:44.080
could perhaps have happened. That's an obstacle for any explanation of a hidden motive or
link |
02:27:51.280
a hidden feature of human behavior. If people are pretending one thing while really
link |
02:27:57.520
doing another, they're usually going to pick as a pretense something that's somewhat plausible.
link |
02:28:03.040
That's going to be an obstacle to proving that hypothesis. If you are focused on sort of the
link |
02:28:09.520
local data that a person would typically have if they were challenged. So if you're just looking
link |
02:28:13.600
at one kid in his lack of homework, maybe you can't tell whether his dog ate his homework or not.
link |
02:28:19.680
If you happen to know he doesn't have a dog, you might have more confidence, right? You will need
link |
02:28:25.040
to have a wider range of evidence than a typical person would when they're encountering that
link |
02:28:29.360
actual excuse in order to see past the excuse. That will just be a general feature of it. So
link |
02:28:36.080
if I say, there's this usual story about why we go to the doctor, and then
link |
02:28:40.080
there's this other explanation, it'll be true that you'll have to look at wider data in order
link |
02:28:45.200
to see that because people don't usually offer excuses unless in the local context of their
link |
02:28:51.600
excuse, they can get away with it. That is, it's hard to tell, right? So in the case of medicine,
link |
02:28:58.160
I have to point you to sort of larger sets of data. But in many areas of academia, including
link |
02:29:05.200
health economics, the researchers there also want to support the usual points of view.
link |
02:29:10.720
And so they will have selection effects in their publications and their analysis whereby they,
link |
02:29:17.440
if they're getting a result too much contrary to the usual point of view everybody wants to have,
link |
02:29:21.280
they will file-drawer that paper or redo the analysis until they get an answer that's more
link |
02:29:27.120
to people's liking. So that means in the health economics literature, there are plenty of people
link |
02:29:32.640
who will claim that in fact, we have evidence that medicine is effective. And when I respond,
link |
02:29:39.200
I will have to point you to our most reliable evidence and ask you to consider the possibility
link |
02:29:45.760
that the literature is biased in that when the evidence isn't as reliable, when they have more
link |
02:29:50.640
degrees of freedom in order to get the answer they want, they do tend to get the answer they want.
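(The file-drawer effect Hanson describes is easy to illustrate: simulate many noisy studies of a true-zero effect, shelve the results contrary to the favored view, and the surviving "published" literature shows a spurious positive effect. All numbers below are invented for illustration.)

    import random
    random.seed(1)

    TRUE_EFFECT = 0.0     # the real effect being studied
    N_STUDIES = 1000      # many independent noisy studies

    all_estimates = [TRUE_EFFECT + random.gauss(0, 1) for _ in range(N_STUDIES)]
    # File-drawer step: results contrary to the favored view never get published.
    published = [e for e in all_estimates if e > 0]

    mean = lambda xs: sum(xs) / len(xs)
    print(f"mean of all studies:       {mean(all_estimates):+.2f}")  # ~ 0.0
    print(f"mean of published studies: {mean(published):+.2f}")      # ~ +0.8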
link |
02:29:54.960
But when we get to the kind of evidence that's much harder to mess with, that's where we will see
link |
02:30:01.520
the truth be more revealed. So with respect to medicine, we have millions of papers published
link |
02:30:07.760
in medicine over the years, most of which give the impression that medicine is useful.
link |
02:30:13.440
There's a small literature on randomized experiments of the aggregate effects of medicine
link |
02:30:19.040
where there's maybe a half dozen or so papers where it would be the hardest to hide it
link |
02:30:26.320
because it's such a straightforward experiment done in a straightforward way that it's hard
link |
02:30:34.560
to manipulate. And that's where I will point you to show you that there's relatively little
link |
02:30:39.440
correlation between health and medicine. But even then, people could try to save the phenomenon and
link |
02:30:44.400
say, well, it's not hidden motives, it's just ignorance. They could say, for example,
link |
02:30:49.440
medicine's complicated, most people don't know the literature. Therefore, they can be excused
link |
02:30:55.600
for ignorance. They are just ignorantly assuming that medicine is effective. It's not that they
link |
02:31:00.240
have some other motive that they're trying to achieve. And then I will have to do, as with a
link |
02:31:06.560
conspiracy theory analysis, and say, well, how long has this misperception been going on?
link |
02:31:12.480
How consistently has it happened around the world and across time? And I would have to say, look,
link |
02:31:19.520
if we're talking about, say, a recent new product like Segway scooters or something,
link |
02:31:25.280
I could say not so many people have seen them or used them. Maybe they could be confused about
link |
02:31:29.120
their value. If we're talking about a product that's been around for thousands of years,
link |
02:31:32.720
used in roughly the same way all across the world, and we see the same pattern over and over again,
link |
02:31:38.160
this sort of ignorance mistake just doesn't work so well.
link |
02:31:41.760
It also is a question of how much of the self deception is prevalent versus foundational,
link |
02:31:50.400
because there's a kind of implied thing where it's foundational to human nature versus just
link |
02:31:56.880
a common pitfall. This is a question I have. So maybe human progress is made by people
link |
02:32:05.600
who don't fall into the self deception. It's a baser aspect of human nature,
link |
02:32:11.600
but then you escape it easily if you're motivated.
link |
02:32:15.760
The motivational hypotheses about the self deceptions are in terms of how it makes you
link |
02:32:20.880
look to the people around you. Again, the press secretary. So the story would be,
link |
02:32:25.120
most people want to look good to the people around them. Therefore, most people present
link |
02:32:30.720
themselves in ways that help them look good to the people around them. That's sufficient
link |
02:32:37.200
to say there would be a lot of it. It doesn't need to be 100%. There's enough variety in people
link |
02:32:42.960
and in circumstances that sometimes taking a contrarian strategy can be in the interest of
link |
02:32:47.200
some minority of the people. So I might, for example, say that that's a strategy I've taken.
link |
02:32:52.480
I've decided that being contrarian on these things could be winning for me in that there's room
link |
02:33:00.880
for a small number of people like me who have these sort of messages who can then get more
link |
02:33:06.480
attention even if there's not room for most people to do that. And that can be explaining
link |
02:33:12.720
sort of the variety. Similarly, you might say, look, just look at the most obvious things. Most
link |
02:33:17.600
people would like to look good in the sense of physically, just you look good right now,
link |
02:33:21.200
you're wearing a nice suit, you have a haircut, you shaved, right? So and we...
link |
02:33:24.960
Got my own hair, by the way.
link |
02:33:26.080
Okay. Well then, all the more impressive.
link |
02:33:28.000
That's a counterargument for your claim, though, that most people want to look good.
link |
02:33:32.960
Clearly, if we look at most people and their physical appearance, clearly most people are
link |
02:33:36.400
trying to look somewhat nice, right? They shower, they shave, they comb their hair,
link |
02:33:40.880
but we certainly see some people around who are not trying to look so nice, right? Is that a
link |
02:33:45.280
big challenge to the hypothesis that people want to look nice? Not that much, right? We can see
link |
02:33:50.240
in those particular people's context more particular reasons why they've chosen to
link |
02:33:55.840
be an exception to the more general rule.
link |
02:33:58.480
So the general rule does reveal something foundational generally.
link |
02:34:03.760
Right.
link |
02:34:04.240
That's the way things work. Let me ask you, you wrote a blog post about the accuracy of
link |
02:34:08.640
authorities since we were talking about this, especially in medicine. Just looking around us,
link |
02:34:15.600
especially during this time of the pandemic, there's been a growing distrust of authorities,
link |
02:34:21.840
of institutions, even the institution of science itself. What are the pros and cons of authorities,
link |
02:34:30.800
would you say? So what's nice about authorities? What's nice about institutions and what are
link |
02:34:38.240
their pitfalls?
link |
02:34:39.040
One standard function of authority is as something you can defer to respectably without
link |
02:34:46.640
needing to seem too submissive or ignorant or gullible. That is, when you're asking what should
link |
02:34:59.440
I act on or what belief should I act on, you might be worried if I chose something too contrarian,
link |
02:35:06.320
too weird, too speculative, that would make me look bad. So I would just choose something
link |
02:35:12.400
very conservative. So maybe an authority lets you choose something a little less conservative
link |
02:35:18.080
because the authority is your authorization. The authority will let you do it and somebody
link |
02:35:24.160
says, why did you do that thing? And they say, the authority authorized it. The authority tells me,
link |
02:35:28.720
I should do this. Why aren't you doing it? Right.
link |
02:35:31.440
So the authority is often pushing for the conservative.
link |
02:35:34.400
Well, the authority can do more. So for example, we just think about, I don't know, in a pandemic
link |
02:35:40.240
even, you could just think, I'll just stay home and close all the doors or I'll just ignore it.
link |
02:35:44.640
You could just think of some very simple strategy that might be defensible if there were no
link |
02:35:49.040
authorities. But authorities might be able to know more than that. They might be able to look
link |
02:35:54.960
at some evidence, draw a more context dependent conclusion, declare it as the authority's opinion,
link |
02:36:00.400
and then other people might follow that. And that could be better than doing nothing.
link |
02:36:02.960
So you mentioned WHO, the world's most beloved organization. So this is me speaking
link |
02:36:12.960
in general, WHO and CDC have been kind of, depending on degrees and details, just not
link |
02:36:25.920
behaving as I would have imagined in the best possible evolution of human civilization,
link |
02:36:31.520
authorities should act. They seem to have failed in some fundamental way in terms of
link |
02:36:37.200
leadership in a difficult time for our society. Can you say what are the pros and cons of this
link |
02:36:43.360
particular authority? So again, if there were no authorities whatsoever, no accepted authorities,
link |
02:36:51.040
then people would sort of have to randomly pick different local authorities
link |
02:36:56.240
who would conflict with each other and then they'd be fighting each other about that or just
link |
02:36:59.840
not believe anybody and just do some initial default action that you would always do without
link |
02:37:04.560
responding to context. So the potential gain of an authority is that they could know more than just
link |
02:37:10.560
basic ignorance. And if people followed them, they could both be more informed than ignorance
link |
02:37:16.240
and all doing the same thing. So they're each protected from being accused or complained about.
link |
02:37:21.680
That's the idea of an authority. That would be the good.
link |
02:37:24.720
What's the con? Okay.
link |
02:37:26.960
So the con is that if you think of yourself as the authority and ask what's my best
link |
02:37:33.760
strategy as an authority, it's unfortunately not to be maximally informative. So you might
link |
02:37:40.880
think the ideal authority would not just tell you more than ignorance, it would tell you as much as
link |
02:37:45.920
possible. Okay, it would give you as much detail as you could possibly listen to and manage to
link |
02:37:52.160
assimilate. And it would update that as frequently as possible or as frequently as you were able to
link |
02:37:57.840
listen and assimilate. And that would be the maximally informative authority. The problem is
link |
02:38:03.520
there's a conflict between being an authority or being seen as an authority and being maximally
link |
02:38:10.160
informative. That was the point of my blog post that you're pointing to here. That is, if you
link |
02:38:16.320
look at it from their point of view, they won't long remain the perceived authority if they are too
link |
02:38:27.040
incautious about how they use that authority. And one of the ways to be
link |
02:38:31.120
incautious would be to be too informative. Okay, that's still in the pro column for me.
link |
02:38:36.960
Because you're talking about the tensions that are very data driven and very honest. And I would
link |
02:38:43.200
hope that authorities struggle with that, how much information to provide to people
link |
02:38:51.760
to maximize outcomes. Now I'm generally somebody that believes more information is better because
link |
02:38:56.480
I trust in the intelligence of people. But I'd like to mention a bigger con of authorities,
link |
02:39:02.800
which is the human question. This comes back to global government and so on, is that, you know,
link |
02:39:08.800
there's humans that sit in chairs during meetings in those authorities, they have different titles,
link |
02:39:15.760
as humans form hierarchies. And sometimes those titles get to your head a little bit.
link |
02:39:20.800
And you start to want to think how do I preserve my control over this authority, as opposed to
link |
02:39:26.960
thinking through like, what is the mission of the authority? What is the mission of WHO and
link |
02:39:32.720
other such organizations? And how do I maximize the implementation of that mission? You start to think,
link |
02:39:39.040
well, I kind of like sitting in this big chair at the head of the table. I'd like to sit there
link |
02:39:43.760
for another few years. Or better yet, I want to be remembered as the person who in a time of crisis
link |
02:39:51.120
was at the head of this authority and did a lot of good things. So you stop trying to do good
link |
02:39:57.840
under what good means given the mission of the authority. And you start to try to carve a narrative
link |
02:40:04.320
to manipulate the narrative first in the meeting room, everybody around you, just a small little
link |
02:40:10.080
story you tell yourself, then the interns, the managers throughout the whole hierarchy of the
link |
02:40:16.080
company. Okay, once everybody in the company, or in the organization, believes this narrative,
link |
02:40:21.360
now you start to control the release of information, not because you're trying to maximize
link |
02:40:28.080
outcomes, but because you're trying to maximize the effectiveness of the narrative that you are truly
link |
02:40:33.840
a great representative of this authority in human history. And I just feel like those human forces,
link |
02:40:41.600
whenever you have an authority, it starts getting to people's heads. One of the most,
link |
02:40:47.680
and this is me as a scientist, one of the most disappointing things to see during the
link |
02:40:52.320
pandemic is the use of authority from colleagues of mine to roll their eyes to dismiss other human
link |
02:41:04.160
beings just because they got a PhD, just because they're assistant, associate, or full faculty,
link |
02:41:11.760
just because they are deputy head of X organization, NIH, whatever the heck the organization is,
link |
02:41:20.640
just because they got an award of some kind. And at a conference, they won a best paper award
link |
02:41:26.720
seven years ago, and then somebody shook their hand and gave them a medal, maybe it was a president.
link |
02:41:32.560
And it's been 20, 30 years that people have been patting them on the back saying how
link |
02:41:37.520
special they are, especially when they are controlling money and getting sucked up to
link |
02:41:43.040
by other scientists who really want the money, in a self-deception kind of way. They don't actually
link |
02:41:47.200
really care about your performance. And all of that gets to your head. And no longer are you the
link |
02:41:52.000
authority that's trying to do good and lessen the suffering in the world, you become an authority
link |
02:41:57.280
that just wants to preserve itself, sitting on a throne of power.
link |
02:42:05.440
So this is core to sort of what it is to be an economist. I'm a professor of economics.
link |
02:42:11.600
There you go with the authority again. No. So it's about saying we often have a situation
link |
02:42:18.000
where we see a world of behavior, and then we see ways in which particular behaviors are not
link |
02:42:24.400
sort of maximally socially useful. And we have a variety of reactions to that. So one kind of
link |
02:42:32.560
reaction is to sort of morally blame each individual for not doing the maximally socially
link |
02:42:37.760
useful thing under perhaps the idea that people could be identified and shamed for that and
link |
02:42:44.560
maybe induced into doing the better thing if only enough people were calling them out on it.
link |
02:42:51.040
But another way to think about it is to think that people sit in institutions
link |
02:42:55.040
with certain stable institutional structures and that institutions create particular incentives
link |
02:43:00.560
for individuals and that individuals are typically doing whatever is in their local interest in the
link |
02:43:07.360
context of that institution. And then perhaps to blame individuals less for winning their local
link |
02:43:14.560
institutional game and to blame the world more for having the wrong institutions. So economists are
link |
02:43:20.160
often like wondering what other institutions we could have instead of the ones we have and which
link |
02:43:24.560
of them might promote better behavior. And this is a common thing we do all across human behavior
link |
02:43:29.600
is to think of what are the institutions we're in and what are the alternative variations we
link |
02:43:33.760
could imagine and then to say which institutions would be most productive. I would agree with you
link |
02:43:39.840
that our information institutions, that is the institutions by which we collect information
link |
02:43:44.960
and aggregate it and share it with people, are especially broken in the sense of far from the
link |
02:43:51.200
ideal of what would be the most cost effective way to collect and share information. But then
link |
02:43:56.960
the challenge is to try to produce better institutions. And as an academic, I'm aware
link |
02:44:02.880
that academia is particularly broken in the sense that we give people incentives to do research
link |
02:44:09.600
that's not very interesting or important because basically they're being impressive and we actually
link |
02:44:14.880
care more about whether academics are impressive than whether they're interesting or useful.
link |
02:44:20.160
And I'd be happy to go into detail with lots of different known institutions and their known
link |
02:44:25.920
institutional failings, ways in which those institutions produce incentives that are mistaken.
link |
02:44:31.520
And that was the point of the post we started with talking about the authorities. If I need to
link |
02:44:35.680
be seen as an authority, that's at odds with my being informative and I might choose to be the
link |
02:44:42.240
authority instead of being informative because those are my institutional incentives.
link |
02:44:46.160
And if I may, I'd like to, given that beautiful picture of incentives and individuals that you
link |
02:44:54.320
just painted, let me just apologize for a couple of things. One, I often put too much blame on
link |
02:45:03.440
leaders of institutions versus the incentives that govern those institutions. And as a result of that,
link |
02:45:11.280
I've been, I believe, too critical of Anthony Fauci, too emotional about my criticism of Anthony
link |
02:45:20.400
Fauci. And I'd like to apologize for that because I think there's a deep, there's deeper truth to
link |
02:45:26.560
think about. There's deeper incentives to think about. That said, I do sort of, I'm a romantic
link |
02:45:32.560
creature by nature. I romanticize Winston Churchill and I, when I think about Nazi Germany, I think
link |
02:45:42.720
about Hitler more than I do about the individual people of Nazi Germany. You think about leaders,
link |
02:45:47.680
you think about individuals, not necessarily the parameters, the incentives that govern
link |
02:45:52.560
the system, because it's harder. It's harder to think through deeply about the models
link |
02:45:58.720
from which those individuals arise, but that's the right thing to do. But also, I don't apologize
link |
02:46:06.240
for being emotional sometimes. I'm happy to blame the individual leaders in the sense that I might
link |
02:46:12.480
say, well, you should be trying to reform these institutions. If you're just there to get promoted
link |
02:46:17.280
and look good at being at the top, then maybe I can blame you for your motives and your priorities
link |
02:46:22.160
in there. But I can understand why the people at the top would be the people who are selected for
link |
02:46:26.240
having the priority of primarily trying to get to the top. I get that.
link |
02:46:30.160
Can I maybe ask you about particularly universities? They've received, like science has received an
link |
02:46:37.440
increase in distrust overall as an institution, which breaks my heart because I think science is
link |
02:46:43.840
beautiful as a, not maybe not as an institution, but as one of the things, one of the journeys that
link |
02:46:51.840
humans have taken on. The other one is the university. I think the university is actually, for me at
link |
02:46:58.960
least in the way I see it, a place of freedom for exploring ideas, scientific ideas, engineering
link |
02:47:07.440
ideas, more than a corporation, more than a company, more than a lot of domains in life.
link |
02:47:17.040
Not just in its ideal, but in its implementation, it's a place where you can be a kid
link |
02:47:23.440
for your whole life and play with ideas. And I think with all the criticism that universities
link |
02:47:29.040
currently receive, I don't think that criticism is representative
link |
02:47:35.440
of universities. They focus on very anecdotal evidence of particular departments, particular
link |
02:47:39.840
people, but I still feel like there's a lot of place for freedom of thought, at least, you know,
link |
02:47:48.400
MIT, at least in the fields I care about, you know, in particular kind of science,
link |
02:47:55.920
particular kind of technical fields, you know, mathematics, computer science, physics,
link |
02:48:01.040
engineering, so robotics, artificial intelligence. This is a place where you get to be a kid. Yet
link |
02:48:08.320
there is bureaucracy that's rising up. There's like more rules, there's more meetings,
link |
02:48:15.920
and there's more administration, with like PowerPoint presentations, which to me, you should
link |
02:48:22.960
be more of a renegade explorer of ideas, and meetings destroy, they suffocate that radical
link |
02:48:34.080
thought that happens when you're an undergraduate student and you can do all kinds of wild things
link |
02:48:37.680
when you're a graduate student. Anyway, all that to say, you've thought about this aspect too. Is
link |
02:48:42.240
there something positive, insightful you could say about how we can make for better universities
link |
02:48:50.240
in the decades to come, this particular institution? How can we improve them?
link |
02:48:55.360
I hear that centuries ago, many scientists and intellectuals were aristocrats. They had time
link |
02:49:03.360
and could, if they wished, choose to be intellectuals. That's a feature of the
link |
02:49:11.360
combination that they had some source of resources that allowed them leisure and that the kind of
link |
02:49:17.680
competition they faced among aristocrats allowed that sort of self-indulgence or
link |
02:49:24.160
self pursuit at least at some point in their lives. The analogous observation is that
link |
02:49:32.240
university professors often have sort of the freedom and space to do a wide range of things,
link |
02:49:39.120
and I am certainly enjoying that as a tenured professor.
link |
02:49:41.600
You're a really, sorry to interrupt, a really good representative of that. Just the exploration
link |
02:49:48.080
you're doing, the depth of thought that most people are afraid to do, the kind of broad
link |
02:49:54.560
thinking that you're doing, which is great. The fact that that can happen is a combination of
link |
02:49:58.560
these two things analogously. One is that we have fierce competition to become a tenured
link |
02:50:03.280
professor, but then once you become tenured, we give you the freedom to do what you like,
link |
02:50:07.520
and that's a happenstance. It didn't have to be that way, and in many other walks of life,
link |
02:50:14.320
even though people have a lot of resources, etc., they don't have that kind of freedom set up.
link |
02:50:18.560
So I think we're kind of, I'm kind of lucky that tenure exists and that I'm enjoying it,
link |
02:50:25.280
but I can't be too enthusiastic about this unless I can approve of sort of the source
link |
02:50:30.160
of the resources that's paying for all this. So for the aristocrat, if you thought they
link |
02:50:33.840
stole it in war or something, you wouldn't be so pleased. Whereas if you thought they had
link |
02:50:39.040
earned it or their ancestors had earned this money that they were spending as an aristocrat,
link |
02:50:43.360
then you could be more okay with that. So for universities, I have to ask, where are the main
link |
02:50:49.440
sources of resources that are going to the universities and are they getting their money's
link |
02:50:59.440
worth? Are they getting good value for that payment? So first of all, there are the students,
link |
02:50:59.440
and the question is, are students getting good value for their education? And
link |
02:51:06.560
each person is getting value in the sense that they are identified and shown to be a more capable
link |
02:51:12.080
person, which is then worth more salary as an employee later. But there is a case for saying
link |
02:51:17.440
there's a big waste to the system because we aren't actually changing the students we are
link |
02:51:22.560
educating. We're more sorting them or labeling them. And that's a very expensive
link |
02:51:28.400
process to produce that outcome. And part of the expense is the freedom I get from tenure.
link |
02:51:33.600
So I feel like I can't be too proud of that because it's basically a tax on all these
link |
02:51:38.720
young students, who pay this enormous amount of money in order to be labeled as better,
link |
02:51:42.720
whereas I feel like we should be able to find cheaper ways of doing that.
link |
02:51:46.960
The other main customer is research patrons like the government or other foundations. And
link |
02:51:53.360
then the question is, are they getting their money's worth out of the money they're paying
link |
02:51:57.360
for research to happen? And my analysis is they don't actually care about the research progress.
link |
02:52:04.480
They are mainly buying an affiliation with credentialed impressiveness on the part of the
link |
02:52:08.400
researchers. They mainly pay money to researchers who are impressive and have
link |
02:52:13.840
impressive affiliations, and they don't really much care what research project happens as a result.
link |
02:52:17.920
Is that cynical? So there's a deep truth to that cynical perspective. Is there a less cynical
link |
02:52:27.520
perspective that they do care about the long term investment into the progress of science and
link |
02:52:33.600
humanity? Well, they might personally care, but they're stuck in an equilibrium wherein,
link |
02:52:39.040
basically, at most foundations, like governments or research funders like the Ford Foundation,
link |
02:52:44.480
the individuals there are rated based on the prestige they bring to that organization.
link |
02:52:52.160
And even if they might personally want to produce more intellectual progress,
link |
02:52:55.360
they are in a competitive game where they don't have tenure and they need to produce this prestige.
link |
02:53:01.120
And so once they give grant money to prestigious people, that is the thing that shows that they
link |
02:53:05.520
have achieved prestige for the organization. And that's what they need to do in order to retain
link |
02:53:09.440
their position. And you do hope that there's a correlation between prestige and actual
link |
02:53:15.680
competence? Of course there is a correlation. The question is just, could we do this better
link |
02:53:20.400
some other way? Yes. I think it's almost, I think it's pretty clear we could. What is harder to do
link |
02:53:25.600
is move the world to a new equilibrium where we do that instead. What are the components
link |
02:53:31.440
of the better ways to do it? Is it money? So how the sources of money and how the money is allocated
link |
02:53:40.480
to give the individual researchers freedom? Years ago, I started studying this topic exactly
link |
02:53:47.360
because this was my issue. And this was many decades ago now. And I spent a long time. And
link |
02:53:51.920
my best guess still is prediction markets, betting markets. So if you as a research
link |
02:53:58.240
patron want to know the answer to a particular question, like what's the mass of
link |
02:54:02.800
the electron neutrino, then what you can do is just subsidize a betting market in that question.
link |
02:54:09.040
And that will induce more research into answering that question because the people who then
link |
02:54:13.440
answer that question can then make money in that betting market with the new information they gain.
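One way to make that mechanism concrete, as a sketch rather than a specification: a market maker run under a logarithmic market scoring rule (a subsidized market design Hanson has proposed elsewhere), here on a hypothetical binary framing of the neutrino question, with all parameters invented for illustration.

```python
import math

class LMSRMarket:
    """Logarithmic market scoring rule market maker for a yes/no question.

    The subsidy parameter b sets market depth; the patron's worst-case
    loss is bounded by b * ln(2), which is the research subsidy."""

    def __init__(self, b: float):
        self.b = b
        self.q = [0.0, 0.0]  # outstanding shares for [NO, YES]

    def _cost(self, q) -> float:
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price_yes(self) -> float:
        # Current market probability that the answer is YES.
        e = [math.exp(x / self.b) for x in self.q]
        return e[1] / sum(e)

    def buy(self, outcome: int, shares: float) -> float:
        # Trader buys shares that each pay $1 if `outcome` happens;
        # returns what the trader pays now.
        before = self._cost(self.q)
        self.q[outcome] += shares
        return self._cost(self.q) - before

# Hypothetical question: "Will the neutrino mass be measured below X eV by 2040?"
m = LMSRMarket(b=100.0)
print(f"initial price: {m.price_yes():.2f}")   # 0.50, maximal ignorance
paid = m.buy(1, 120.0)                         # an informed trader buys YES
print(f"trader paid {paid:.1f} for 120 YES shares; new price: {m.price_yes():.2f}")
```

A researcher who learns the answer before the market does profits by trading against the stale price, which is exactly the incentive to do the work.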
link |
02:54:17.680
So that's a robust way to induce more information on a topic. If you want to induce an accomplishment,
link |
02:54:23.600
you can create prizes. And there's, of course, a long history of prizes to induce accomplishments.
link |
02:54:29.520
And we moved away from prizes, even though we once used them far more often than we do today.
link |
02:54:36.800
And there's a history to that. And for the customers who want to be affiliated with impressive
link |
02:54:44.080
academics, which is what most of the customers want, students, journalists and patrons, I think
link |
02:54:48.640
there's a better way of doing that, which I just wrote about in my second most recent
link |
02:54:53.200
blog post. Can you explain? Sure. What we do today is we take sort of acceptance by other
link |
02:54:59.200
academics recently as our best indication of their deserved prestige, that is recent publications,
link |
02:55:06.080
recent job affiliation, institutional affiliations, recent invitations to speak, recent grants.
link |
02:55:14.720
We are today taking other impressive academics' recent choices to affiliate with them as our best
link |
02:55:22.480
estimate of their prestige. I would say we could do better by creating betting markets in what the
link |
02:55:28.960
distant future will judge to have been their deserved prestige looking back on them. I think
link |
02:55:34.960
most intellectuals, for example, think that if we looked back two centuries, say, at intellectuals
link |
02:55:40.800
from two centuries ago, and tried to look in detail at their research and how it influenced
link |
02:55:46.560
future research and which path it was on, we could much more accurately judge their actual
link |
02:55:53.920
deserved prestige, that is who was actually on the right track, who actually helped,
link |
02:55:58.400
which will be different than what people at the time judged using the immediate indications of
link |
02:56:03.200
the time, like which position they had or which publications they had or things like that.
link |
02:56:06.880
So in this way, if you think from the perspective of multiple centuries,
link |
02:56:12.160
you would prioritize true novelty more highly, you would disregard the temporal proximity,
link |
02:56:19.840
like how recent the thing is, and you would think like, what is the brave, the bold, the big novel
link |
02:56:27.280
idea here, and you would actually be able to rate that because you could see the paths
link |
02:56:32.400
which ideas took, which things hit dead ends, which led to what followed. You could,
link |
02:56:36.880
looking back centuries later, have a much better estimate of who actually had what long term
link |
02:56:42.560
effects on intellectual progress. So my proposal is, we actually pay people in several centuries
link |
02:56:47.840
to do this historical analysis. And we have prediction markets today,
link |
02:56:52.160
where we buy and sell assets, which will later pay off in terms of those final evaluations.
link |
02:56:57.520
So now we'll be inducing people today to make their best estimate of those things by actually
link |
02:57:01.760
looking at the details of people and setting the prices accordingly. And so my proposal would be,
link |
02:57:07.440
we rate people today on those prices today. So instead of looking at their list of publications
link |
02:57:12.240
or affiliations, you look at the actual price of assets that represent people's best guess
link |
02:57:17.760
of what the future will say about them.
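As a toy illustration of such an asset (every name and number here is hypothetical): a contract that pays out according to a distant-future panel's judged prestige score, whose current market quote doubles as the researcher's rating today.

```python
from dataclasses import dataclass, field

@dataclass
class PrestigeFuture:
    """Toy asset paying $1 x (deserved-prestige score in [0, 1]) as judged
    by a funded panel of historians in, say, the year 2225. Today's market
    quote doubles as the researcher's rating."""
    researcher: str
    bids: list = field(default_factory=list)  # prices buyers will pay
    asks: list = field(default_factory=list)  # prices sellers will accept

    def quote(self) -> float:
        # Rating = midpoint of best bid and best ask.
        return (max(self.bids) + min(self.asks)) / 2

contract = PrestigeFuture("Dr. A. Researcher")  # hypothetical name
contract.bids += [0.30, 0.42]  # traders betting history will vindicate her
contract.asks += [0.55, 0.48]  # traders betting her prestige will fade
print(f"{contract.researcher}: market rating {contract.quote():.2f}")
# A hiring or funding committee could read this 0.45 the way it now reads
# publication lists and affiliations.
```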
link |
02:57:19.280
That's brilliant. So this concept of idea futures, can you elaborate what this would entail?
link |
02:57:26.400
Well, I've been elaborating two versions of it here. So one is, if there's a particular
link |
02:57:32.560
question, say the mass of the electron neutrino, and what you as a patron want to do is get an
link |
02:57:37.760
answer to that question, then what you would do is subsidize the betting market in that question
link |
02:57:43.040
under the assumption that eventually we'll just know the answer and we can pay off the bets that
link |
02:57:46.880
way. And that is a plausible assumption for many kinds of concrete intellectual questions like
link |
02:57:52.320
what's the mass of the electron neutrino. In this hypothetical world, or maybe constructing that in a
link |
02:57:57.040
real world, do you mean literally financial? Yes, literal. Very literal. Very cash. Very direct
link |
02:58:06.320
and literal. Yes. So the idea would be research labs would be for profit. They would have as
link |
02:58:16.240
their expense paying researchers to study things. And then their profit would come from using the
link |
02:58:20.720
insights the researchers gain to trade in these financial markets. Just like hedge funds today
link |
02:58:26.800
make money by paying researchers to study firms and then making their profits by trading on those
link |
02:58:32.000
that insight in the ordinary financial market. And the market, if it's efficient, would be
link |
02:58:38.080
able to become better and better at predicting the powerful ideas that the individual is able
link |
02:58:43.440
to generate. The variance around the mass of the electron neutrino would decrease with time as we
link |
02:58:47.920
learn the value of that parameter better, and any other parameters that we want to estimate.
link |
02:58:52.480
You don't think those markets would also respond to recency of prestige and all those kinds of
link |
02:58:58.080
things? Well, they would respond. But the question is if they might respond incorrectly,
link |
02:59:02.560
but if you think they're doing it incorrectly, you have a profit opportunity where you can go
link |
02:59:07.280
fix it. So we'd be inviting everybody to ask whether they can find any biases or errors in the
link |
02:59:12.480
current ways in which people are estimating these things from whatever clues they have.
link |
02:59:15.680
Right. There's a big incentive for the correction mechanism. In academia currently,
link |
02:59:22.320
it's the safe choice to go with the prestige and there's no...
link |
02:59:26.720
Even if you privately think that the prestige is overrated.
link |
02:59:30.880
Even if you think strongly that it's overrated. Still, you don't have an incentive to defy that
link |
02:59:35.920
publicly. You're going to lose a lot unless you're a contrarian that writes brilliant blogs.
link |
02:59:42.560
And then you could talk about it or have... Right. This was my initial
link |
02:59:47.520
concept of having these betting markets on these key parameters. What I then realized over time was
link |
02:59:51.840
that that's more what people pretend to care about. What they really mostly care about is just who's
link |
02:59:56.480
how good. And that's what most of the system is built on is trying to rate people and rank them.
link |
03:00:01.760
And so I designed this other alternative based on historical evaluations centuries later just
link |
03:00:06.640
about who's how good because that's what I think most of the customers really care about. Customers.
link |
03:00:13.520
I like the word customers here. Humans. Right. Well, every major area of life,
link |
03:00:18.400
which has specialists who get paid to do that thing, must have some customers from elsewhere who
link |
03:00:23.040
are paying for it. Well, who are the customers for the mass of the neutrino? Yes. I understand
link |
03:00:29.280
in the sense of people who are willing to pay for the thing. That's an important thing to understand
link |
03:00:36.720
about anything, who are the customers and what's the product, like medicine,
link |
03:00:40.640
education, academia, military, etc. That's part of the hidden motives analysis. Often people
link |
03:00:46.320
have a thing they say about what the product is and who the customer is. And maybe you need to
link |
03:00:49.840
dig a little deeper to find out what's really going on or a lot deeper. You've written that you
link |
03:00:56.640
seek out, quote, view quakes. You're able, as an intelligent black box word generating machine,
link |
03:01:04.160
you're able to generate a lot of sexy words. I like it. I love it. View quakes,
link |
03:01:08.880
which are insights which dramatically changed my worldview, your worldview. You write,
link |
03:01:16.560
I loved science fiction as a child studied physics and artificial intelligence for a long time each
link |
03:01:22.000
and now study economics and political science, all fields full of such insights.
link |
03:01:28.800
So let me ask, what are some view quakes or a beautiful surprising idea to you from each of
link |
03:01:35.200
those fields? Physics, AI, economics, political science. I know it's a tough question. Something
link |
03:01:40.560
that springs to mind about physics, for example, that just is beautiful. I mean, right from the
link |
03:01:44.800
beginning, say, special relativity was a big surprise. You know, most of us have a simple
link |
03:01:50.720
concept of time and it seems perfectly adequate for everything we've ever seen.
link |
03:01:54.880
And to have it explained to you that you need to sort of have a mixture concept of time and space
link |
03:01:59.440
where you put it into the space time construct, how it looks different from different perspectives,
link |
03:02:05.040
that was quite a shock. And that was such a shock that it makes you think, what else do I know that
link |
03:02:11.760
isn't the way it seems? Certainly, quantum mechanics is certainly another enormous shock in
link |
03:02:16.480
terms of, from your viewpoint, you have this idea that there's a space and then there's
link |
03:02:22.320
particles at points and maybe fields in between. And quantum mechanics is just a whole different
link |
03:02:28.800
representation. It looks nothing like what you would have thought as sort of the basic
link |
03:02:32.880
representation of the physical world. And that was quite a surprise.
link |
03:02:36.720
What would you say is the catalyst for the view quake in theoretical physics in the 20th century?
link |
03:02:42.960
Where does that come from? So the interesting thing about Einstein, it seems like a lot of that
link |
03:02:46.880
came from, like, almost thought experiments. It almost wasn't experimentally driven.
link |
03:02:52.800
And with, actually, I don't know the full story of quantum mechanics, how much of it is experiment,
link |
03:02:59.840
like where, if you look at the full trace of idea generation there, of all the weird stuff
link |
03:03:06.560
that falls out of quantum mechanics, how much of that was the experimentalist, how much was
link |
03:03:11.120
the theoreticians? But usually, in theoretical physics, the theories lead the way. So maybe,
link |
03:03:16.160
can you elucidate? What is the catalyst for these?
link |
03:03:21.600
The remarkable thing about physics and about many other areas of academic intellectual
link |
03:03:27.440
life is that it just seems way over determined. That is, if it hadn't been for Einstein or if
link |
03:03:34.240
it hadn't been for Heisenberg, certainly within a half a century, somebody else would have come up
link |
03:03:39.440
with essentially the same things. Is there something you believe?
link |
03:03:43.120
Yes. Or is there something?
link |
03:03:43.920
Yes. So I think when you look at sort of just the history of physics and the history of
link |
03:03:48.000
other areas, some areas like that, there's just this enormous convergence that the different
link |
03:03:53.280
kind of evidence that was being collected was so redundant in the sense that so many different
link |
03:03:58.960
things revealed the same things that eventually you just kind of have to accept it because it just
link |
03:04:04.960
gets obvious. So if you look at the details, of course, Einstein did it before somebody else,
link |
03:04:11.600
and it's well worth celebrating Einstein for that. And we, by celebrating the particular
link |
03:04:17.120
people who did something first or came across something first, we are encouraging all the rest
link |
03:04:21.680
to move a little faster, to push us all a little faster, which is great. But I still think we would
link |
03:04:31.120
have gotten roughly to the same place within a half century. So sometimes people are special
link |
03:04:36.960
because of how much longer it would have taken. So some people say general relativity would have
link |
03:04:41.120
taken longer without Einstein than other things. I mean, Heisenberg quantum mechanics, I mean,
link |
03:04:45.920
there were several different formulations of quantum mechanics all around the same few years,
link |
03:04:50.240
which means no one of them made that much of a difference. We would have had pretty much the same thing,
link |
03:04:54.480
regardless of which of them did it exactly when. Nevertheless, I'm happy to celebrate them all.
link |
03:04:59.920
But this is a choice I make in my research, that is when there's an area where there's lots of
link |
03:05:03.760
people working together who are sort of scooping each other and getting a result just before
link |
03:05:09.840
somebody else does, you ask, well, how much of a difference would I make there? At most,
link |
03:05:14.320
I could make something happen a few months before somebody else. And so I'm less worried about
link |
03:05:19.520
them missing things. So when I'm trying to help the world by doing research, I'm looking for
link |
03:05:23.360
neglected things. I'm looking for things that nobody's doing. If I didn't do it, nobody
link |
03:05:27.040
would do it. Nobody would do it. Or at least for a long time. In the next 10, 20 years kind of
link |
03:05:30.880
things. Exactly. Same with general relativity, just, you know, who would do it. It might take
link |
03:05:35.440
another 10, 20, 30, 50 years. So that's the place where you can have the biggest impact,
link |
03:05:39.680
is finding the things that nobody would do unless you did them. And then that's when you get the
link |
03:05:44.720
big view quake, the insight. So what about artificial intelligence? Would it be the EMs,
link |
03:05:52.560
the emulated minds? What idea, whether that struck you in the shower one day, or are you just
link |
03:06:03.360
clearly the biggest view quake in artificial intelligence is the realization of just how
link |
03:06:08.800
complicated our human minds are. So most people who come to artificial intelligence from other
link |
03:06:14.640
fields or from relative ignorance, a very common phenomenon, which you must be familiar with,
link |
03:06:20.080
is that they come up with some concept, and then they think that must be it. Once we implement
link |
03:06:25.360
this new concept, we will have it. We will have full human level or higher artificial
link |
03:06:29.600
intelligence, right? And they're just not appreciating just how big the problem is,
link |
03:06:34.080
how long the road is, just how much is involved. Because that's actually hard to appreciate.
link |
03:06:38.640
When we just think, it seems really simple. And studying artificial intelligence, going
link |
03:06:43.920
through many particular problems, looking at each problem, all the different things you need
link |
03:06:47.440
to be able to do to solve a problem like that, makes you realize all the things your mind is
link |
03:06:51.760
doing that you are not aware of. That's that vast subconscious that you're not aware of.
link |
03:06:57.440
That's the biggest view quake from artificial intelligence. By far, for most people who study
link |
03:07:01.360
artificial intelligence, is to see just how hard it is. I think that's a good point. But I think
link |
03:07:07.680
it's a very early view quake. It's when the Dunning Kruger crashes hard. It's the first
link |
03:07:18.000
realization that humans are actually quite incredible. The human mind, the human body is
link |
03:07:22.640
quite incredible. There's a lot of different parts to it. But then, see, it's already been so long
link |
03:07:28.960
for me that I've experienced that view quake, that for me, I now experience the view quakes of,
link |
03:07:34.480
holy shit, this little thing is actually quite powerful. Like neural networks, I'm amazed.
link |
03:07:39.760
Because you've become almost cynical after that first view quake of like, this is so hard.
link |
03:07:47.840
Like evolution did some incredible work to create the human mind. But then you realize,
link |
03:07:53.840
just like you have, you've talked about a bunch of simple models, that simple things can actually
link |
03:07:59.200
be extremely powerful, that maybe emulating of the human mind is extremely difficult,
link |
03:08:05.840
but you can go a long way with a large neural network. You can go a long way with a dumb
link |
03:08:10.400
solution. It's that Stuart Russell thing with the reinforcement learning. Holy crap. You can do,
link |
03:08:15.680
you can go quite a long way with a simple thing. But we still have a very long road to go.
link |
03:08:18.960
Well, I can't, I refuse to sort of know. The road is full of surprises. So 'long' is an interesting,
link |
03:08:29.680
like you said, with the six hard steps that humans have to take to arrive at where we are
link |
03:08:35.120
from the origin of life on earth. So it's long maybe in the statistical improbability of the
link |
03:08:42.240
steps that have to be taken. But in terms of how quickly those steps could be taken,
link |
03:08:47.200
I don't know if my intuition says it's, if it's hundreds of years away or if it's
link |
03:08:55.680
a couple of years away, I prefer to measure... Pretty confident at least a decade. And
link |
03:09:01.280
well, we can be fairly confident at least three decades. I can steel man either direction.
link |
03:09:05.360
I prefer to measure that journey in Elon Musks. That's the new unit. We don't get Elon Musks
link |
03:09:10.480
very often. So that's, that's a long time scale. For now, I don't know, maybe you can clone or
link |
03:09:15.520
maybe multiply, or even know what an Elon Musk is, what that is. What is that? That's a good
link |
03:09:21.280
question. Exactly. Well, that's an excellent question. How does that fit into the models of
link |
03:09:26.480
three parameters that are required for becoming a grabby alien civilization? That's the question
link |
03:09:34.160
of how much difference any individual makes in the long path of civilization over time. Yes. And it's a favorite
link |
03:09:40.240
topic of historians and people to try to focus on individuals and how much of a difference they
link |
03:09:44.960
make. And certainly some individuals make a substantial difference in the modest term,
link |
03:09:49.760
right? Like, you know, certainly without Hitler being Hitler in the role he took,
link |
03:09:55.280
European history would have taken a different path for a while there. But if we're looking over
link |
03:10:00.240
like many centuries longer term things, most individuals do fade in their individual influence.
link |
03:10:05.520
So, no matter how sexy your hair is, you too will be forgotten in the long arc of history.
link |
03:10:17.520
So you said at least 10 years. So let's talk a little bit about this AI point
link |
03:10:26.240
of where, how we achieve, how hard is the problem of solving intelligence
link |
03:10:30.800
by engineering artificial intelligence that achieves human level, human like
link |
03:10:39.120
qualities that we associate with intelligence? How hard is this? What are the different
link |
03:10:42.640
trajectories that take us there? One way to think about it is in terms of the scope of
link |
03:10:48.720
the technology space you're talking about. So let's take the biggest possible scope,
link |
03:10:53.440
all of human technology, right? The entire human economy. So the entire economy is composed of
link |
03:11:00.880
many industries, each of which have many products with many different technologies supporting each
link |
03:11:05.440
one. At that scale, I think we can accept that most innovations are a small fraction of the total
link |
03:11:14.480
that is, there's usually relatively gradual overall progress. And that individual innovations that
link |
03:11:22.000
have a substantial effect on the total are rare and their total effect is still a small
link |
03:11:26.560
percentage of the total economy, right? There's very few individual innovations that made a
link |
03:11:32.160
substantial difference to the whole economy, right? What are we talking? Steam engine,
link |
03:11:35.920
shipping containers, a few things. Shipping containers deserves to be up there with steam
link |
03:11:42.400
engines, honestly. Can you say exactly how shipping containers revolutionized shipping?
link |
03:11:49.680
Shipping is very important. But placing that at shipping containers. So you're saying you wouldn't
link |
03:11:56.160
have some of the magic of the supply chain, all that without shipping containers?
link |
03:12:00.720
Made a big difference, absolutely. Interesting. That's something worth looking at.
link |
03:12:04.960
We shouldn't take that tangent, although it's tempting to do. But anyway, so there's a few, just a few
link |
03:12:10.400
innovations. So at the scale of the whole economy, right? Now, as you move down to a much smaller
link |
03:12:15.920
scale, you will see individual innovations having a bigger effect, right? So if you look at, I don't
link |
03:12:22.560
know, lawn mowers or something, I don't know about the innovations in lawn mowers, but there are probably
link |
03:12:26.720
like steps where you just had a new kind of lawn mower and that made a big difference to mowing
link |
03:12:31.920
lawns because you're focusing on a smaller part of the whole technology space, right? So,
link |
03:12:39.680
and you know, sometimes like military technology, there's a lot of military technologies,
link |
03:12:43.760
a lot of small ones, but every once in a while, a particular military weapon like makes a big
link |
03:12:47.520
difference. But still, even so, mostly overall, they're making modest differences to something
link |
03:12:54.320
that's increasing relatively steadily, say, like the US military is the strongest in the world.
link |
03:12:58.480
Consistently for a while, no one weapon in the last 70 years has like made a big difference in
link |
03:13:04.320
terms of the overall prominence of the US military, right? Because that's just saying,
link |
03:13:08.240
even though every once in a while, even the recent Soviet hypersonic missiles or whatever they are,
link |
03:13:13.200
they aren't changing the overall balance dramatically, right?
link |
03:13:18.000
So when we get to AI, now I can frame the question, how big is AI? Basically,
link |
03:13:25.440
one way of thinking about AI is it's just all mental tasks. And then you ask what fraction
link |
03:13:29.680
of tasks are mental tasks? And then I go, a lot. And then if I think of AI as like half of everything,
link |
03:13:38.160
then I think, well, it's got to be composed of lots of parts where any one innovation has
link |
03:13:42.560
only a small impact, right? Now, if you think, no, no, no, AI is like AGI. And then you think
link |
03:13:51.200
AGI is a small thing, right? There's only a small number of key innovations that will enable it.
link |
03:13:56.640
Now you're thinking there could be a bigger chunk that you might find that would have a
link |
03:14:01.920
bigger impact. So the way I would ask you to frame these things in terms of the chunkiness
link |
03:14:07.120
of different areas of technology, in terms of how big they are, if you take 10 chunky areas
link |
03:14:13.360
and you add them together, the total is less chunky. Yeah. But are you able, until
link |
03:14:18.560
you solve the fundamental core parts of the problem, to estimate the chunkiness of that problem?
link |
03:14:25.360
Well, if you have a history of prior chunkiness, that could be your best estimate for future
link |
03:14:29.920
chunkiness. So for example, I mean, even at the level of the world economy, right,
link |
03:14:34.000
we've had this, what, 10,000 years of civilization, well, that's only a short time,
link |
03:14:39.040
you might say, oh, that doesn't predict future chunkiness. But it looks relatively steady and
link |
03:14:44.960
consistent. We can say even in computer science, we've had seventy years of computer science,
link |
03:14:50.720
we have enough data to look at chunkiness of computer science. Like, when were there algorithms
link |
03:14:55.920
or approaches that made a big chunky difference, and how large a fraction of the total was that?
link |
03:15:03.600
And I'd say mostly in computer science, most innovation has been relatively small chunks,
link |
03:15:07.760
the bigger chunks have been rare. Well, this is the interesting thing. This is about AI and just
link |
03:15:12.800
algorithms in general is, you know, PageRank. So Google's algorithm, right? So sometimes it's a simple
link |
03:15:22.560
algorithm that by itself is not that useful, but at scale, in a context that's
link |
03:15:30.800
scalable, yeah, depending on the context, all of a sudden the power is revealed. And there's
link |
03:15:36.720
something, I guess that's the nature of chunkiness, is that things that can reach a lot of
link |
03:15:43.760
people simply can be quite chunky. So one standard story about algorithms is to say
link |
03:15:49.600
algorithms have a fixed cost plus a marginal cost. And so in history, when you had computers
link |
03:15:56.720
that were very small, you tried all the algorithms that had low fixed costs. And you look for the
link |
03:16:02.560
best of those. But over time, as computers got bigger, you could afford to do larger fixed costs
link |
03:16:06.960
and try those. And some of those were more effective algorithms in terms of their marginal cost.
link |
03:16:12.400
And that, in fact, you know, that roughly explains the long term history where, in fact,
link |
03:16:17.440
the rate of algorithmic improvement is about the same as the rate of hardware improvement,
link |
03:16:20.800
which is a remarkable coincidence. But it would be explained by saying, well,
link |
03:16:27.040
there's all these better algorithms you can't try until you have a big enough computer to pay
link |
03:16:31.360
the fixed cost of doing some trials to find out if that algorithm actually saves you on the marginal
link |
03:16:36.400
cost. And so that's an explanation for this relatively continuous history where, so we
link |
03:16:42.000
have a good story about why hardware is so continuous, right? And you might think, why
link |
03:16:45.200
would software be so continuous with the hardware? But if there's a distribution of algorithms in
link |
03:16:49.920
terms of their fixed costs, and it's, say, spread out as a wide log normal distribution,
link |
03:16:55.120
then we could be sort of marching through that log normal distribution, trying out algorithms
link |
03:16:59.680
with larger fixed costs and finding the ones that have lower marginal costs.
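A minimal simulation of that story (the distributions and constants are assumptions for illustration, not fitted to data): fixed costs are drawn log-normally over many orders of magnitude, each decade of hardware growth expands the affordable fixed-cost budget, and the best known marginal cost improves relatively smoothly as newly affordable algorithms get tried.

```python
import math
import random

random.seed(1)

# Hypothetical population of algorithms: fixed cost spread out as a wide
# log normal; lower marginal cost tends to come with higher fixed cost.
algorithms = []
for _ in range(5000):
    fixed = random.lognormvariate(10, 6)                 # cost to try it at all
    marginal = random.lognormvariate(0, 0.5) / fixed ** 0.3
    algorithms.append((fixed, marginal))

budget = 1.0          # affordable fixed cost, grows with hardware
best = float("inf")   # best marginal cost found so far

for year in range(0, 60, 10):
    # Try every algorithm whose fixed cost we can now afford.
    affordable = [m for f, m in algorithms if f <= budget]
    if affordable:
        best = min(best, min(affordable))
    print(f"year {year:2d}: budget 2^{int(math.log2(budget)):3d}, "
          f"best marginal cost {best:.4f}")
    budget *= 2 ** 10  # hardware doubles yearly, so x1024 per 10-year step
```

Run it and the best marginal cost falls fairly steadily decade after decade, the continuous software-tracks-hardware history described above.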
link |
03:17:04.240
So would you say AGI, human level AI, even EM, emulated minds, is chunky? Like a few breakthroughs
link |
03:17:19.600
can take us there. So an em is by its nature chunky in the sense that if you have an emulated brain
link |
03:17:25.440
and you're 25% effective at emulating it, that's crap. That's nothing. Okay. Okay. You pretty much
link |
03:17:32.640
need to emulate a full human brain. Is that obvious? It's pretty obvious.
link |
03:17:38.640
I'm talking about like, you know, so the key thing is you're emulating various brain cells,
link |
03:17:43.440
and so you have to emulate the input output pattern of those cells. So if you get that pattern
link |
03:17:48.000
somewhat close, but not close enough, then the whole system just doesn't have the overall
link |
03:17:52.720
behavior you're looking for, right? But it could have functionally some of the power of the overall
link |
03:17:57.920
system. So there'd be some threshold. The point is when you get close enough, then it goes over
link |
03:18:01.600
the threshold. It's like taking a computer chip and deleting every one percent of the gates, right?
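A back-of-the-envelope version of that chip analogy (the scale and fidelity numbers are hypothetical): if a system needs essentially all of its N components emulated correctly, the chance it works is fidelity to the Nth power, which at large N behaves like a cliff, and redundancy moves the cliff rather than removing it.

```python
def p_works(fidelity: float, n_components: int, redundancy: int = 1) -> float:
    # Toy model: the system works only if every component works, and with
    # k-fold redundancy a component fails only if all k copies fail.
    p_component = 1 - (1 - fidelity) ** redundancy
    return p_component ** n_components

N = 1_000_000  # a chip's gates, or a brain region's cells (illustrative scale)
for fidelity in (0.99, 0.999999, 0.99999999):
    print(f"fidelity {fidelity}: plain {p_works(fidelity, N):.3g}, "
          f"3x redundant {p_works(fidelity, N, redundancy=3):.3g}")
# At this scale 99% per-component fidelity gives essentially 0, 99.9999%
# gives about 0.37, and 99.999999% gives about 0.99: a cliff, not a slope.
```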
link |
03:18:07.120
No, that's very chunky. But the hope is that emulating the human brain, I mean, the human
link |
03:18:13.200
brain itself is not. Right. So it has a certain level of redundancy and a certain level of robustness.
link |
03:18:17.680
And so there's some threshold when you get close to that level of redundancy and robustness,
link |
03:18:20.960
then it starts to work. But until you get to that level, it's just going to be crap, right?
link |
03:18:25.440
It's going to be just a big thing that isn't working for us. So we can be pretty sure that
link |
03:18:30.080
emulations is a big chunk in an economic sense, right? At some point, you'll be able to make one
link |
03:18:35.920
that's actually effective in substituting for humans. And then that will be this huge
link |
03:18:41.600
economic product that people will try to buy like crazy. Now,
link |
03:18:44.640
It'll bring a lot of value into people's lives, so they'll be willing to pay for it.
link |
03:18:48.880
But it could be that the first emulation costs a billion dollars each, right? And then we have
link |
03:18:54.480
them, but we can't really use them. They're too expensive. And then the cost slowly comes down.
link |
03:18:57.920
And now we have less of a chunky adoption, right? That as the cost comes down, then we use more
link |
03:19:04.880
and more of them in more and more contexts. And that's a more continuous curve. So it's only if
link |
03:19:11.520
the first emulations are relatively cheap that you get a more sudden disruption to society.
link |
03:19:17.760
And that could happen if the algorithm is the last thing you figure out how to do or something.
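The threshold point above ("deleting every one percent of the gates") can be made concrete with a toy calculation: if an emulation needs n cell models to all behave correctly, and each is faithful with probability p, overall success p^n falls off a cliff below some fidelity threshold. The numbers here are illustrative assumptions, not estimates of real brain parameters; redundancy would shift the cliff, not remove it.

```python
# Toy all-or-nothing model: a system of n components that must all work.
n = 100_000  # hypothetical number of emulated cells/gates
for p in (1.0, 0.99999, 0.9999, 0.999):
    # P(system works) = p ** n, assuming independent component failures
    print(f"per-component fidelity {p}: P(system works) = {p ** n:.3g}")
```

At p = 0.99999 the system still works about a third of the time; at p = 0.999 the success probability is on the order of 1e-44, i.e. "crap" in the sense above.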
link |
03:19:21.760
What about robots that capture some magic in terms of social connection? Robots,
link |
03:19:29.440
like we have a robot dog on the carpet right there. Robots that are able to capture some magic
link |
03:19:36.000
of human connection as they interact with humans, but are not emulating the brain. What about
link |
03:19:43.200
those? How far away? So we're thinking about chunkiness or distance now. So if you ask how
link |
03:19:49.840
chunky is the task of making an emulatable robot or something. Chunkiness and time are
link |
03:19:58.400
correlated. Right. But it's about how far away it is or how suddenly it would happen.
link |
03:20:03.280
And chunkiness is how suddenly and difficulty is just how far away it is. But it could be a
link |
03:20:09.440
continuous difficulty. It could just be far away, but we'll slowly, steadily get there.
link |
03:20:12.960
Or there could be these thresholds where we reach a threshold and suddenly we can do a lot better.
link |
03:20:16.800
Yeah. That's a good question for both. I tend to believe that all of it, not just the em,
link |
03:20:23.120
but AGI too, is chunky. And human-level intelligence embodied in robots is also chunky.
link |
03:20:31.680
The history of computer science and chunkiness so far seems to be my rough best guess for the
link |
03:20:36.400
chunkiness of AGI. It is chunky. It's modestly chunky, not that chunky. Our ability to use
link |
03:20:44.880
computers to do many things in the economy has been moving relatively steadily. Overall,
link |
03:20:49.120
in terms of our use of computers in society, they have been relatively steadily improving for 70
link |
03:20:55.040
years. No, but I would say that's hard. Okay. I would have to really think about that because
link |
03:21:00.640
neural networks are quite surprising. Sure. But every once in a while, we have a new thing that's
link |
03:21:05.680
surprising. But if you stand back, we see something like that every 10 years or so,
link |
03:21:10.240
some new innovation that has a big effect. So, modestly chunky.
link |
03:21:19.280
Yeah. The history of the level of disruption we've seen in the past would be a rough estimate of
link |
03:21:23.200
the level of disruption in the future, unless the future is, we're going to hit a chunky territory
link |
03:21:27.440
much chunkier than we've seen in the past. Well, I do think it's like Kuhnian revolution type.
link |
03:21:36.640
It seems like the data, especially on AI, is difficult to reason with because it's so recent,
link |
03:21:46.560
it's such a recent field. AI's been around for 50 years. I mean, 50, 60, 70, 80 years being recent.
link |
03:21:53.440
Okay. It's enough time to see a lot of trends.
link |
03:21:58.720
A few trends. I think with the internet, computing, there's really a lot of interesting stuff
link |
03:22:06.880
that's happened over the past 30 years that I think the possibility of revolutions is
link |
03:22:14.080
likelier than it was in the... I think for the last 70 years, there have always been a lot of
link |
03:22:19.200
things that look like they have a potential for revolution. So, we can't reason well about this.
link |
03:22:23.040
I mean, we can reason well by looking at the past trends. I would say the past trend is roughly
link |
03:22:27.760
your best guess for the future of this... No, but if I look back at the things that might have
link |
03:22:32.800
looked like revolutions in the 70s and 80s and 90s, they are less like the revolutions
link |
03:22:40.240
that appear to be happening now, or the capacity for revolutions that appears to be there now.
link |
03:22:45.440
First of all, there's a lot more money to be made. So, there's a lot more incentive for markets
link |
03:22:50.720
to do a lot of kinds of innovation, it seems like, in the AI space. But then again, there's a history
link |
03:22:56.480
of winters and summers and so on. So, maybe we're just like riding a nice wave right now.
link |
03:23:00.960
One of the biggest issues is the difference between impressive demos and commercial value.
link |
03:23:05.760
Yes. So, we often, through the history of AI, we saw very impressive demos
link |
03:23:10.160
that never really translated much into commercial value. As somebody who works on and cares about
link |
03:23:14.880
autonomous and semi-autonomous vehicles, tell me about it. And there again, we return to the
link |
03:23:20.960
number of Elon Musks per Earth per year generated. That's the E.M. Coincidentally,
link |
03:23:28.720
same initials as the em. Very suspicious. We're going to have to look into that.
link |
03:23:35.120
All right. Two more fields that I would like to force and twist your arm to look for view
link |
03:23:41.520
quakes and for beautiful ideas, economics. What is a beautiful idea to you about economics?
link |
03:23:51.440
You've mentioned a lot of them. Sure. So, as you said before, there's going to be the first
link |
03:23:56.960
view quake most people encounter that makes the biggest difference on average in the world,
link |
03:24:00.880
because that's the only thing most people ever see is the first one. And so,
link |
03:24:05.360
you know, with AI, the first one is just how big the problem is. But once you get past that,
link |
03:24:11.600
you'll find out there's certainly for economics, the first one is just the power of markets.
link |
03:24:18.880
You might have thought it was just really hard to figure out how to optimize in a big
link |
03:24:23.120
complicated space and markets just do a good first pass for an awful lot of stuff. And they
link |
03:24:29.280
are really quite robust and powerful. And that's just quite the view quake where you just say,
link |
03:24:35.840
you know, if you want to get in the ballpark, just let a market handle it and step
link |
03:24:41.760
back. And that's true for a wide range of things. It's not true for everything, but it's a very
link |
03:24:47.680
good first approximation. Most people's intuitions for how they should limit markets
link |
03:24:51.360
are actually messing them up. They're that good, in a sense, right? Most people, when you go,
link |
03:24:55.840
I don't know if we want to trust that. Well, you should be trusting that. What are markets?
link |
03:25:03.040
Just a couple of words. So the idea is if people want something, then let companies form
link |
03:25:11.040
to try to supply that thing. Let those firms pay the cost of whatever they're making and try
link |
03:25:15.840
to offer that product to those people. Let many people, many such firms enter that industry
link |
03:25:21.120
and let the customers decide which ones they want. And if the firm goes out of business,
link |
03:25:24.720
let it go bankrupt and let other people invest in whichever ventures they want to try to try
link |
03:25:29.040
to attract customers to their version of the product. And that just works for a wide range
link |
03:25:33.120
of products and services. And through all of this, there's a free exchange of information too.
link |
03:25:37.760
There's a hope that there's no manipulation of information and so on.
link |
03:25:43.360
Even when those things happen, still just the simple market solution is usually better than
link |
03:25:48.240
the things you'll try to do to fix it, than the alternative. That's a view quake;
link |
03:25:53.680
it's surprising. It's not what you would have initially thought.
link |
03:25:56.960
That's one of the great, I guess, inventions of human civilization: trust the markets.
link |
03:26:04.240
Now, another view quake that I learned in my research, that's not all of economics,
link |
03:26:07.840
but something more specialized is the rationality of disagreement. That is,
link |
03:26:12.960
basically people who are trying to believe what's true in a complicated situation would not actually
link |
03:26:18.240
disagree. And of course, humans disagree all the time. So it was quite the striking fact for
link |
03:26:23.840
me to learn in grad school that actually rational agents would not knowingly disagree.
link |
03:26:29.920
And so that makes disagreement more puzzling and it makes you less willing to disagree.
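The result Hanson is alluding to here is usually attributed to Aumann (1976), "Agreeing to Disagree." A compact informal statement, paraphrased:

```latex
% Aumann's agreement theorem, informally stated.
% Two agents share a common prior $P$; agent $i$'s posterior for an event $E$,
% given private information $\mathcal{I}_i$, is $q_i = P(E \mid \mathcal{I}_i)$.
\text{If } q_1 \text{ and } q_2 \text{ are common knowledge, then } q_1 = q_2.
% So Bayesians with common priors cannot knowingly "agree to disagree."
```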
link |
03:26:37.520
Humans are to some degree rational and are able to...
link |
03:26:41.360
Their priorities are different than just figuring out the truth, which might not be
link |
03:26:49.600
the same as being irrational. That's another tangent that could take an hour.
link |
03:26:56.480
In the space of human affairs, political science, what is a beautiful, foundational,
link |
03:27:04.480
interesting idea to you, a view quake in the space of political science?
link |
03:27:08.160
The main thing that goes wrong in politics is people not agreeing on what the best thing to do
link |
03:27:17.360
is. That's what goes wrong. That is, when you say what's
link |
03:27:22.800
fundamental behind most political failures, it's that people are ignorant of what the
link |
03:27:28.160
consequences of policies are. And that's surprising because it's actually feasible to solve that
link |
03:27:33.920
problem, which we aren't solving. So it's a bug, not a feature that there's an inability to arrive
link |
03:27:41.600
at a consensus. So most political systems, if everybody looked to some authority, say, on a
link |
03:27:47.200
question and that authority told them the answer, then most political systems are capable of just
link |
03:27:51.280
doing that thing. And so it's the failure to have trustworthy authorities that is sort of the
link |
03:28:00.800
underlying failure behind most political failure. We invade Iraq, say, when we don't have an
link |
03:28:08.240
authority to tell us that's a really stupid thing to do. And it is possible to create
link |
03:28:15.840
more informative, trustworthy authorities. That's a remarkable fact about the world of
link |
03:28:21.280
institutions that we could do that, but we aren't.
link |
03:28:24.640
Yeah. So that's surprising. We could and we aren't.
link |
03:28:28.000
Right. Another big view quake about politics is from The Elephant in the Brain: that most people,
link |
03:28:31.920
when they're interacting with politics, they say they want to make the world better,
link |
03:28:35.760
they make their city better, their country better, and that's not their priority.
link |
03:28:39.280
What is it? They want to show loyalty to their allies. They want to show their people they're
link |
03:28:44.000
on their side. Yes. The various tribes they're in. That's their primary priority,
link |
03:28:49.520
and they do accomplish that. Yeah. And the tribes are usually color coded conveniently enough.
link |
03:28:55.120
What would you say, you know, it's the Churchill question.
link |
03:29:01.600
Democracy is the crappiest form of government, but it's the best one we got.
link |
03:29:06.560
What's the best form of government for this, our 7 billion human civilization and the,
link |
03:29:13.760
maybe as we get farther and farther, you mentioned a lot of stuff that's fascinating
link |
03:29:18.080
about human history as we become more forager like and looking out beyond what's the best
link |
03:29:24.160
form of government in the next 50, 100 years as we become a multi planetary species.
link |
03:29:28.400
So the key failing is that we have existing political institutions and related institutions
link |
03:29:36.160
like media institutions and other authority institutions, and these institutions sit in
link |
03:29:41.760
a vast space of possible institutions. And the key failing is we're just not exploring that space.
link |
03:29:47.520
So I have made my proposals in that space, and I think I can identify many promising
link |
03:29:52.400
solutions. And many other people have made many other promising proposals in that space.
link |
03:29:56.960
But the key thing is we're just not pursuing those proposals. We're not trying them out on
link |
03:30:00.640
small scales. We're not doing tests. We're not exploring the space of these options.
link |
03:30:05.760
That is the key thing we're failing to do. And if we did that, I am confident we would find much
link |
03:30:11.600
better institutions than the ones we're using now, but we would have to actually try.
link |
03:30:15.600
So there's a lot of those topics. I do hope we get a chance to talk again. You're a fascinating
link |
03:30:23.040
human being. So I'm skipping a lot of tangents on purpose that I would love to take. You're such
link |
03:30:28.320
a brilliant person with so many different topics. Let me take a stroll into the deep human psyche
link |
03:30:39.760
of Robin Hanson himself. So first, it may not be that deep. I might just be all on the surface.
link |
03:30:49.760
What you see is what you get; there might not be much hiding behind it.
link |
03:30:52.400
Some of the fun is on the surface. I actually think this is true of many of the most successful,
link |
03:30:59.840
most interesting people you see in the world. That is, they have put so much effort into the
link |
03:31:05.120
surface that they've constructed. And that's where they put all their energy. So somebody
link |
03:31:10.240
might be a statesman or an actor or something else. And people want to interview them and they
link |
03:31:14.640
want to say, what are you behind the scenes? What do you do in your free time? Those people
link |
03:31:18.640
don't have free time. They don't have another life behind the scenes. They put all their energy
link |
03:31:23.040
into that surface, the one we admire, the one we're fascinated by. And they kind of have to
link |
03:31:28.240
make up the stuff behind the scenes to supply it for you. But it's not really there.
link |
03:31:32.720
Well, there are several ways of phrasing this. One of them is authenticity, which is
link |
03:31:38.640
if you become the thing you are on the surface, if the depths mirror the surface,
link |
03:31:45.440
then that's what authenticity is. You're not concealing something. To push back on the idea
link |
03:31:50.880
of actors, they actually have often a manufactured surface that they put on and they try on different
link |
03:31:57.680
masks. And the depths are very different from the surface. And that's actually what makes them very
link |
03:32:03.040
not interesting to interview. If you're an actor who actually lives the role that you play. So
link |
03:32:12.720
like, I don't know, a Clint Eastwood type character who clearly represents the cowboy.
link |
03:32:18.480
Like at least rhymes or echoes the person you play on the surface. That's authenticity.
link |
03:32:24.400
Some people are typecast and they have basically one persona they play in all of their movies and
link |
03:32:28.880
TV shows. And so those people, it probably is the actual persona that they are. Or it has become
link |
03:32:35.040
that over time. Clint Eastwood would be one. I think of Tom Hanks as another, right? They just
link |
03:32:39.600
always play the same person. And you and I are just both surface players. You're the fun, brilliant
link |
03:32:46.160
thinker. And I am the suit wearing idiot full of silly questions. All right. That said,
link |
03:32:59.200
let's put on your wise sage hat and ask you what advice would you give to young people today in
link |
03:33:06.000
high school and college about life, about how to live a successful life in career or just in
link |
03:33:14.320
general that they can be proud of? Most young people, when they actually ask you that question,
link |
03:33:21.360
what they usually mean is, how can I be successful by usual standards? I'm not very good at giving
link |
03:33:27.520
advice about that because that's not how I tried to live my life. So I would more flip it around and
link |
03:33:34.880
say, you live in a rich society. You will have a long life. You have many resources available to you.
link |
03:33:42.000
Whatever career you take, you'll have plenty of time to make progress on something else. Yes,
link |
03:33:49.760
it might be better if you find a way to combine your career and your interests in a way that gives
link |
03:33:54.240
you more time and energy, but there are often big compromises there as well. So if you have a passion
link |
03:34:00.000
about some topic or something that you think is worth pursuing, you can just do it. You don't need
link |
03:34:05.520
other people's approval and you can just start doing whatever it is you think is worth doing.
link |
03:34:12.400
It might take you decades, but decades are enough to make enormous progress on most all
link |
03:34:16.800
interesting things. And don't worry about the commitment of it. I mean, that's a lot of what
link |
03:34:21.280
people worry about is, well, there's so many options and if I choose a thing and I stick with it,
link |
03:34:26.480
I sacrifice all the other paths I could have taken. So I switched my career at the age of 34
link |
03:34:32.240
with two kids, age zero and two, went back to grad school in social science after being a
link |
03:34:36.480
research software engineer. So it's quite possible to change your mind later in life.
link |
03:34:45.120
How can you have an age of zero?
link |
03:34:48.480
Less than one.
link |
03:34:50.880
Okay. Oh, oh, you index from zero. I got it. Okay.
link |
03:34:55.040
Right. Like people also ask what to read and I say textbooks. And until you've read lots of
link |
03:35:01.120
textbooks or maybe review articles, I'm not so sure you should be reading blog posts and Twitter
link |
03:35:08.400
feeds and even podcasts. I would say at the beginning, read textbooks. This is our best,
link |
03:35:14.720
humanity's best summary of what we've learned, crammed into textbooks.
link |
03:35:18.160
Especially the ones on like introduction to biology.
link |
03:35:22.560
Introduction to everything. Just read them all. Read as many textbooks as you
link |
03:35:26.800
can stomach. And then maybe if you want to know more about a subject, find review articles.
link |
03:35:31.120
You don't need to read the latest stuff for most topics.
link |
03:35:33.520
Yeah. And actually textbooks often have the prettiest pictures.
link |
03:35:37.280
There you go. And then depending on the field, if it's technical,
link |
03:35:40.000
then doing the homework problems at the end, it's actually extremely, extremely useful.
link |
03:35:44.800
Extremely powerful way to understand something if you allow it. I actually think of like high
link |
03:35:50.240
school and college, which you kind of remind me of. People don't often think of it that way, but you
link |
03:35:57.600
will almost never again get an opportunity to spend time with a fundamental subject, and like
link |
03:36:06.000
all the basics and everybody's forcing you like everybody wants you to do it.
link |
03:36:11.120
And like you'll never get that chance again to sit there, even though it's outside of your
link |
03:36:15.840
interest, like biology. In high school, I took AP biology, AP chemistry. I'm thinking of subjects
link |
03:36:23.920
I never again really visited seriously. And it was so nice to be forced into anatomy and physiology,
link |
03:36:32.080
to be forced into that world, to stay with it, to look at the pretty pictures,
link |
03:36:36.720
in certain moments to actually, for a moment, enjoy the beauty of these, of like how a cell
link |
03:36:42.960
works and all those kinds of things. And somehow that stays, like the ripples of that fascination
link |
03:36:48.560
that stays with you, even if you never utilize those learnings in your actual work.
link |
03:36:56.880
A common problem, at least of many young people I meet, is that they're
link |
03:37:01.200
feeling idealistic and altruistic, but in a rush. So the usual human tradition that goes
link |
03:37:08.960
back hundreds, thousands of years is that people's productivity rises with time and
link |
03:37:13.600
maybe peaks around the age of 40 or 50. The age of 40, 50 is when you will be having the
link |
03:37:18.800
highest income, you'll have the most contacts, you will sort of be wise about how the world works.
link |
03:37:25.680
Expect to have your biggest impact then. Before then, you can have impacts, but you're also mainly
link |
03:37:31.280
building up your resources and abilities. That's the usual human trajectory, expect that to be
link |
03:37:38.320
true of you too. Don't be in such a rush to like accomplish enormous things at the age of 18 or
link |
03:37:43.040
whatever. I mean, you might as well practice trying to do things, but that's mostly about
link |
03:37:47.600
learning how to do things by practicing. There's a lot of things you can't do unless you just keep
link |
03:37:50.880
trying them. And when all else fails, try to maximize the number of offspring however you
link |
03:37:57.760
can. That's certainly something I've neglected. I would tell my younger version of myself,
link |
03:38:02.640
hey, try to have more descendants. Yes, absolutely. It matters more than I realized at the time.
link |
03:38:11.840
Both in terms of making copies of yourself in mutated form and just the joy of raising them.
link |
03:38:21.120
Sure. I mean, the meaning even. In the literature on the value people get out of life,
link |
03:38:29.520
there's a key distinction between happiness and meaning. So happiness is how do you feel right
link |
03:38:34.160
now about right now. And meaning is how do you feel about your whole life. And many things that
link |
03:38:42.000
produce happiness don't produce meaning as reliably. And if you have to choose between them,
link |
03:38:45.840
you'd rather have meaning. And meaning more goes along with sacrificing happiness sometimes.
link |
03:38:53.920
And children are an example of that. Do you get a lot more meaning out of children,
link |
03:38:58.800
even if they are a lot more work? Why do you think kids, children, are so magical,
link |
03:39:07.360
like raising kids? Because I would love to have kids. And whenever I work with robots,
link |
03:39:15.920
there's some of the same magic when there's an entity that comes to life. And in that case,
link |
03:39:20.720
I'm not trying to draw too many parallels, but there's some echo to it, which is when you program
link |
03:39:28.400
a robot, there's some aspect of your intellect that is now instilled in this other moving being
link |
03:39:35.280
that's kind of magical. So why do you think that's magical? And you said happiness and meaning,
link |
03:39:42.560
as opposed to short-term happiness. Meaningful. Why is it meaningful?
link |
03:39:45.920
It's overdetermined. I can give you several different reasons, all of which is sufficient.
link |
03:39:51.840
And so the question is, we don't know which ones are the correct reasons.
link |
03:39:54.640
Such a technical term, overdetermined. Look it up.
link |
03:39:58.720
So I meet a lot of people interested in the future, interested in thinking about the future,
link |
03:40:02.800
they're thinking about how can I influence the future? But overwhelmingly in history so far,
link |
03:40:08.400
the main way people have influenced the future is by having children.
link |
03:40:11.200
Overwhelmingly. And that's just not an incidental fact. You are built for that. That is,
link |
03:40:19.840
you're the product of a sequence of thousands of generations, each of which successfully had a descendant.
link |
03:40:25.280
And that affected who you are. You just have to expect, and it's true, that who you are is built
link |
03:40:32.000
to expect to have a child, to want to have a child, to have that be a natural and
link |
03:40:38.880
meaningful interaction for you. And it's just true. It's just one of those things you just
link |
03:40:42.800
should have expected. And it's not a surprise. Well, to push back in terms of influencing the
link |
03:40:50.400
future as we get more and more technology, more and more of us are able to influence the future
link |
03:40:56.720
in all kinds of other ways. Right. Being a teacher, educator.
link |
03:40:59.840
Even so, though, still most of our influence on the future has probably happened through having kids,
link |
03:41:05.200
even though we've accumulated more ways, other ways to do it. You mean at scale,
link |
03:41:09.840
I guess the depth of influence, like really how much effort, how much of yourself
link |
03:41:14.880
you really put into another human being. Do you mean both the raising of a kid, or do you mean raw genetic
link |
03:41:22.000
information? Well, both, but raw genetics is probably more than half of it. More than half.
link |
03:41:27.840
More than half. Even in this modern world. Yep. Genetics. Let me ask some dark,
link |
03:41:37.040
difficult questions if I might. Let's take a stroll into that place that may or may not exist
link |
03:41:45.280
according to you. What's the darkest place you've ever gone to in your mind, in your life, a dark
link |
03:41:51.680
time, a challenging time in your life that you had to overcome? You know, probably just feeling
link |
03:42:01.200
strongly rejected. And so I've been, I'm apparently somewhat emotionally scarred by
link |
03:42:07.920
just being very rejection averse, which must have happened because some rejections were just very
link |
03:42:13.360
scarring. At what scale? In what kinds of communities? On the individual scale?
link |
03:42:19.760
I mean, lots of different scales. Yeah. All the different, many different scales,
link |
03:42:24.480
still that rejection stings. Hold on a second, but you're a contrarian thinker. You challenge
link |
03:42:33.040
the consensus. If you were scarred by rejection, why welcome it in so many ways, at a much larger
link |
03:42:43.840
scale, constantly with your ideas. It could be that I'm just stupid or that I've just categorized
link |
03:42:50.640
them differently than I should or something. You know, the most rejection that I've faced
link |
03:42:57.520
hasn't been because of my intellectual ideas. So the intellectual ideas haven't been the thing
link |
03:43:06.080
to risk the rejection. The things that challenge your mind, taking you to a
link |
03:43:14.560
dark place are the more psychological rejections. So you just asked me, what took me to a dark
link |
03:43:21.360
place? You didn't specify it as sort of an intellectual dark place, I guess. Yeah, I just
link |
03:43:25.600
meant like what? So intellectual is disjoint or at least at a more surface level than something
link |
03:43:33.760
emotional. Yeah, I would just think there are times in your life when you're just in a dark
link |
03:43:39.520
place and that can have many different causes. And most intellectuals are still just people
link |
03:43:45.520
and most of the things that will affect them are the kinds of things that affect people.
link |
03:43:49.200
They aren't that different necessarily. I mean, that's going to be true for like I presume most
link |
03:43:53.120
basketball players are still just people. If you ask them what was the worst part of their life,
link |
03:43:56.480
it's going to be this kind of thing that was the worst part of life for most people.
link |
03:44:00.080
So rejection early in life? Yeah, I mean, not in grade school probably, but yeah,
link |
03:44:06.400
sort of being a young, nerdy guy and feeling not in much demand or interest or later on
link |
03:44:17.200
lots of different kinds of rejection. But yeah, most of us like to pretend we don't
link |
03:44:24.560
that much need other people. We don't care what they think. It's a common sort of stance if somebody
link |
03:44:29.040
rejects you or something. I didn't care about them anyway. But I think to be honest, people really
link |
03:44:33.920
do care. Yeah, we do seek that connection, that love. What do you think is the role of love and
link |
03:44:39.440
the human condition? Opacity in part. Love is one of those things where we know at some level
link |
03:44:53.200
it's important to us, but it's not very clearly shown to us exactly how or why or in what ways.
link |
03:45:00.320
There are some kinds of things we want where we can just clearly see that we want and why
link |
03:45:03.440
that we want it, right? We know when we're thirsty and we know why we're thirsty and we know what
link |
03:45:07.120
to do about being thirsty and we know when it's over that we're no longer thirsty. Love isn't
link |
03:45:13.040
like that. Like what do we seek from this? We're drawn to it, but we do not understand why we're
link |
03:45:20.160
drawn exactly because it's not just affection. Because if it was just affection, we don't seem
link |
03:45:25.520
to be drawn to pure affection. We don't seem to be drawn to somebody who's like a servant. We don't
link |
03:45:33.120
seem to be necessarily drawn to somebody that satisfies all your needs or something like that.
link |
03:45:39.120
So it's clearly something we want or need, but we're not exactly very clear about it. And that
link |
03:45:44.240
is kind of important to it. So I've also noticed there are some kinds of things you can't imagine
link |
03:45:49.200
very well. So if you imagine a situation, there are some aspects of the situation that you can
link |
03:45:53.440
clearly imagine it being bright or dim. You can imagine it being windy or imagine being hot or
link |
03:45:58.560
cold. But there are some aspects about your emotional stance in a situation that's actually
link |
03:46:03.840
just hard to imagine or even remember. It's hard to like, you can often remember an emotion only
link |
03:46:08.880
when you're in a similar sort of emotional situation. And otherwise, you just can't bring the emotion
link |
03:46:13.520
to your mind. And you can't even imagine it, right? So there's certain kinds of emotions you
link |
03:46:19.920
can have. And when you're in that emotion, you can know that you have it and you can have a name
link |
03:46:23.120
and it's associated. But later on, I tell you, you know, remember joy and it doesn't come to mind.
link |
03:46:28.880
You're not able to replay it. Right. And it's sort of a reason why we have one of the reasons
link |
03:46:33.840
that pushes us to re-consume it and reproduce it is that we can't reimagine it.
link |
03:46:38.720
Right. Well, there's a, it's interesting because there's a Daniel Kahneman type of thing of like
link |
03:46:44.720
reliving memories because I'm able to summon some aspect of that emotion again by thinking of that
link |
03:46:50.880
situation from which that emotion came. Right. So like a certain song, you can listen to it
link |
03:46:58.000
and you can feel the same way you felt the first time you remember that song associated with
link |
03:47:02.080
certain things. Right. But you need to remember that situation in some sort of complete package.
link |
03:47:05.600
Yes. You can't just take one part of it. And then if you get the whole package again,
link |
03:47:09.440
you remember the whole feeling. Yes. Or some fundamental aspect of that whole experience
link |
03:47:14.720
from which the feeling arose. And actually the feeling is probably different
link |
03:47:19.680
in some way. It could be more pleasant or less pleasant than the feeling you felt originally.
link |
03:47:23.920
And that's more so over time, every time you replay that memory. It is interesting. You're
link |
03:47:28.160
not able to replay the feeling perfectly. You don't remember the feeling. You remember the
link |
03:47:33.200
facts of the events. So there's a sense in which, over time, we expand our vocabulary as a community
link |
03:47:38.160
of language. And that allows us to sort of have more feelings and know that we are feeling them.
link |
03:47:43.440
Because you can have a feeling but not have a word for it. And then you don't know how to categorize
link |
03:47:47.440
it or even what it is and whether it's the same as something else. But once you have a word for it,
link |
03:47:52.160
you can sort of pull it together more easily. And so I think over time, we are having a richer
link |
03:47:57.680
palette of feelings. Because we have more words for them. What has been a painful loss in your
link |
03:48:04.720
life? Maybe somebody or something that's no longer in your life but played an important part of your
link |
03:48:12.400
life? Youth. That's a concept. No, it has to be. But I was once younger. I had health and I had
link |
03:48:19.920
vitality. I was slimmer. I mean, you know, I've lost that over time. Do you see that as a different
link |
03:48:23.920
person? Maybe you've lost that person? Certainly. Yes, absolutely. I'm a different person than
link |
03:48:28.480
I was when I was younger. And I'm not... I don't even remember exactly who he was. So I don't
link |
03:48:34.000
remember as many things from the past as many people do. So in some sense, I've just lost a lot
link |
03:48:38.240
of my history by not remembering it. And I'm not that person anymore. That person's gone.
link |
03:48:43.600
Is that a painful loss? Is it a painful loss, though? Yeah. Or is it a, why is it painful?
link |
03:48:50.080
Because you're wiser. I mean, there's so many things that are beneficial to getting older.
link |
03:48:58.480
Right. But are you just... I just was this person and I felt assured that I could continue to be
link |
03:49:05.600
that person. And you're no longer that person. And he's gone. And I'm not him anymore. And he died
link |
03:49:11.440
without fanfare or a funeral. And that the person you are today talking to me, that person
link |
03:49:17.280
will be changed too. Yes. And maybe in 20 years, he won't be there anymore. And a future person,
link |
03:49:28.480
will look back. For ems, this will be less of a problem. Ems would be able to save
link |
03:49:34.080
an archived copy of themselves at each different age. And they could turn it on periodically
link |
03:49:38.960
and go back and talk to it. Do you think some of that will be... So with emulated minds,
link |
03:49:46.320
with ems, there's a digital cloning that happens. And do you think that makes
link |
03:49:59.680
you less special if you're cloneable? Like, does that change the experience of life,
link |
03:50:10.080
the experience of a moment, the scarcity of that moment, the scarcity of that experience?
link |
03:50:14.880
Isn't that a fundamental part of what makes that experience so delicious, so rich of feeling?
link |
03:50:20.160
I think if you think of a song that lots of people listen to that are copies all over the
link |
03:50:24.560
world, we're going to call that a more special song. Yeah. So there's a perspective on copying
link |
03:50:36.800
and cloning where you're just scaling happiness versus degrading. Each copy of a song is less
link |
03:50:44.240
special if there are many copies, but the song itself is more special if there are many copies.
link |
03:50:48.480
At a mass scale, right, you're actually spreading the happiness, even if it diminishes, over a large
link |
03:50:55.440
number of people at scale and that increases the overall happiness in the world. And then you're
link |
03:50:59.920
able to do that with multiple songs. Is a person who has an identical twin more or less special?
link |
03:51:06.800
Well, the problem with identical twins is, you know, it's just two. With ems...
link |
03:51:16.720
Right, but two is different than one. So I think an identical twin's life is richer for
link |
03:51:22.800
having this other identical twin, somebody who understands them better than anybody else can.
link |
03:51:27.760
From the point of view of an identical twin, I think they have a richer life for being part of
link |
03:51:32.400
this couple, which each of which is very similar. Now, if you said, will the world, you know, if
link |
03:51:37.040
we lose one of the identical twins, will the world miss it as much because you've got the other one
link |
03:51:41.200
and they're pretty similar? Maybe from the rest of the world's point of view, they are,
link |
03:51:44.960
they suffer less of a loss when they lose one of the identical twins. But from the point of view
link |
03:51:49.040
of the identical twin themselves, their life is enriched by having a twin.
link |
03:51:53.600
See, but the identical twin copying happens at the time of birth. That's different than copying
link |
03:52:01.280
after you've had some of the environment, like the nurture, in the teenage years or in the 20s.
link |
03:52:08.000
Yes. That'll be an interesting thing for ems to find out: all the different ways they
link |
03:52:12.000
can have different relationships to different people who have different degrees of similarity
link |
03:52:16.080
to them in time. Yeah. Man. But it seems like a rich space to explore. And I don't feel
link |
03:52:26.320
sorry for them. This sounds like an interesting world to live in. And there could be some ethical
link |
03:52:30.640
conundrums there. There will be many new choices to make that they don't make now. And we
link |
03:52:36.160
discussed that in the book, The Age of Em. Say you have a lover and you make a copy of yourself,
link |
03:52:42.160
but the lover doesn't make a copy. Well, now, which one of you or are both still related to the
link |
03:52:48.240
lover? Socially entitled to show up. Yes. So you'll have to make choices then when you split
link |
03:52:56.800
yourself. Which of you inherits which unique things? Yeah. And of course, there'll be
link |
03:53:05.360
an equivalent increase in lawyers. Well, I guess you can clone the lawyers to help
link |
03:53:11.680
manage some of these negotiations of how to split property. The nature of owning, I mean,
link |
03:53:18.080
property is connected to individuals, right? You only really need lawyers for this with
link |
03:53:24.000
an inefficient, awkward law that is not very transparent and able to do things. So, you know,
link |
03:53:29.040
for example, an operating system of a computer is a law for that computer. When the operating
link |
03:53:34.240
system is simple and clean, you don't need to hire a lawyer to make a key choice with the
link |
03:53:38.560
operating system. You don't need a human in the loop. You just make a choice.
link |
03:53:41.120
There are both fine rules. Yeah. Right. So ideally, we want a legal system that makes
link |
03:53:45.840
the common choices easy and not require much overhead. And the digitization of things
link |
03:53:52.400
further and further enables that. So the loss of a younger self. What about the loss of your life
link |
03:54:00.640
overall? Do you ponder your death, your mortality? Are you afraid of it? I am a cryonics customer.
link |
03:54:06.960
That's what this little tag around my neck says. It says that if you find me in a medical situation,
link |
03:54:12.720
you should call these people to enable the cryonics transfer. So I am taking a long shot chance at
link |
03:54:20.640
living a much longer life. Can you explain what cryonics is? So when medical science gives up
link |
03:54:28.400
on me in this world, instead of burning me or letting worms eat me, they will freeze me,
link |
03:54:35.520
or at least freeze my head. And there's damage that happens in the process of freezing the head.
link |
03:54:40.400
But once it's frozen, it won't change for a very long time. Chemically, it'll just be
link |
03:54:45.280
completely exactly the same. So future technology might be able to revive me. And in fact,
link |
03:54:51.600
I would be mainly counting on the brain emulation scenario, which doesn't require
link |
03:54:56.000
reviving my entire biological body. It means I would be in a computer simulation.
link |
03:55:02.080
And so that's, I think I've got at least a 5% shot at that. And that's immortality.
link |
03:55:07.680
So most likely, it won't happen. And therefore, I'm sad that it won't happen. Do you think immortality
link |
03:55:16.640
is something that you would like to have? Well, I mean, just like infinity, I mean,
link |
03:55:23.360
you can't know until forever, which means never, right? So all you can really do is make
link |
03:55:28.640
the better choice at each moment: do you want to keep going? So I would like at every moment
link |
03:55:33.360
to have the option to keep going. The interesting thing about human experience is that
link |
03:55:43.360
the way you phrase it is exactly right. At every moment, I would like to keep going.
link |
03:55:48.720
But the thing that happens, "always leave them wanting more," or whatever that
link |
03:55:56.160
phrase is, the thing that happens is over time, it's possible for certain experiences to become
link |
03:56:04.560
bland. And you become tired of them. And that actually makes life really unpleasant.
link |
03:56:13.920
Sorry, it makes that experience really unpleasant. And perhaps you can generalize that to life itself
link |
03:56:19.040
if you have a long enough horizon. And so that might happen, but might as well wait and find out.
link |
03:56:24.560
But then you're ending on suffering, you know? So in the world of brain emulations,
link |
03:56:30.640
I have more options. You can renew yourself. That is, I can make copies of myself,
link |
03:56:36.800
archive copies at various ages. And at a later age, I could decide that I'd rather replace
link |
03:56:41.920
myself with a new copy from a younger age. So does a brain emulation still operate in physical
link |
03:56:48.320
space? So can we... what do you think about, like, the metaverse and operating in virtual reality?
link |
03:56:53.360
So we can conjure up not just emulate, not just your own brain and body, but the entirety of
link |
03:57:00.320
the environment? Well, most brain emulations will, in fact, spend most of their time in virtual reality.
link |
03:57:06.000
But they wouldn't think of it as virtual reality, they would just think of it as their usual reality.
link |
03:57:11.200
I mean, the thing to notice, I think in our world, most of us spend most time indoors.
link |
03:57:15.600
And indoors, we are surrounded by walls covered with paint and floors covered with tile or rugs.
link |
03:57:23.600
Most of our environment is artificial. It's constructed to be convenient for us. It's
link |
03:57:28.720
not the natural world that was there before. A virtual reality is basically just like that.
link |
03:57:33.840
It is the environment that's comfortable and convenient for you. But when it's the right
link |
03:57:39.600
environment for you, it's real for you. Just like the room you're in right now,
link |
03:57:43.200
most likely is very real for you. You're not focused on the fact that the paint is hiding
link |
03:57:47.920
the actual studs behind the wall and the actual wires and pipes and everything else.
link |
03:57:52.880
The fact that we're hiding that from you doesn't make it fake or unreal.
link |
03:57:58.400
What are the chances that we're actually in the very kind of system that you're describing where
link |
03:58:04.640
the environment and the brain is being emulated and you're just replaying an experience when you
link |
03:58:09.760
first did a podcast with Lex. And now the person that originally launched this already
link |
03:58:18.080
did hundreds of podcasts with Lex. This is just the first time and you like this time
link |
03:58:22.640
because there's so much uncertainty. There's nerves. It could have gone any direction.
link |
03:58:28.320
At the moment, we don't have the technical ability to create that emulation. So we'd have to be
link |
03:58:33.360
postulating that in the future we have that ability, and then they choose to recreate this
link |
03:58:38.560
moment now to simulate it. Don't you think we could be in the simulation of that exact experience
link |
03:58:44.880
right now? We wouldn't be able to know. So one scenario would be this never really happened.
link |
03:58:51.040
This only happens as a reconstruction later on. That's different than the scenario where this did happen
link |
03:58:57.040
the first time and now it's happening again as a reconstruction. That second scenario is harder
link |
03:59:02.640
to put together because it requires this coincidence where between the two times we
link |
03:59:07.680
produce the ability to do it. No, but don't you think replay of memories,
link |
03:59:13.760
pure replay of memories, is something that might be a possible thing in the future?
link |
03:59:19.600
You're saying that's harder than conjuring up things from scratch.
link |
03:59:23.600
It's certainly possible. So the main way I would think about it is in terms of the demand
link |
03:59:28.640
for simulation versus other kinds of things. So I've given this a lot of thought because
link |
03:59:33.120
I first wrote about this long ago when Bostrom first wrote his papers about simulation argument
link |
03:59:38.720
and I wrote about how to live in a simulation. So the key issue is the fraction of creatures in
link |
03:59:48.000
the universe that are really experiencing what you appear to be really experiencing relative
link |
03:59:52.880
to the fraction that are experiencing it in a simulation way, i.e. simulated. So then the key
link |
04:00:01.040
parameter is at any one moment in time, creatures at that time, many of them, most of them are
link |
04:00:07.360
presumably really experiencing what they're experiencing, but some fraction of them are
link |
04:00:11.600
experiencing some past time where that past time is being remembered via their simulation.
link |
04:00:19.680
So to figure out this ratio, what we need to think about is basically two functions. One is
link |
04:00:26.720
how fast in time does the number of creatures grow? And then how fast in time does the interest
link |
04:00:32.640
in the past decline? Because at any one time, people will be simulating different periods in
link |
04:00:38.960
the past with different emphasis. I love the way you think so much. That's exactly right, yeah.
link |
04:00:44.160
So if the first function grows slower than the second one declines, then in fact,
link |
04:00:51.360
your chances of being simulated are low. So the key question is how fast does interest in the
link |
04:00:58.080
past decline relative to the rate at which the population grows with time? Does this correlate
link |
04:01:02.720
to what you earlier suggested, that the interest in the future increases over time? Are those correlated,
link |
04:01:08.720
interest in the future versus interest in the past? Why are we interested in the past?
link |
04:01:13.360
But the simple way to do it is, as you know, like Google Ngrams has a way to type in a word
link |
04:01:18.160
and see how interest in it declines or rises over time. You can just type in a year and get
link |
04:01:23.520
the answer for that. If you type in a particular year like 1900 or 1950, you can see with Google
link |
04:01:30.480
Ngram how interest in that year increased up until that date and decreased after it.
link |
04:01:36.080
And you can see that interest in a date declines faster than does the population grow with time.
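This two-function argument can be sketched numerically. The parameterization below is an illustrative assumption, not Hanson's: population grows like e^(g t), and the share of resources spent simulating a past era tau decays like c e^(-d (t - tau)); the Ngram observation is the empirical claim that d exceeds g.

```python
import numpy as np
from scipy.integrate import quad

g = 0.01   # assumed population growth rate
d = 0.05   # assumed decay rate of interest in a given past year
c = 0.001  # assumed share of resources spent on historical simulation

tau = 0.0  # the era you appear to be living in; real population N(tau) = 1

# Simulated experiences of era tau, summed over all later times t > tau:
sim, _ = quad(lambda t: c * np.exp(g * t) * np.exp(-d * (t - tau)), tau, np.inf)

# For d > g this has the closed form c / (d - g).
print(f"simulated-to-real ratio: {sim:.4f} (closed form {c / (d - g):.4f})")
print(f"P(this era is a simulation) ~ {sim / (1 + sim):.4f}")
```

As long as interest in a date decays faster than population grows (d > g), the probability stays modest, which is the conclusion drawn in the conversation.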
link |
04:01:42.240
That is brilliant. And so interesting. You have the answer. Wow. And that was your argument
link |
04:01:51.920
against, not against, to this particular aspect of the simulation, how much past simulation there
link |
04:01:59.520
will be replay of past memories. First of all, if we assume that like simulation of the past is
link |
04:02:04.560
a small fraction of all the creatures at that moment. Yes. Right. And then it's about how fast.
link |
04:02:10.400
Now, some people have argued plausibly that maybe most interest in the past falls with
link |
04:02:15.680
this fast function, but some unusual category of interest in the past won't fall that quickly,
link |
04:02:20.240
and then that eventually would dominate. So that's another hypothesis, if you want.
link |
04:02:24.240
Some category. So that very outlier specific kind of, yeah, okay. Yeah, yeah, yeah. Like
link |
04:02:30.000
really popular kinds of memories, but like probably sexual. In a trillion years, there's
link |
04:02:36.480
some small research institute that tries to randomly select from all possible people in
link |
04:02:41.360
history or something to simulate. Yeah. Yeah. Yeah. Some questions, how big is this research
link |
04:02:48.320
institute and how big is the future in a trillion years, right? And that's, that would be hard to
link |
04:02:52.480
say. But if we just look at the ordinary process by which people simulate recent history. So if you
link |
04:02:58.000
look at, it's also true for movies and plays and video games, overwhelmingly, they're interested
link |
04:03:03.280
in the recent past. There are very few video games where you play someone in the Roman Empire.
link |
04:03:08.000
Right. Even fewer where you play someone in the Egyptian Empire.
link |
04:03:14.240
Yeah, just different. It just declines very quickly. But every once in a while,
link |
04:03:17.840
that's brought back. But yeah, you're right. I mean, just if you look at the mass of entertainment
link |
04:03:25.120
movies and games, it's focusing on the present recent past. And maybe some, I mean,
link |
04:03:30.400
where does science fiction fit into this? Because it's sort of a, what is science fiction? I mean,
link |
04:03:39.200
it's a mix of the past and the present and some kind of manipulation of that to make it more
link |
04:03:44.480
efficient for us to ask deep philosophical questions about humanity. So the closest genre
link |
04:03:49.600
to science fiction is clearly fantasy, fantasy and science fiction in many bookstores and even
link |
04:03:53.840
Netflix or whatever categories, they're just lumped together. So clearly they have a similar
link |
04:03:57.840
function. So the function of fantasy is more transparent than the function of science fiction,
link |
04:04:02.800
so use that as your guide. What's fantasy for is just to take away the constraints of the
link |
04:04:08.240
ordinary world and imagine stories with much fewer constraints. But that's what fantasy is.
link |
04:04:12.400
You're much less constrained. What's the purpose to remove constraints? Is it to escape
link |
04:04:17.120
from the harshness of the constraints of the real world? Or is it to just remove constraints in
link |
04:04:22.000
order to explore some, get a deeper understanding of our world? What is it? I mean, why do people
link |
04:04:28.240
read fantasy? I'm not a cheap fantasy reading kind of person. So I need to...
link |
04:04:36.320
One story that sounds plausible to me is that there are sort of these deep story structures
link |
04:04:40.720
that we love and we want to realize. And then many details of the world get in their way.
link |
04:04:46.560
Fantasy takes all those obstacles out of the way and lets you tell the essential hero story or
link |
04:04:51.280
the essential love story, whatever essential story you want to tell. The reality and constraints
link |
04:04:56.320
are not in the way. And so science fiction can be thought of as like fantasy, except you're not
link |
04:05:02.000
willing to admit that it can't be true. So the future gives the excuse of saying, well, it could
link |
04:05:06.720
happen. And you accept some more reality constraints for the illusion, at least, that maybe it could
link |
04:05:13.920
really happen. Maybe it could happen. And that it stimulates the imagination. The imagination is
link |
04:05:21.760
something really interesting about human beings. And it seems also to be an important part of
link |
04:05:27.040
creating really special things is to be able to first imagine them. With you and Nick Bostrom,
link |
04:05:32.800
where do you land on the simulation and all the mathematical ways of thinking it and just
link |
04:05:39.360
the thought experiment of it? Are we living in a simulation? Well, that was the discussion we
link |
04:05:46.160
just had. That is, you should grant the possibility of being a simulation. You shouldn't be 100%
link |
04:05:51.040
confident that you're not. You should certainly grant a small probability. The question is,
link |
04:05:54.560
how large is that probability? Maybe I misunderstood, because I thought our
link |
04:06:00.560
discussion was about replaying things that already happened? Right. But the whole question is,
link |
04:06:04.480
right now, is that what I am? Am I actually a replay from some distant future?
link |
04:06:11.760
But it doesn't necessarily need to be a replay. It could be totally new. You don't have to be
link |
04:06:16.560
an NPC. Right. But clearly, I'm in a certain era with a certain kind of world around me, right? So
link |
04:06:21.040
either this is a complete fantasy or it's a past of somebody else in the future.
link |
04:06:25.600
But no, it could be a complete fantasy, though. It could be, right. But then you might,
link |
04:06:29.520
then you have to talk about what's the fraction of complete fantasies, right?
link |
04:06:32.400
I would say it's easier to generate a fantasy than to replay a memory, right?
link |
04:06:37.120
Sure. But if we just look at the entire history, if we just look at the entire history of everything,
link |
04:06:41.680
we just say, sure, but most things are real. Most things aren't fantasies, right? Therefore,
link |
04:06:45.600
the chance that my thing is real, right? So the simulation argument works stronger about sort of
link |
04:06:50.160
the past. We say, ah, but there's more future people than there are today. So you being in
link |
04:06:55.040
the past of the future makes you special relative to them, which makes you more likely to be in a
link |
04:06:59.280
simulation, right? If we're just taking the full count and saying, in all creatures ever,
link |
04:07:03.600
what percentage are in simulations? Probably no more than 10%. So what's the good argument for
link |
04:07:09.520
that? That most things are real? Yeah. Because, as Bostrom says, the other way, right?
link |
04:07:14.960
In a competitive world, in a world where people have to work and have to get things done,
link |
04:07:20.800
then they have a limited budget for leisure. And so leisure things are less common than work
link |
04:07:28.880
things, like real things, right? But if you look at the stretch of history in the universe,
link |
04:07:37.760
doesn't the ratio of leisure increase? Isn't that the forager trend?
link |
04:07:45.440
Right. But now we're looking at the fraction of leisure, which takes the form of something
link |
04:07:48.960
where the person doing the leisure doesn't realize it. And there could be some fraction,
link |
04:07:53.040
but that's much smaller, right? Yeah. Clueless foragers. Or somebody is clueless in the process
link |
04:08:00.720
of supporting this leisure, right? It might not be the person leisuring; somebody,
link |
04:08:04.320
they're a supporting character or something, but still that's got to be a pretty small fraction
link |
04:08:07.360
of leisure. Well, you mentioned that children are one of the things that are a source of meaning.
link |
04:08:13.760
Broadly speaking, then let me ask the big question. What's the meaning of this whole thing?
link |
04:08:17.920
The Robin meaning of life. What is the meaning of life? We talked about alien civilizations,
link |
04:08:26.080
but this is the one we got. Where are the aliens? Why are the humans here? We
link |
04:08:30.960
seem to be conscious, able to introspect. Why are we here?
link |
04:08:37.280
This is the thing I told you before about how we can predict that future creatures will be
link |
04:08:41.680
different from us. We, our preferences are this amalgam of various sorts of random sort of patched
link |
04:08:49.920
together preferences about thirst and sex and sleep and attention and all these sorts of things.
link |
04:08:57.360
So we don't understand it very well. It's not very transparent and it's a mess, right?
link |
04:09:03.360
That is the source of our motivation. That is how we were made and how we are induced to do things.
link |
04:09:10.160
But we can't summarize it very well and we don't even understand it very well.
link |
04:09:14.320
That's who we are. And often we find ourselves in a situation where we don't feel very motivated.
link |
04:09:18.640
We don't know why. In other situations, we find ourselves very motivated and we don't know why either.
link |
04:09:25.040
And so that's the nature of being a human of the sort that we are because even though we can
link |
04:09:30.400
think abstractly and reason abstractly, this package of motivations is just opaque and a mess.
link |
04:09:35.120
And that's what it means to be a human today: with motivations like that, we can't very well tell the meaning
link |
04:09:40.720
of our life. It is this mess. Our descendants will be different. They will actually know exactly
link |
04:09:46.000
what they want and it will be to have more descendants. That will be the meaning for them.
link |
04:09:51.520
Well, it's funny that you have more certainty, more transparency
link |
04:09:56.640
about our descendants than you do about your own self. So it's really interesting to think,
link |
04:10:05.760
because you mentioned this about love, that something that's fundamental about love is
link |
04:10:12.160
this opaqueness that we're not able to really introspect what the heck it is or all the feelings,
link |
04:10:18.480
the complex feelings involved with it. And that's true about many of our motivations.
link |
04:10:21.440
And that's what it means to be human of the 20th and the 21st century variety.
link |
04:10:29.680
Why is that not a feature that we will choose to preserve in civilization, then?
link |
04:10:36.960
This opaqueness, put another way, is maintaining a certain mystery about ourselves and about those
link |
04:10:42.960
around us. Maybe that's a really nice thing to have. Maybe. But this is the fundamental
link |
04:10:49.760
issue in analyzing the future. What will set the future? One theory about what will set the future
link |
04:10:55.440
is, what do we want the future to be? So under that theory, we should sit and talk about what
link |
04:11:00.480
we want the future to be, have some conferences, some conventions, some discussions,
link |
04:11:04.800
vote on it maybe, and then hand it off to the implementation people to make the future
link |
04:11:09.360
the way we've decided it should be. That's not the actual process that's changed the world
link |
04:11:15.040
over history up to this point. It has not been the result of us deciding what we want and making
link |
04:11:20.080
it happen. In our individual lives, we can do that and we might decide what career we want or
link |
04:11:24.800
where we want to live, who we want to live with. In our individual lives, we often do slowly make
link |
04:11:29.680
our lives better according to our plans, but that's not the whole world.
link |
04:11:34.400
The whole world so far has mostly been a competitive world where things happen if
link |
04:11:38.880
anybody anywhere chooses to adopt them and they have an advantage. And then it spreads and other
link |
04:11:43.200
people are forced to adopt it by competitive pressures. So that's the kind of analysis I
link |
04:11:47.920
can use to predict the future. And I do use that to predict the future. It doesn't tell us
link |
04:11:51.440
it'll be a future we like, it just tells us what it'll be. And it'll be one where we're trying to
link |
04:11:56.080
maximize the number of our descendants. And we know that abstractly and directly, and it's not opaque.
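The competitive dynamic described here can be made concrete with a toy selection model: any behavior that confers even a small growth advantage, adopted anywhere, eventually dominates regardless of what anyone votes for. A minimal Python sketch, with growth rates that are pure illustrative assumptions:

    # Toy model of the selection dynamic: a behavior with a growth
    # advantage, adopted by a tiny group, eventually dominates.
    # All numbers are illustrative assumptions.

    incumbent = 1_000_000.0              # population with the status-quo behavior
    adopter = 1.0                        # tiny group adopting the advantageous behavior
    g_incumbent, g_adopter = 1.01, 1.02  # assumed per-period growth factors

    for _ in range(2000):                # let competition run
        incumbent *= g_incumbent
        adopter *= g_adopter

    share = adopter / (adopter + incumbent)
    print(f"adopter share after 2000 periods: {share:.1%}")  # ~99.7%

A one-percentage-point advantage compounds to near-total dominance; that compounding, not collective deliberation, is what this style of analysis uses to predict the future.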
link |
04:12:01.600
With some probability that's nonzero, that will lead us to become grabby, expanding
link |
04:12:09.360
aggressively out into the cosmos until we meet other aliens.
link |
04:12:13.680
The timing isn't clear. We might become grabby before or after that change in motivations. Becoming
link |
04:12:18.240
grabby and that change are both results of competition, but it's less clear which happens
link |
04:12:22.080
first. Does this future excite you or scare you? How do you feel about this whole thing?
link |
04:12:27.920
Well, again, I told you: compared to a sort of dead cosmology, at least it's energizing,
link |
04:12:32.720
having a living story with real actors and characters and agendas, right?
link |
04:12:36.160
Right. Yeah. And that's one hell of a fun universe to live in.
link |
04:12:40.720
Robin, you're one of the most fascinating, fun people to talk to, brilliant,
link |
04:12:45.760
humble, systematic in your analysis. Hold on to my wallet here. What's he looking for?
link |
04:12:50.800
I already stole your wallet long ago. I really, really appreciate you spending your valuable
link |
04:12:54.960
time with me. I hope we get a chance to talk many more times in the future.
link |
04:12:59.520
Thank you so much for sitting down. Thank you.
link |
04:13:01.680
Thanks for listening to this conversation with Robin Hanson. To support this podcast,
link |
04:13:07.280
please check out our sponsors in the description. And now let me leave you with some words from
link |
04:13:12.160
Ray Bradbury. We are an impossibility in an impossible universe. Thank you for listening
link |
04:13:19.680
and hope to see you next time.