
Robin Hanson: Alien Civilizations, UFOs, and the Future of Humanity | Lex Fridman Podcast #292



link |
00:00:00.000
we can actually figure out where are the aliens out there in space time by being clever about the
link |
00:00:04.160
few things we can see, one of which is our current date. And so now that you have this living
link |
00:00:09.040
cosmology, we can tell the story that the universe starts out empty. And then at some point, things
link |
00:00:14.640
like us appear very primitive, and then some of those stop being quiet and expand. And then for
link |
00:00:20.880
a few billion years, they expand, and then they meet each other. And then for the next hundred
link |
00:00:25.120
billion years, they commune with each other. That is, the usual models of cosmology say that in
link |
00:00:30.800
roughly 150 billion years, the expansion of the universe will happen so much that all you'll have
link |
00:00:37.520
left is some galaxy clusters that are sort of disconnected from each other. But before then,
link |
00:00:43.200
they will interact. There will be this community of all the grabby alien civilizations, and each
link |
00:00:48.160
one of them will hear about and even meet thousands of others. And we might hope to join
link |
00:00:54.480
them someday and become part of that community. The following is a conversation with Robin Hanson,
link |
00:01:01.520
an economist at George Mason University, and one of the most fascinating, wild, fearless,
link |
00:01:06.640
and fun minds I've ever gotten the chance to accompany for a time in exploring questions
link |
00:01:11.520
of human nature, human civilization, and alien life out there in our impossibly big universe.
link |
00:01:19.200
He is the coauthor of a book titled The Elephant in the Brain: Hidden Motives in Everyday Life,
link |
00:01:25.200
The Age of Em: Work, Love, and Life When Robots Rule the Earth, and a fascinating recent paper
link |
00:01:31.920
I recommend on quote, Grabby Aliens, titled If Loud Aliens Explain Human Earliness,
link |
00:01:39.200
Quiet Aliens Are Also Rare. This is the Lex Fridman podcast. To support it, please check
link |
00:01:45.600
out our sponsors in the description. And now, dear friends, here's Robin Hanson.
link |
00:01:52.320
You are working on a book about quote, grabby aliens. This is a technical term, like the Big
link |
00:01:58.400
Bang. So what are grabby aliens? Grabby aliens expand fast into the universe and they change
link |
00:02:07.680
stuff. That's the key concept. So if they were out there, we would notice. That's the key idea. So
link |
00:02:16.400
the question is, where are the grabby aliens? So Fermi's question is, where are the aliens? And we
link |
00:02:22.240
could vary that in two terms, right? Where are the quiet, hard to see aliens? And where are the
link |
00:02:27.520
big, loud, grabby aliens? So it's actually hard to say where all the quiet ones are, right?
link |
00:02:33.840
There could be a lot of them out there because they're not doing much. They're not making a big
link |
00:02:38.720
difference in the world. But the grabby aliens, by definition, are the ones you would see.
link |
00:02:43.920
We don't know exactly what they do where they've gone, but the idea is they're in some sort
link |
00:02:48.720
of competitive world where each part of them is trying to grab more stuff and do something with
link |
00:02:55.280
it. And almost surely, whatever is the most competitive thing to do with all the stuff they
link |
00:03:02.240
grab isn't to leave it alone the way it started, right? So we humans, when we go around the Earth
link |
00:03:08.480
and use stuff, we change it. We would turn a forest into a farmland, turn a harbor into a city.
link |
00:03:14.720
So the idea is aliens would do something with it. And so we're not exactly sure what it would look
link |
00:03:20.160
like, but it would look different. So somewhere in the sky, we would see big spheres of different
link |
00:03:25.280
activity where things had been changed because they had been there. Expanding spheres. Right.
link |
00:03:30.720
So as you expand, you aggressively interact and change the environment.
link |
00:03:34.480
So the word grabby versus loud, you're using them sometimes synonymously, sometimes not.
link |
00:03:40.560
Grabby to me is a little bit more aggressive. What does it mean to be loud? What does it mean
link |
00:03:48.000
to be grabby? What's the difference? And loud in what way? Is it visual? Is it sound? Is it some
link |
00:03:53.760
other physical phenomena like gravitational waves? Are you using this kind of in a broad
link |
00:03:59.840
philosophical sense or there's a specific thing that it means to be loud in this universe of ours?
link |
00:04:07.280
My coauthors and I put together a paper with a particular mathematical model. And so we use the
link |
00:04:14.160
term grabby aliens to describe that more particular model. And the idea is it's a
link |
00:04:18.800
more particular model of the general concept of loud. So loud would just be the general idea that
link |
00:04:23.920
they would be really obvious. So grabby is the technical term,
link |
00:04:27.360
is it in the title of the paper? It's in the body. The title is actually about loud and quiet.
link |
00:04:33.040
Right. So the idea is you want to distinguish your particular model of things from the general
link |
00:04:38.000
category of things everybody else might talk about. So that's how we distinguish.
link |
00:04:41.280
The paper title is If Loud Aliens Explain Human Earliness, Quiet Aliens Are Also Rare.
link |
00:04:48.080
If life on earth, God, this is such a good abstract. If life on earth had to achieve
link |
00:04:52.640
N hard steps to reach humanity's level, then the chance of this event rose as time to the nth
link |
00:05:00.240
power. So we'll talk about power, we'll talk about linear increase. So what is the technical definition
link |
00:05:06.800
of grabby? How do you envision grabbiness? And, in contrast with humans, why aren't humans
link |
00:05:17.520
grabby? So like, where's that line? Is it well definable? What is grabby and what is not grabby?
link |
00:05:23.680
We have a mathematical model of the distribution of advanced civilizations, i.e. aliens in space
link |
00:05:29.920
and time. That model has three parameters. And we can set each one of those parameters from data.
link |
00:05:37.200
And therefore, we claim this is actually what we know about where they are in space time.
link |
00:05:42.080
So the key idea is they appear at some point in space time. And then after some short delay,
link |
00:05:48.880
they start expanding. And they expand at some speed. And the speed is one of those parameters.
link |
00:05:54.480
That's one of the three. And the other two parameters are about how they appear in time.
link |
00:05:59.280
That is they appear at random places. And they appear in time according to a power law.
link |
00:06:04.880
And that power law has two parameters. And we can fit each of those parameters to data. And so then
link |
00:06:09.840
we can say, now we know, we know the distribution of advanced civilizations in space and time. So
link |
00:06:16.240
we are right now a new civilization, and we have not yet started to expand. But plausibly,
link |
00:06:21.840
we would start to do that within say, 10 million years of the current moment. That's plenty of time.
link |
00:06:27.200
And 10 million years is a really short duration in the history of the universe. So we are at the
link |
00:06:33.120
moment, a sort of random sample of the kind of times at which an advanced civilization might
link |
00:06:37.920
appear. Because we may or may not become grabby. But if we do, we'll do it soon. And so our current
link |
00:06:42.880
date is a sample. And that gives us one of the other parameters. The second parameter is the
link |
00:06:47.760
constant in front of the power law. And that's derived from our current date.
link |
00:06:51.920
So power law, what is the N in the power law?
link |
00:06:58.400
That's the more complicated thing to explain.
link |
00:07:00.720
Right. Advanced life appeared by going through a sequence of hard steps. So starting with very
link |
00:07:08.400
simple life, and here we are at the end of this process at pretty advanced life. And so we had
link |
00:07:12.800
to go through some intermediate steps such as sexual selection, photosynthesis, multicellular
link |
00:07:19.520
animals. And the idea is that each of those steps was hard. Evolution just took a long time searching
link |
00:07:26.480
in a big space of possibilities to find each of those steps. And the challenge was to achieve
link |
00:07:32.800
all of those steps by a deadline of when the planet would no longer host simple life. And so
link |
00:07:40.080
Earth has been really lucky compared to all the other billions of planets out there,
link |
00:07:44.800
in that we managed to achieve all these steps in the short time of the five billion years that
link |
00:07:51.200
Earth can support simple life. So not all steps, but a lot of them, because we don't know how many
link |
00:07:56.560
steps there are before you start the expansion. So these are all the steps from the birth of life
link |
00:08:02.160
to the initiation of major expansion. Right. So we're pretty sure that it would happen really
link |
00:08:07.920
soon, so it couldn't be the same sort of hard step as the others, in terms of taking
link |
00:08:12.720
a long time. So when we look at the history of Earth, we look at the durations of the major
link |
00:08:18.720
things that have happened. That suggests that there's roughly say six hard steps that happened,
link |
00:08:25.280
say between three and 12, and that we have just achieved the last one that would take a long time.
link |
00:08:32.240
Which is?
link |
00:08:34.000
We don't know. But whatever it is, we've just achieved the last one.
link |
00:08:38.480
We're talking about humans or aliens here. So let's talk about some of these steps. So
link |
00:08:42.960
Earth is really special in some way. We don't exactly know the level of specialness. We don't
link |
00:08:47.920
really know which steps were the hardest or not because we just have a sample of one. But you're
link |
00:08:53.600
saying that there's three to 12 steps that we have to go through to get to where we are that are hard
link |
00:08:58.800
steps, hard to find by something that took a long time and is unlikely. There's a lot of ways to fail.
link |
00:09:07.680
There's a lot more ways to fail than to succeed. The first step would be sort of the very simplest
link |
00:09:13.040
form of life of any sort. And then we don't know whether that first sort is the first sort that we
link |
00:09:20.400
see in the historical record or not. But then some other steps are, say, the development of
link |
00:09:24.560
photosynthesis, the development of sexual reproduction. There's the development of
link |
00:09:30.800
eukaryotic cells, which are a certain kind of complicated cell that seems to have only
link |
00:09:35.120
appeared once. And then there's multicellularity, that is multiple cells coming together to large
link |
00:09:40.960
organisms like us. And in this statistical model of trying to fit all these steps into a finite
link |
00:09:48.240
window, the model actually predicts that these steps could be of varying difficulties. That is,
link |
00:09:53.360
they could each take different amounts of time on average. But if you're lucky enough that they all
link |
00:09:58.400
appear in a very short time, then the durations between them will be roughly equal. And the time
link |
00:10:04.960
remaining leftover in the rest of the window will also be the same length. So we at the moment have
link |
00:10:10.080
roughly a billion years left on Earth until complex life like us would no longer be possible.
link |
00:10:16.400
Life appeared roughly 400 million years after the very first time when life was possible at the very
link |
00:10:21.120
beginning. So those two numbers right there give you the rough estimate of six hard steps.
link |
00:10:26.880
Just to build up an intuition here. So we're trying to create a simple mathematical model
link |
00:10:31.760
of how life emerges and expands in the universe. And there's a section in this paper, how many
link |
00:10:39.120
hard steps? Question mark. Right. The two most plausibly diagnostic Earth durations seem to be
link |
00:10:45.200
the one remaining after now before Earth becomes uninhabitable for complex life. So you estimate
link |
00:10:50.960
how long Earth lasts, how many hard steps. There's windows for doing different hard steps,
link |
00:10:59.600
and you can, sort of like queueing theory, mathematically estimate the solution,
link |
00:11:09.760
or the passing of the hard steps or the taking of the hard steps. Sort of like coldly mathematical
link |
00:11:15.920
look. If life, pre expansionary life, requires n number of steps, what is the probability of taking
link |
00:11:25.280
those steps on an Earth that lasts a billion years or two billion years or five billion years
link |
00:11:30.000
or 10 billion years? And you say solving for E using the observed durations of 1.1 and 0.4
link |
00:11:38.400
then gives E values of 3.9 and 12.5, range 5.7 to 26, suggesting a middle estimate of at least six.
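The arithmetic behind those figures can be sketched in a few lines. Under the standard hard-steps assumption, if n equally hard steps must all fit inside a habitable window W, then, conditional on success, each gap between steps and the leftover time average roughly W / (n + 1). The window value W ≈ 5.4 billion years below is an assumption chosen for this sketch, not a number stated in the conversation:

```python
# Hedged sketch of the hard-steps arithmetic: if n equally hard steps must all
# fit inside a habitable window W, then (conditional on success) each gap
# between steps, and the leftover time, average about W / (n + 1).
def estimate_n(window_gyr, observed_duration_gyr):
    """Solve window / (n + 1) = observed_duration for n."""
    return window_gyr / observed_duration_gyr - 1

W = 5.4              # assumed habitable window for Earth, in billions of years
remaining = 1.1      # duration left before Earth becomes uninhabitable (Gyr)
to_first_life = 0.4  # duration from habitability until first life (Gyr)

print(round(estimate_n(W, remaining), 1))      # ~3.9
print(round(estimate_n(W, to_first_life), 1))  # ~12.5
```

The shorter observed duration implies more steps crammed into the window, which is why the 0.4 Gyr figure gives the larger estimate.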
link |
00:11:46.800
That's where you said six hard steps. Right. Just to get to where we are. Right. We started at the
link |
00:11:54.000
bottom. Now we're here. That took six steps on average. The key point is on average, these things
link |
00:12:00.880
on any one random planet would take trillions of trillions of years, just a really long time.
link |
00:12:07.680
And so we're really lucky that they all happened really fast in a short time before our window
link |
00:12:12.640
closed. And the chance of that happening in that short window goes as that time period to the power
link |
00:12:19.520
of the number of steps. And so that was where the power we talked about before came from. And so
link |
00:12:25.200
that means in the history of the universe, we should overall roughly expect advanced life to
link |
00:12:30.080
appear as a power law in time. So that very early on, there was very little chance of anything
link |
00:12:36.080
appearing. And then later on as things appear, other things are appearing somewhat closer to
link |
00:12:40.480
them in time because they're all going as this power law. What is the power law? Can we, for
link |
00:12:46.240
people who are not math inclined, can you describe what a power law is? So say the function X is
link |
00:12:52.240
linear and X squared is quadratic. So it's the power of two. If we make X to the three, that's
link |
00:12:59.440
cubic or the power of three. And so X to the sixth is the power of six. And so we'd say
link |
00:13:06.800
life appears in the universe on a planet like Earth in that proportion to the time that it's
link |
00:13:12.320
been ready for life to appear. And that over the universe in general, it'll appear at roughly a
link |
00:13:22.720
power law like that. What is the X, what is N? Is it the number of hard steps?
link |
00:13:27.600
Yes, the number of hard steps. So that's the idea.
link |
00:13:30.160
It's like if you're gambling and you're doubling up every time, this is the probability you just
link |
00:13:35.760
keep winning. So it gets very unlikely very quickly. And so we're the result of this unlikely
link |
00:13:45.040
chain of successes. It's actually a lot like cancer. So the dominant model of cancer in an
link |
00:13:50.400
organism like each of us is that we have all these cells and in order to become cancerous,
link |
00:13:55.520
a single cell has to go through a number of mutations and these very unlikely mutations.
link |
00:14:00.640
And so any one cell is very unlikely to have all these mutations happen by the time
link |
00:14:05.280
your lifespan's over. But we have enough cells in our body that the chance of any one cell
link |
00:14:10.720
producing cancer by the end of your life is actually pretty high, more like 40%.
link |
00:14:15.200
And so the chance of cancer appearing in your lifetime also goes as a power law,
link |
00:14:19.440
this power of the number of mutations that's required for any one cell in your body to become
link |
00:14:24.000
cancerous.
link |
00:14:24.480
The longer you live, the more likely you are to have cancer cells.
link |
00:14:28.800
And the power is also roughly six. That is the chance of you getting cancer is
link |
00:14:34.160
roughly the sixth power of the time since you were born.
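As a quick illustration of what a sixth-power risk curve implies (the power of six is from the conversation; the specific ages are hypothetical):

```python
# If cumulative cancer risk grows roughly as t**6, then doubling the elapsed
# time multiplies the risk by 2**6 = 64. The ages here are purely illustrative.
def risk_ratio(age_a, age_b, power=6):
    """Ratio of cumulative risks at age_b vs age_a under risk ~ t**power."""
    return (age_b / age_a) ** power

print(risk_ratio(40, 80))  # 64.0: doubling the age multiplies risk 64-fold
```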
link |
00:14:37.680
It is perhaps not lost on people that you're comparing power laws of the survival or the
link |
00:14:45.840
arrival of the human species to cancerous cells.
link |
00:14:50.160
The same mathematical model, but of course, we might have a different value assumption
link |
00:14:55.040
about the two outcomes. But of course, from the point of view of cancer, it's more similar.
link |
00:15:00.720
From the point of view of cancer, it's a win-win. We both get to thrive, I suppose.
link |
00:15:09.120
It is interesting to take the point of view of all kinds of life forms on earth,
link |
00:15:13.200
of viruses, of bacteria. They have a very different view.
link |
00:15:18.160
It's like the Instagram channel, Nature is Metal.
link |
00:15:22.480
The ethic under which nature operates doesn't often coincide, correlate with human
link |
00:15:29.600
morals. It seems cold and machine like in the selection process that it performs.
link |
00:15:38.560
I am an analyst, I'm a scholar, an intellectual, and I feel I should carefully distinguish
link |
00:15:44.960
predicting what's likely to happen and then evaluating or judging what I think would be
link |
00:15:50.800
better to happen. And it's a little dangerous to mix those up too closely, because
link |
00:15:56.000
then we can have wishful thinking. And so I try typically to just analyze what seems likely to
link |
00:16:01.920
happen regardless of whether I like it or that we do anything about it. And then once you see
link |
00:16:07.360
a rough picture of what's likely to happen if we do nothing, then we can ask, well, what might we
link |
00:16:12.240
prefer? And ask, where could the levers be to move it at least a little toward what we might prefer?
link |
00:16:19.360
But often doing that just analysis of what's likely to happen if we do nothing offends many
link |
00:16:24.800
people. They find that dehumanizing or cold or metal, as you say, to just say, well, this is
link |
00:16:31.920
what's likely to happen and it's not your favorite, sorry, but maybe we can do something, but maybe
link |
00:16:39.280
we can't do that much. This is very interesting that the cold analysis, whether it's geopolitics,
link |
00:16:48.160
whether it's medicine, whether it's economics, sometimes misses some very specific aspect of
link |
00:16:59.440
the human condition. Like, for example, when you look at a doctor and the act of a doctor helping a
link |
00:17:07.680
single patient, if you do the analysis of that doctor's time and cost of the medicine or the
link |
00:17:14.320
surgery or the transportation of the patient, this is the Paul Farmer question, you know, is it worth
link |
00:17:20.960
spending ten, twenty, thirty thousand dollars on this one patient? When you look at all the people
link |
00:17:26.320
that are suffering in the world, that money could be spent so much better. And yet there's something
link |
00:17:31.840
about human nature that wants to help the person in front of you, and that is actually the right
link |
00:17:39.120
thing to do, despite the analysis. And sometimes when you do the analysis, there's something
link |
00:17:46.800
about the human mind that allows you to not take that leap, that irrational leap to act in this way,
link |
00:17:54.880
that the analysis explains it away. Well it's like, for example, the U.S. government, you know, the
link |
00:18:02.240
DOT, Department of Transportation, puts a value of I think like nine million dollars on a human life.
link |
00:18:09.120
And the moment you put that number on a human life, you can start thinking, well okay, I can start
link |
00:18:13.840
making decisions about this or that, and with a sort of cold economic perspective, and then you
link |
00:18:20.800
might lose, you might deviate from a deeper truth of what it means to be human somehow. You have to
link |
00:18:28.720
dance, because then if you put too much weight on the anecdotal evidence on these kinds of human
link |
00:18:35.680
emotions, then you could also, probably even more likely, deviate from truth.
link |
00:18:42.800
But there's something about that cold analysis. Like I've been listening to a lot of people
link |
00:18:47.120
coldly analyze wars. War in Yemen, war in Syria, Israel, Palestine, war in Ukraine, and there's
link |
00:18:56.640
something lost when you do a cold analysis of why something happened. When you talk about energy,
link |
00:19:03.920
talking about sort of conflict, competition over resources, when you talk about geopolitics,
link |
00:19:11.200
sort of models of geopolitics, and why a certain war happened, you lose something about the suffering
link |
00:19:16.640
that happens. I don't know. It's an interesting thing, because you're both, you're exceptionally good
link |
00:19:22.000
at models in all domains, literally, but also there's a humanity to you. So it's an interesting
link |
00:19:31.760
dance. I don't know if you can comment on that dance. Sure. It's definitely true, as you say,
link |
00:19:37.360
that for many people, if you are accurate in your judgment of, say, for a medical patient,
link |
00:19:43.920
what's the chance that this treatment might help? And what's the cost? And compare those
link |
00:19:50.640
to each other, and you might say, this looks like a lot of cost for a small medical gain.
link |
00:19:58.480
And at that point, knowing that fact, that might take the wind out of your sails. You might
link |
00:20:06.560
not be willing to do the thing that maybe you feel is right anyway, which is still to pay for it.
link |
00:20:13.840
And then somebody knowing that might want to keep that news from you and not tell you about
link |
00:20:18.640
the low chance of success or the high cost in order to save you this
link |
00:20:22.880
tension, this awkward moment where you might fail to do what they and you think is right.
link |
00:20:30.080
But I think the higher calling, the higher standard to hold you to, which many people
link |
00:20:36.000
can be held to, is to say, I will look at things accurately, I will know the truth,
link |
00:20:41.280
and then I will also do the right thing with it. I will be at peace with my judgment about what
link |
00:20:47.360
the right thing is in terms of the truth. I don't need to be lied to in order to figure out what the
link |
00:20:52.800
right thing to do is. And I think if you do think you need to be lied to in order to figure out
link |
00:20:57.520
what the right thing to do is, you're at a great disadvantage because then people will be lying
link |
00:21:03.120
to you, you will be lying to yourself, and you won't be as effective at achieving whatever good you
link |
00:21:10.080
were trying to achieve. But getting the data, getting the facts is step one, not the final
link |
00:21:15.440
step. So I would say having a good model, getting the good data is step one, and it's a burden.
link |
00:21:24.720
Because you can't just use that data to arrive at sort of the easy convenient thing. You have
link |
00:21:33.520
to really deeply think about what is the right thing. So the dark aspect of data, of models,
link |
00:21:42.720
is you can use it to excuse away actions that aren't ethical. You can use data to basically
link |
00:21:50.720
excuse away anything. But not looking at data lets you excuse yourself to pretend and think
link |
00:21:57.120
that you're doing good when you're not. Exactly. But it is a burden. It doesn't excuse you from
link |
00:22:03.920
still being human and deeply thinking about what is right. That very kind of gray area,
link |
00:22:09.360
that very subjective area, that's part of the human condition. But let us return for a time
link |
00:22:16.720
to aliens. So you started to define sort of the model, the parameters of grabbiness.
link |
00:22:26.640
As we approach grabbiness. So what happens? So again, there were three parameters. There's the
link |
00:22:32.320
speed at which they expand, there's the rate at which they appear in time, and that rate has a
link |
00:22:38.400
constant and a power. We've talked about how the history of life on Earth suggests that power is
link |
00:22:42.560
around 6, but maybe 3 to 12. We can say that constant comes from our current date, sort of
link |
00:22:48.560
sets the overall rate. And the speed, which is the last parameter, comes from the fact that when we
link |
00:22:54.320
look in the sky, we don't see them. So the model predicts very strongly that if they were expanding
link |
00:22:59.280
slowly, say 1% of the speed of light, our sky would be full of vast spheres that were full
link |
00:23:05.440
of activity. That is, at a random time when a civilization is first appearing, if it looks out
link |
00:23:11.520
into its sky, it would see many other grabby alien civilizations in the sky, and they would be much
link |
00:23:15.920
bigger than the full moon. There'd be huge spheres in the sky, and they would be visibly different.
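The light-travel geometry behind this claim can be sketched simply. For a front born at distance D and expanding toward you at speed v, the light announcing its birth arrives after time D/c, while the front itself arrives after D/v, so the fraction of its approach during which it is visible is 1 − v/c, independent of D. This is a minimal sketch of the intuition, not the paper's full model:

```python
# Warning fraction for an expanding front: light from the civilization's birth
# reaches you at time D/c, the front itself at D/v, so the visible fraction of
# the approach is (D/v - D/c) / (D/v) = 1 - v/c, independent of distance D.
def visible_fraction(v_over_c):
    """Fraction of the front's travel time during which you can see it coming."""
    return 1.0 - v_over_c

for v in (0.01, 1 / 3, 0.99):
    print(f"v = {v:.2f}c: visible for {visible_fraction(v):.0%} of the approach")
```

At 1% of lightspeed you would watch such spheres grow for 99% of their approach, which is why slow expansion predicts a sky full of them; near lightspeed, almost no warning.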
link |
00:23:20.320
We don't see them. Can we pause for a second? Okay. There's a bunch of hard steps that Earth had to
link |
00:23:26.800
pass to arrive at this place we are currently, which we're starting to launch rockets onto space.
link |
00:23:33.040
We're kind of starting to expand a bit, very slowly. Okay. But this is like the birth. If you
link |
00:23:39.920
look at the entirety of the history of Earth, we're now at this precipice of like expansion.
link |
00:23:46.560
We could, we might not choose to, but if we do, we will do it in the next 10 million years.
link |
00:23:51.680
10 million. Wow. Time flies when you're having fun.
link |
00:23:55.440
10 million is a short time on the cosmological scale. So that is, it might be only a thousand,
link |
00:23:59.920
but the point is if it's, even if it's up to 10 million, that hardly makes any difference to the
link |
00:24:03.440
model. So I might as well give you 10 million. This makes me feel... I was so stressed
link |
00:24:08.720
about planning what I'm going to do today. And now you got plenty of time, plenty of time.
link |
00:24:13.840
Just need to be generating some offspring quickly here. Okay. So, there's this moment,
link |
00:24:23.040
this 10 million year gap, or window, when we start expanding. And you're saying, okay,
link |
00:24:29.280
so this is an interesting moment where there's a bunch of other alien civilizations that might at
link |
00:24:34.720
some history of the universe arrived at this moment we're here, they passed all the hard steps.
link |
00:24:39.520
There's a model for how likely it is that that happens. And then they start expanding
link |
00:24:45.440
and you think of the expansion as almost like a sphere. Right. That's when you say speed,
link |
00:24:50.640
we're talking about the speed of the radius growth. Exactly. Like the surface, how fast the
link |
00:24:55.440
surface. Okay. And so you're saying that there is some speed for that expansion, average speed,
link |
00:25:01.600
and then we can play with that parameter. And if that speed is super slow, then maybe that
link |
00:25:08.000
explains why we haven't seen anything. If it's super fast, the slow would create the puzzle.
link |
00:25:14.000
Slow would create the puzzle. Slow predicts we would see them, but we don't see them; a way to explain that is that
link |
00:25:18.240
they're fast. So the idea is if they're moving really fast, then we don't see them until they're
link |
00:25:23.040
almost here. Okay, this is counterintuitive. All right, hold on a second. So I think this
link |
00:25:28.640
works best when I say a bunch of dumb things. Okay. And then you elucidate the full complexity
link |
00:25:37.520
and the beauty of the dumbness. Okay. So there's these spheres out there in the universe that are
link |
00:25:44.960
made visible because they're sort of using a lot of energy. So they're generating a lot of light
link |
00:25:49.760
stuff. They're changing things. They're changing things. And change would be visible a long way
link |
00:25:55.680
off. Yes. They would take apart stars, rearrange them, restructure galaxies. They would do all
link |
00:26:00.640
kinds of big, huge stuff. Okay. If they're expanding slowly, we would see a lot of them
link |
00:26:08.240
because the universe is old enough that we would see them. That is, we're assuming
link |
00:26:13.200
we're just typical, you know, maybe at the 50th percentile of them. So like half of them have
link |
00:26:17.760
appeared so far. The other half will still appear later. And the math of our best estimate is that
link |
00:26:26.240
they appear roughly once per million galaxies. And we would meet them in roughly a billion years
link |
00:26:33.040
if we expanded out to meet them. So we're looking at a grabby aliens model
link |
00:26:37.840
3D sim. That's the actual name of the video. By the time we get to 13.8 billion years, the fun
link |
00:26:48.160
begins. Okay. So this is, we're watching a three dimensional sphere rotating. I presume that's the
link |
00:26:56.880
universe. And then grabby aliens are expanding and filling that universe with all kinds of fun.
link |
00:27:04.000
Pretty soon it's all full. It's full. So that's how the grabby aliens come in contact. First of all,
link |
00:27:11.600
with other aliens and then with us humans. The following is a simulation of the grabby aliens
link |
00:27:18.240
model of alien civilizations. Civilizations are born that expand outwards at constant speed.
link |
00:27:24.400
A spherical region of space is shown. By the time we get to 13.8 billion years,
link |
00:27:29.520
this sphere will be about 3000 times as wide as the distance from the Milky Way to Andromeda.
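A toy version of the simulation being described can be sketched in a few lines: birth times drawn from a t^6 power law, random positions in a unit box, spheres growing at a constant speed. All parameter values here are illustrative, and unlike the real model this sketch does not suppress births inside already-claimed regions:

```python
import random

# Toy, 1-box sketch of the grabby-aliens simulation: civilizations appear at
# random positions, at times drawn from a t**POWER law, and expand as spheres
# at constant speed. All units and parameter values are illustrative.
random.seed(0)
N_CIVS, N_PROBES, POWER, SPEED, T_END = 50, 2000, 6, 1.0, 1.0

# Inverse-CDF sampling: P(birth time < t) proportional to t**POWER on [0, T_END].
births = [
    (T_END * random.random() ** (1 / POWER),
     (random.random(), random.random(), random.random()))
    for _ in range(N_CIVS)
]

def claimed(point, t):
    """True if `point` lies inside some civilization's sphere at time t."""
    return any(
        t > birth
        and sum((p - o) ** 2 for p, o in zip(point, origin)) ** 0.5
        <= SPEED * (t - birth)
        for birth, origin in births
    )

# Estimate the claimed fraction of the box at a few times by random probing.
for t in (0.5, 0.9, 1.0):
    probes = [(random.random(), random.random(), random.random())
              for _ in range(N_PROBES)]
    frac = sum(claimed(p, t) for p in probes) / N_PROBES
    print(f"t = {t:.1f}: roughly {frac:.0%} of space claimed")
```

Because of the power law, most births cluster near the end of the window, and the claimed fraction climbs steeply late, which is the "pretty soon it's all full" behavior in the video.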
link |
00:27:36.960
Okay. This is fun.
link |
00:27:38.320
It's huge.
link |
00:27:38.960
Okay. It's huge. All right. So why don't we see, we're one little tiny, tiny, tiny, tiny dot in
link |
00:27:48.800
that giant, giant sphere. Why don't we see any of the grabby aliens?
link |
00:27:53.760
It depends on how fast they expand. So you could see that if they expanded at the speed of light,
link |
00:27:58.960
you wouldn't see them until they were here. So like out there, if somebody is destroying the
link |
00:28:03.440
universe with a vacuum decay, there's this doomsday scenario where somebody
link |
00:28:09.840
somewhere could change the vacuum of the universe and that would expand at the speed of light and
link |
00:28:14.160
basically destroy everything it hit. But you'd never see that until it got here because it's
link |
00:28:18.000
expanding at the speed of light. If you're expanding really slow, then you see it from
link |
00:28:21.840
a long way off. So the fact we don't see anything in the sky tells us they're expanding fast,
link |
00:28:26.960
say over a third the speed of light. And that's really, really fast. But that's what you have to
link |
00:28:32.720
believe if we look out and you don't see anything. Now you might say, well, maybe I just don't
link |
00:28:37.840
want to believe this whole model. Why should I believe this whole model at all? And our best
link |
00:28:42.400
evidence why you should believe this model is our early date. We are right now almost 14 billion
link |
00:28:49.360
years into the universe on a planet around a star that's roughly 5 billion years old.
link |
00:28:56.720
But the average star out there will last roughly 5 trillion years. That is a thousand times longer.
link |
00:29:05.120
And remember that power law, it says that the chance of advanced life appearing on a planet
link |
00:29:09.520
goes as the sixth power of the time. So if a planet lasts a thousand times longer,
link |
00:29:14.720
then the chance of it appearing on that planet, if everything would stay empty at least, is a
link |
00:29:19.840
thousand to the sixth power or 10 to the 18. So enormous, overwhelming chance that if the universe
link |
00:29:27.520
would just sit there empty, waiting for advanced life to appear, when it would appear
link |
00:29:31.920
would be way at the end of all these planet lifetimes. That is, on long-lived planets near the end
link |
00:29:39.360
of their lifetimes, trillions of years into the future. But we're really early compared to that. And
link |
00:29:44.480
our explanation is at the moment, as you saw in the video, the universe is filling up in roughly a
link |
00:29:49.120
billion years, it'll all be full. And at that point, it's too late for advanced life to show up.
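The "ten to the eighteen" figure follows from simple arithmetic, sketched below; the exponent of six (the number of "hard steps") is the model's assumption.

```python
# Checking the transcript's arithmetic for the hard-steps power law.
# Assumption from the model: P(advanced life by time t) grows as t**n,
# with n ~ 6 hard steps.

n = 6                    # number of "hard steps" (the model's assumption)
lifetime_ratio = 1_000   # ~5 trillion yr typical star vs ~5 billion yr sun

# A planet lasting 1000x longer has 1000**6 times the chance of ever
# producing advanced life, if the universe just sat empty and waited.
relative_chance = lifetime_ratio ** n
print(relative_chance)   # → 1000000000000000000, i.e. 10**18
```

That 10^18 is the "billion billion" mentioned a moment later: under the stay-empty assumption, essentially all advanced life should appear on long-lived planets far in the future, which is why our early date is evidence against that assumption.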
link |
00:29:53.520
So you had to show up now before that deadline. Okay. Can we break that apart a little bit? Okay.
link |
00:29:59.120
Or linger on some of the things you said. So with the power law, the things we've done on earth,
link |
00:30:03.680
the model you have says that it's very unlikely, like we're lucky SOBs. Is that mathematically
link |
00:30:11.280
correct to say? We're crazy early. That is when early means like in the history of the universe.
link |
00:30:18.240
In the history. Okay. So given this model, how do we make sense of that? If we're super,
link |
00:30:25.920
can we just be the lucky ones? Well, 10 to the 18 lucky, you know,
link |
00:30:30.480
how lucky do you feel? So, you know, that's pretty lucky, right? 10 to the 18 is a billion,
link |
00:30:37.600
billion. So then if you were just being honest and humble, that means, what does that mean?
link |
00:30:45.360
It means one of the assumptions that calculated this crazy early must be wrong. That's what it
link |
00:30:49.600
means. So the key assumption we suggest is wrong is that the universe would stay empty. Most life
link |
00:30:49.600
would appear like a thousand times later than now if everything
link |
00:31:04.000
would stay empty, waiting for it to appear. So what is not empty?
link |
00:31:08.160
So the grabby aliens are filling the universe right now. Roughly at the moment, they filled
link |
00:31:11.760
half of the universe and they've changed it. And when they fill everything, it's too late for stuff
link |
00:31:16.480
like us to appear. But wait, hold on a second. Did anyone help us get lucky? If it's so difficult,
link |
00:31:24.480
what, how do like, so it's like cancer, right? There's all these cells, each of which randomly
link |
00:31:30.560
does or doesn't get cancer. And eventually some cell gets cancer and you know, we were one of
link |
00:31:35.840
those, but hold on a second. Okay. But we got it early. We were early compared to the prediction
link |
00:31:44.000
with an assumption that's wrong. That's so that's how we do a lot of, you know, theoretical
link |
00:31:48.560
analysis. You have a model that makes a prediction that's wrong. Then that helps you reject that
link |
00:31:52.640
model. Okay. Let's try to understand exactly where the wrong is. So the assumption is that the
link |
00:31:57.600
universe is empty, stays empty, and waits until this advanced life appears in trillions
link |
00:32:04.640
of years. That is if the universe would just stay empty, if there was just, you know, nobody else
link |
00:32:09.760
out there, then when you should expect advanced life to appear, if you're the only one in the
link |
00:32:14.880
universe, when should you expect to appear? You should expect to appear trillions of years in the
link |
00:32:18.480
future. I see. Right, right. So this is a very sort of nuanced mathematical assumption. I don't
link |
00:32:25.280
think we can intuit it cleanly with words. But if you assume that you just wait, the universe
link |
00:32:33.680
stays empty and you're waiting for one life civilization to pop up, then it's gonna, it
link |
00:32:41.840
should happen very late, much later than now. And if you look at Earth, the way things happen on
link |
00:32:48.640
Earth, it happened much, much, much, much, much earlier than it was supposed to according to this
link |
00:32:53.120
model. If you take the initial assumption, therefore you can say, well, the initial assumption of the
link |
00:32:58.240
universe staying empty is very unlikely. Right. And the other alternative theory is the universe
link |
00:33:04.880
is filling up and will fill up soon. And so we are typical for the origin dates of things that
link |
00:33:10.560
can appear before the deadline. Before the deadline. Okay, it's filling up. So why don't we see anything
link |
00:33:15.280
if it's filling up? Because they're expanding really fast. Close to the speed of light. Exactly.
link |
00:33:20.560
So we will only see it when it's here. Almost here. Okay. What are the ways in which we might see
link |
00:33:28.160
a quickly expanding? This is both exciting and terrifying. It is terrifying. It's like watching
link |
00:33:34.240
a truck, like driving at you at 100 miles an hour. And so we would see spheres in the sky,
link |
00:33:41.600
at least one sphere in the sky, growing very rapidly. And like very rapidly, right? Yes,
link |
00:33:49.440
very rapidly. So there's, you know, a difference, because we were just
link |
00:33:54.720
talking about 10 million years. You might see it 10 million years in advance, coming.
link |
00:34:00.400
I mean, you still might have a long warning. Again, the universe is 14 billion years old.
link |
00:34:05.920
The typical origin times of these things are spread over several billion years. So the chance
link |
00:34:10.560
of one originating at a, you know, very close to you in time is very low. So they still might take
link |
00:34:16.720
millions of years from the time you see it to the time it gets here. You'll have millions of
link |
00:34:22.080
years to be terrified of this massive sphere coming at you. But coming at you very fast. So if
link |
00:34:27.280
they're traveling close to the speed of light, but they're coming from a long way away. So remember,
link |
00:34:32.320
the rate at which they appear is one per million galaxies, right? So they're roughly a hundred
link |
00:34:38.240
galaxies away. I see. So the Delta between the speed of light and their actual travel speed is
link |
00:34:45.360
very important, right? So even if they're going at say half the speed of light, we'll have a long
link |
00:34:50.320
time then. Yeah. But what if they're traveling exactly at the speed of light? Then we see them,
link |
00:34:55.600
like then we wouldn't have much warning, but that's less likely. Well, we can't exclude it.
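The warning-time reasoning above can be made concrete. A front that starts d light-years away and expands at a fraction v of light speed is first visible after d years but arrives after d/v years, so the warning is d(1/v − 1) years. The "roughly a hundred galaxies away" follows from one origin per million galaxies, since 100³ = 10⁶; the ~300 million light-year scale below is an assumed illustration of that distance.

```python
# Warning time between first seeing an expanding front and its arrival.
# Light from a front d light-years away reaches us after d years;
# the front itself, moving at fraction v of light speed, after d / v years.

def warning_years(d_ly, v):
    return d_ly / v - d_ly

d = 300e6  # assumed distance: ~100 galaxies, a few hundred million light-years
for v in (0.99, 0.5, 1 / 3):
    print(f"v = {v:.2f}c -> warning ~ {warning_years(d, v):.3g} years")
```

At v near the speed of light the warning shrinks to a few million years, at v = c it vanishes entirely, and at a third of light speed it stretches to hundreds of millions of years, which is why the gap between their speed and light speed matters so much in this exchange.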
link |
00:35:00.560
And they could also be somehow traveling faster than the speed of light.
link |
00:35:04.800
But I think we can exclude that, because if they could go faster than the speed of light, then
link |
00:35:08.720
they would just already be everywhere. So in a universe where you can travel faster than the
link |
00:35:13.200
speed of light, you can go backwards in space time. So any time you appeared anywhere in space
link |
00:35:17.520
time, you could just fill up everything. Yeah. And so anybody in the future, whoever appeared,
link |
00:35:22.880
they would have been here by now. Can you exclude the possibility that those kinds of aliens are
link |
00:35:27.600
already here? Well, we should have a different discussion of that. Okay. So let's
link |
00:35:33.840
actually leave that. Let's leave that discussion aside just to linger and understand the grabby
link |
00:35:38.800
alien expansion, which is beautiful and fascinating. Okay. So there's these giant expanding
link |
00:35:45.680
spheres of alien civilizations. Now, when those spheres collide, mathematically,
link |
00:35:59.920
it's very likely that we're not the first collision of grabby alien civilizations,
link |
00:36:07.120
I suppose is one way to say it. So there's like the first time the spheres touch each other,
link |
00:36:12.560
recognize each other. They meet. They recognize each other first before they meet.
link |
00:36:19.760
They see each other coming. They see each other coming. And then, so there's a bunch of them.
link |
00:36:23.600
There's a combinatorial thing where they start seeing each other coming. And then there's a
link |
00:36:27.520
third neighbor. It's like, what the hell? And then there's a fourth one. Okay. So what does that,
link |
00:36:31.760
you think, look like? What lessons from human nature, that's the only data we have,
link |
00:36:38.640
well, can you draw? The story of the history of the universe here is what I would call a living
link |
00:36:44.800
cosmology. So what I'm excited about in part by this model is that it lets us tell a story of
link |
00:36:51.120
cosmology where there are actors who have agendas. So most ancient peoples, they had cosmologies,
link |
00:36:57.840
stories they told about where the universe came from and where it's going and what's happening
link |
00:37:01.120
out there. And their stories, they like to have agents and actors, gods or something out there
link |
00:37:04.960
doing things. And lately our favorite cosmology is dead, kind of boring. We're the only activity
link |
00:37:12.160
we know about or see and everything else just looks dead and empty. But this is now telling us,
link |
00:37:17.920
no, that's not quite right. At the moment, the universe is filling up and in a few billion years,
link |
00:37:22.960
it'll be all full. And from then on, the history of the universe will be the universe full of aliens.
link |
00:37:29.040
Yeah. So that's a really good reminder, a really good way to think about cosmology is we're
link |
00:37:35.360
surrounded by a vast darkness and we don't know what's going on in that darkness until the light
link |
00:37:42.880
from whatever generates light arrives here. So we kind of, yeah, we look up at the sky,
link |
00:37:48.080
okay, there's stars, oh, they're pretty, but you don't think about the giant expanding spheres of
link |
00:37:55.360
aliens because you don't see them. But now our date, looking at the clock, if you're clever,
link |
00:38:01.200
the clock tells you. So I like the analogy with the ancient Greeks. So you might think that an
link |
00:38:06.080
ancient Greek staring at the universe couldn't possibly tell how far away the sun was or how
link |
00:38:11.280
far away the moon is or how big the earth is. All you can see is just big things in the sky,
link |
00:38:16.240
you can't tell. But they were clever enough actually to be able to figure out the size of
link |
00:38:19.600
the earth and the distance to the moon and the sun and the size of the moon and sun. That is,
link |
00:38:24.880
they could figure those things out actually by being clever enough. And so similarly,
link |
00:38:28.560
we can actually figure out where are the aliens out there in space time by being clever about the
link |
00:38:32.720
few things we can see, one of which is our current date. And so now that you have this living
link |
00:38:37.600
cosmology, we can tell the story that the universe starts out empty and then at some point, things
link |
00:38:43.200
like us appear very primitive and then some of those stop being quiet and expand. And then for
link |
00:38:49.440
a few billion years, they expand and then they meet each other. And then for the next hundred
link |
00:38:53.680
billion years, they commune with each other. That is, the usual models of cosmology say that in
link |
00:38:59.360
roughly 150 billion years, the expansion of the universe will happen so much that all you'll have
link |
00:39:06.080
left is some galaxy clusters that are sort of disconnected from each other. But before then,
link |
00:39:11.840
for the next hundred billion years, they will interact. There will be this community of all the
link |
00:39:19.680
grabby alien civilizations and each one of them will hear about and even meet thousands of others.
link |
00:39:24.880
And we might hope to join them someday and become part of that community. That's an interesting
link |
00:39:30.720
thing to aspire to. Yes, interesting is an interesting word. Is the universe of alien
link |
00:39:38.000
civilizations defined by war as much or more than war defined human history?
link |
00:39:47.360
I would say it's defined by competition and then the question is how much competition implies war.
link |
00:39:57.120
So up until recently, competition defined life on Earth. Competition between species and organisms
link |
00:40:07.520
and among humans, competition among individuals and communities, and that competition often took
link |
00:40:12.800
the form of war in the last 10,000 years. Many people now are hoping or even expecting to sort
link |
00:40:20.720
of suppress and end competition in human affairs. They regulate business competition, they prevent
link |
00:40:28.320
military competition and that's a future I think a lot of people will like to continue and
link |
00:40:34.880
strengthen. People will like to have something close to world government or world governance or
link |
00:40:39.280
at least a world community and they will like to suppress war and many forms of business and
link |
00:40:44.640
personal competition over the coming centuries. And they may like that so much that they prevent
link |
00:40:51.760
interstellar colonization which would become the end of that era. That is interstellar colonization
link |
00:40:56.720
would just return severe competition to human or our descendant affairs and many civilizations may
link |
00:41:03.280
prefer that and ours may prefer that. But if they choose to allow interstellar colonization,
link |
00:41:08.960
they will have chosen to allow competition to return with great force. That is, there's really
link |
00:41:13.680
not much of a way to centrally govern a rapidly expanding sphere of civilization. And so I think
link |
00:41:20.560
one of the most solid things we can predict about grabby aliens is that they have accepted competition
link |
00:41:26.640
and they have internal competition and therefore they have the potential for competition when they
link |
00:41:32.080
meet each other at the borders. But whether that's military competition is more of an open question.
link |
00:41:37.440
So military meaning physically destructive, right.
link |
00:41:46.080
So there's a lot to say there. So one idea that you kind of proposed is progress might be maximized
link |
00:41:55.200
through competition, through some kind of healthy competition, some definition of healthy. So like
link |
00:42:03.120
constructive, not destructive, competition. So the grabby alien civilizations would
link |
00:42:11.200
likely be defined by competition because they can expand faster, because competition allows
link |
00:42:17.520
innovation and sort of the battle of ideas.
link |
00:42:19.600
The way I would take the logic is to say competition just happens if you can't coordinate
link |
00:42:26.160
to stop it and you probably can't coordinate to stop it in an expanding interstellar way.
link |
00:42:31.920
So competition is a fundamental force in the universe.
link |
00:42:37.280
It has been so far, and it would be within an expanding grabby alien civilization. But we today
link |
00:42:44.320
have the chance, many people think and hope, of greatly controlling and limiting competition
link |
00:42:50.000
within our civilization for a while. And that's an interesting choice whether to allow competition
link |
00:42:57.600
to sort of regain its full force or whether to suppress and manage it.
link |
00:43:02.960
Well, one of the open questions that has been raised in the past less than 100 years
link |
00:43:13.440
is whether our desire to lessen the destructive nature of competition or the destructive kind
link |
00:43:20.720
of competition will be outpaced by the destructive power of our weapons. Sort of if nuclear weapons
link |
00:43:32.000
and weapons of that kind become more destructive than our desire for peace then all it takes is
link |
00:43:41.840
one asshole at the party to ruin the party.
link |
00:43:45.040
It takes one asshole to make a delay, but not that much of a delay on the cosmological
link |
00:43:51.040
scales we're talking about. So even a vast nuclear war, if it happened here right now on Earth,
link |
00:43:59.520
it would not kill all humans and it certainly wouldn't kill all life.
link |
00:44:05.200
And so human civilization would return within 100,000 years.
link |
00:44:09.360
So all the history of atrocities, and if you look at the Black Plague,
link |
00:44:23.280
which is not a human-caused atrocity, or whatever.
link |
00:44:26.320
There are a lot of military atrocities in history, absolutely.
link |
00:44:29.440
In the 20th century. Those challenge us to think about human nature,
link |
00:44:36.480
but the cosmic scale of time and space, they do not stop the human spirit, essentially.
link |
00:44:44.400
Humanity goes on through all the atrocities, it goes on.
link |
00:44:48.960
Most likely.
link |
00:44:50.240
So even a nuclear war isn't enough to destroy us or to stop our potential from expanding,
link |
00:44:57.280
but we could institute a regime of global governance that limited competition,
link |
00:45:03.920
including military and business competition of sorts, and that could prevent our expansion.
link |
00:45:08.880
Of course, to play devil's advocate, global governance is centralized power,
link |
00:45:20.880
power corrupts, and absolute power corrupts absolutely. One of the aspects of competition
link |
00:45:27.760
that's been very productive is not letting any one person, any one country, any one center of power
link |
00:45:36.800
become absolutely powerful, because that's another lesson: power seems to corrupt.
link |
00:45:43.200
There's something about ego in the human mind that seems to be corrupted by power,
link |
00:45:47.440
so when you say global governance, that terrifies me more than the possibility of war,
link |
00:45:55.440
because it's...
link |
00:45:57.920
I think people will be less terrified than you are right now,
link |
00:46:01.280
and let me try to paint the picture from their point of view. This isn't my point of view,
link |
00:46:05.440
but I think it's going to be a widely shared point of view.
link |
00:46:07.920
Yes. This is two devil's advocates arguing.
link |
00:46:10.160
Two devils.
link |
00:46:10.800
Okay. So for the last half century and into the continuing future, we actually have had
link |
00:46:18.560
a strong elite global community that shares a lot of values and beliefs and has created a lot
link |
00:46:26.720
of convergence in global policy. So if you look at electromagnetic spectrum or medical experiments
link |
00:46:33.680
or pandemic policy or nuclear energy or regulating airplanes or just in a wide range
link |
00:46:40.400
of areas, in fact, the world has very similar regulations and rules everywhere, and it's not
link |
00:46:46.880
a coincidence because they are part of a world community where people get together at places
link |
00:46:51.680
like Davos, et cetera, where world elites want to be respected by other world elites, and they
link |
00:46:59.280
have a convergence of opinion, and that produces something like global governance,
link |
00:47:05.520
but without a global center. This is what human mobs or communities have done for a long time,
link |
00:47:11.120
that is, humans can coordinate together on shared behavior without a center by having
link |
00:47:16.080
gossip and reputation within a community of elites. And that is what we have been doing and
link |
00:47:22.160
are likely to do a lot more of. So for example, one of the things that's happening, say, with the
link |
00:47:27.680
war in Ukraine is that this world community of elites has decided that they disapprove of the
link |
00:47:33.360
Russian invasion and they are coordinating to pull resources together from all around the world in
link |
00:47:38.480
order to oppose it, and they are proud of sharing that opinion there, and they feel that
link |
00:47:45.520
they are morally justified in their stance there. And that's the kind of event that actually brings
link |
00:47:53.440
world elite communities together, where they come together and they push a particular policy and
link |
00:47:59.360
position that they share and that they achieve successes. And the same sort of passion animates
link |
00:48:04.160
global elites with respect to, say, global warming or global poverty and other sorts of things. And
link |
00:48:09.600
they are, in fact, making progress on those sorts of things through shared global community of
link |
00:48:16.320
elites. And in some sense, they are slowly walking toward global governance, slowly strengthening
link |
00:48:23.120
various world institutions of governance, but cautiously, carefully watching out for the
link |
00:48:28.560
possibility of a single power that might corrupt it. I think a lot of people over the coming
link |
00:48:34.240
centuries will look at that history and like it. It's an interesting thought. And thank you for
link |
00:48:41.440
playing that devil's advocate there. But I think the elites too easily lose touch with the morals
link |
00:48:52.640
that are the best of human nature, and power corrupts. Sure, but their view is the one that determines
link |
00:48:59.600
what happens. Their view may still end up there, even if you or I might criticize it from that
link |
00:49:06.320
point of view. So from a perspective of minimizing human suffering, elites can use topics of the war
link |
00:49:14.000
in Ukraine and climate change and all of those things to sell an idea to the world. And with
link |
00:49:25.520
disregard to the amount of suffering their actual actions cause. So like you can tell all
link |
00:49:33.040
kinds of narratives. That's the way propaganda works. Hitler really sold the idea that everything
link |
00:49:39.920
Germany is doing is either that it's the victim defending itself against the cruelty of the world,
link |
00:49:45.760
and it's actually trying to bring about a better world. So every power center thinks they're
link |
00:49:52.640
doing good. And so this is the positive of competition, of having multiple power centers.
link |
00:50:01.600
This kind of gathering of elites makes me very, very, very nervous. The dinners, the meetings
link |
00:50:11.200
and the closed rooms. I don't know. But remember we talked about separating our cold analysis of
link |
00:50:19.440
what's likely or possible from what we prefer. And so this isn't exactly enough time for that.
link |
00:50:24.480
We might say, I would recommend we don't go this route of a strong world governance. And because
link |
00:50:32.320
I would say it'll preclude this possibility of becoming grabby aliens, of filling the next
link |
00:50:37.600
nearest million galaxies for the next billion years with vast amounts of activity and interest
link |
00:50:43.760
and value of life out there. That's the thing we would lose by deciding that we wouldn't expand,
link |
00:50:50.640
that we would stay here and keep our comfortable shared governance.
link |
00:50:55.280
So wait, you think that global governance makes it more likely or less likely that
link |
00:51:06.560
we expand out into the universe?
link |
00:51:08.080
Less.
link |
00:51:09.440
Okay.
link |
00:51:10.000
This is the key, this is the key point.
link |
00:51:11.840
Right. Right. So screw the elites.
link |
00:51:16.400
We want to, wait, do we want to expand?
link |
00:51:19.360
So again, I want to separate my neutral analysis from my evaluation and say,
link |
00:51:25.920
first of all, I have an analysis that tells us this is a key choice that we will face and that
link |
00:51:30.480
it's a key choice other aliens have faced out there. And it could be that only one in 10 or one in 100
link |
00:51:35.760
civilizations chooses to expand and the rest of them stay quiet. And that's how it goes out there.
link |
00:51:40.720
And we face that choice too. And it'll happen sometime in the next 10 million years,
link |
00:51:46.640
maybe the next thousand. But the key thing to notice from our point of view is that
link |
00:51:52.000
even though you might like our global governance, you might like the fact that we've come together,
link |
00:51:56.000
we no longer have massive wars and we no longer have destructive competition.
link |
00:52:01.520
And that we could continue that, the cost of continuing that would be to prevent
link |
00:52:06.640
interstellar colonization. That is once you allow interstellar colonization, then you've lost
link |
00:52:11.200
control of those colonies and whatever they change into, they could come back here and compete with
link |
00:52:16.720
you back here as a result of having lost control. And I think if people value that global governance
link |
00:52:23.600
and global community and regulation and all the things it can do enough, they would then
link |
00:52:29.200
want to prevent interstellar colonization.
link |
00:52:31.600
I want to have a conversation with those people. I believe that both for humanity,
link |
00:52:37.680
for the good of humanity, for what I believe is good in humanity and for expansion, exploration,
link |
00:52:44.880
innovation, distributing the centers of power is very beneficial. So this whole meeting of elites
link |
00:52:51.280
and I've been very fortunate to meet quite a large number of elites. They make me nervous
link |
00:52:59.040
because it's easy to lose touch with reality. I'm nervous about that in myself to make sure that
link |
00:53:10.000
you never lose touch as you get sort of older, wiser, you know, how you generally get like
link |
00:53:19.280
disrespectful of kids, kids these days. No, the kids are okay. But I think you should hear
link |
00:53:24.560
a stronger case for their position. So I'm going to play for the elites. Yes. Well, for the limiting
link |
00:53:32.720
of expansion and for the regulation of behavior. Okay. Can I linger on that? So you're saying those
link |
00:53:39.920
two are connected. So the human civilization and alien civilizations come to a crossroads.
link |
00:53:47.760
They have to decide, do we want to expand or not? And connected to that, do we want to give a lot
link |
00:53:54.160
of power to a central elite? Or do we want to distribute the power centers, which is naturally
link |
00:54:03.200
connected to the expansion? When you expand, you distribute the power. If say over the next thousand
link |
00:54:10.640
years, we fill up the solar system, right? We go out from earth and we colonize Mars and we change
link |
00:54:15.920
a lot of things. Within a solar system, still everything is within reach. That is, if there's
link |
00:54:20.960
a rebellious colony around Neptune, you can throw rocks at it and smash it and then teach them
link |
00:54:25.200
discipline. Okay. A central control over the solar system is feasible. But once you let it escape the
link |
00:54:34.400
solar system, it's no longer feasible. But if you have a solar system that doesn't have a central
link |
00:54:38.640
control, maybe broken into a thousand different political units in the solar system, then if any one
link |
00:54:44.640
part of that allows interstellar colonization, it happens. That is, interstellar colonization
link |
00:54:50.240
happens when only one party chooses to do it and is able to do it. And that's what it is there for.
link |
00:54:55.760
So we can just say in a world of competition, if interstellar colonization is possible, it will
link |
00:55:00.800
happen and then competition will continue. And that will sort of ensure the continuation of
link |
00:55:04.640
competition into the indefinite future. And competition, we don't know, but competition
link |
00:55:10.480
can take violent forms and many forms. And the case I was going to make is that I think one of
link |
00:55:15.840
the things that most scares people about competition is not just that it creates holocausts and death
link |
00:55:21.280
on massive scales; it's that it's likely to change who we are and what we value.
link |
00:55:28.480
Yes. So this is the other thing with power. As we grow, as human civilization grows,
link |
00:55:37.120
becomes multi planetary, multi solar system potentially, how does that change us, do you think?
link |
00:55:43.200
I think the more you think about it, the more you realize it can change us a lot.
link |
00:55:48.080
So first of all, this is pretty dark, by the way. Well, it's just honest.
link |
00:55:53.440
Right. Well, I'm trying to get there. But I think the first thing you should say,
link |
00:55:55.920
if you look at history, just human history over the last 10,000 years,
link |
00:55:59.760
if you really understood what people were like a long time ago, you'd realize they were really
link |
00:56:04.160
quite different. Ancient cultures created people who were really quite different. Most historical
link |
00:56:09.520
fiction lies to you about that. It often offers you modern characters in an ancient world.
link |
00:56:14.640
But if you actually study history, you will see just how different they were and how differently
link |
00:56:19.040
they thought. And they've changed a lot many times, and they've changed a lot across time.
link |
00:56:25.120
So I think the most obvious prediction about the future is, even if you only have the mechanisms
link |
00:56:29.920
of change we've seen in the past, you should still expect a lot of change in the future.
link |
00:56:33.840
But we have a lot bigger mechanisms for change in the future than we had in the past.
link |
00:56:37.920
So I have this book called The Age of Em, Work, Love, and Life When Robots Rule the Earth. And
link |
00:56:44.880
it's about what happens if brain emulations become possible. So a brain emulation is where you take
link |
00:56:49.760
an actual human brain, and you scan it in fine spatial and chemical detail to create
link |
00:56:55.040
a computer simulation of that brain. And then those computer simulations of brains
link |
00:57:00.160
are basically citizens in a new world. They work, and they vote, and they fall in love,
link |
00:57:04.560
and they get mad, and they lie to each other. And this is a whole new world. And my book is
link |
00:57:08.800
about analyzing how that world is different than our world, basically using competition as my key
link |
00:57:14.640
lever of analysis. That is, if that world remains competitive, then I can figure out how they change
link |
00:57:19.920
in that world, what they do differently than we do. And it's very different. And it's different in
link |
00:57:26.640
ways that are shocking sometimes to many people and ways some people don't like. I think it's an
link |
00:57:32.080
okay world, but I have to admit, it's quite different. And that's just one technology.
link |
00:57:37.920
If we add dozens more technologies, changes into the future, we should just expect it's possible
link |
00:57:45.200
to become very different than who we are. I mean, in the space of all possible minds,
link |
00:57:49.760
our minds are a particular architecture, a particular structure, a particular set of habits,
link |
00:57:54.960
and they are only one piece in a vast space of possibilities. The space of possible minds is
link |
00:58:00.400
really huge. So yeah, let's linger on the space of possible minds for a moment, just to sort of
link |
00:58:07.840
humble ourselves. How peculiar our peculiarities are, like the fact that we like a particular kind
link |
00:58:19.040
of sex, and the fact that we eat food through one hole and poop through another hole. And that seems
link |
00:58:27.440
to be a fundamental aspect of life, is very important to us. And that life is finite in a
link |
00:58:35.840
certain kind of way, we have a meat vehicle. So death is very important to us. I wonder which
link |
00:58:41.520
aspects are fundamental, or would be common throughout human history and also throughout,
link |
00:58:47.440
sorry, throughout history of life on Earth, and throughout other kinds of lives. Like what is
link |
00:58:53.600
really useful? You mentioned competition seems to be one fundamental thing.
link |
00:58:57.680
I've tried to do analysis of where our distant descendants might go in terms of what are robust
link |
00:59:03.600
features we could predict about our descendants. So again, I have this analysis of sort of the
link |
00:59:08.240
next generation, so the next era after ours. If you think of human history as having three eras
link |
00:59:13.680
so far, right? There was the forager era, the farmer era, and the industry era. Then my attempt
link |
00:59:18.800
in The Age of Em is to analyze the next era after that. And it's very different, but of course,
link |
00:59:22.640
there could be more and more eras after that. So analyzing a particular scenario and thinking
link |
00:59:28.080
it through is one way to try to see how different the future could be, but that doesn't give you
link |
00:59:32.800
some sort of sense of what's typical. But I have tried to analyze what's typical.
link |
00:59:38.960
And so I have two predictions I think I can make pretty solidly. One thing is that we know at the
link |
00:59:45.440
moment that humans discount the future rapidly. So we discount the future in terms of caring
link |
00:59:52.240
about consequences, roughly a factor of two per generation. And there's a solid evolutionary
link |
00:59:56.960
analysis why sexual creatures would do that. Because basically your descendants only share
link |
01:00:01.920
half of your genes and your descendants are a generation away. So we only care about our
link |
01:00:06.640
grandchildren basically a factor of four less, because they're two generations later. So this actually
link |
01:00:14.320
explains typical interest rates in the economy. That is interest rates are greatly influenced by
link |
01:00:19.360
our discount rates. And we basically discount the future by a factor of two per generation.
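Hanson's factor-of-two-per-generation discount can be turned into an implied annual rate. A minimal sketch, assuming a 30-year generation length (an illustrative figure, not one stated in the conversation):

```python
# Implied annual discount rate if consequences one generation away
# are valued at half weight. The 30-year generation length is an
# assumed illustrative figure, not a value from the conversation.
gen_years = 30
factor_per_gen = 2.0

# Solve (1 + r)^gen_years = factor_per_gen for the annual rate r.
annual_rate = factor_per_gen ** (1.0 / gen_years) - 1.0
print(f"implied annual discount rate: {annual_rate:.2%}")

# Grandchildren are two generations away, hence a factor of four.
print(f"weight on grandchildren: {1.0 / factor_per_gen ** 2:.2f}")
```

With these assumptions the implied rate lands near typical real interest rates, which is the point Hanson is making.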
link |
01:00:25.920
But that's a side effect of the way our preferences evolved as sexually selected
link |
01:00:32.400
creatures. We should expect that in the longer run creatures will evolve who don't discount the
link |
01:00:37.920
future. They will care about the long run and they will therefore not neglect the long run.
link |
01:00:43.360
So for example, for things like global warming or things like that, at the moment, many commenters
link |
01:00:48.880
are sad that basically ordinary people don't seem to care much, market prices don't seem to care
link |
01:00:52.640
much, and for most ordinary people, it doesn't really impact them much, because humans don't care much
link |
01:00:57.200
about the longterm future. And futurists find it hard to motivate people and to engage people about
link |
01:01:04.240
the longterm future because they just don't care that much. But that's a side effect of this
link |
01:01:08.640
particular way that our preferences evolved about the future. And so in the future, they will neglect
link |
01:01:14.640
the future less. And that's an interesting thing that we can predict robustly. Eventually,
link |
01:01:19.680
you know, maybe a few centuries, maybe longer, eventually our descendants will
link |
01:01:24.720
care about the future. Can you speak to the intuition behind that? Is it
link |
01:01:29.520
useful to think more about the future? Right. If evolution rewards creatures for having many
link |
01:01:35.520
descendants, then if you have decisions that influence how many descendants you have,
link |
01:01:40.720
then that would be good if you made those decisions. But in order to do that, you'll have to
link |
01:01:44.400
care about them. You have to care about that future. So to push back, that's if you're trying
link |
01:01:49.840
to maximize the number of descendants. But the nice thing about not caring too much about the
link |
01:01:54.320
longterm future is you're more likely to take big risks or you're less risk averse. And it's possible
link |
01:02:01.200
that both evolution and just life in the universe rewards the risk takers. Well, we actually have
link |
01:02:11.760
analysis of the ideal risk preferences too. So there's a literature on ideal preferences that
link |
01:02:19.200
evolution should promote. And for example, there's literature on competing investment funds and what
link |
01:02:24.400
the managers of those funds should care about in terms of risk, various kinds of risks, and in terms
link |
01:02:29.680
of discounting. And so managers of investment funds should basically be logarithmically risk averse in
link |
01:02:38.880
shared, correlated risk, but be very risk neutral with respect to uncorrelated risk. So
link |
01:02:47.040
that's a feature that's predicted to happen about individual personal choices in biology and also
link |
01:02:54.000
for investment funds. So that's other things. That's also something we can say about the long
link |
01:02:57.200
run. What's correlated and uncorrelated risk? If there's something that would affect all of your
link |
01:03:03.440
descendants, then if you take that risk, you might have more descendants, but you might have zero.
link |
01:03:11.040
And that's just really bad to have zero descendants. But an uncorrelated risk would be a
link |
01:03:16.080
risk that some of your descendants would suffer, but others wouldn't. And then you have a portfolio
link |
01:03:20.880
of descendants. And so that portfolio insures you against problems with any one of them.
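The correlated-versus-uncorrelated distinction can be illustrated with a small Monte Carlo sketch. All the numbers here (ten descendants, a 50% disaster chance) are illustrative assumptions, not figures from the conversation:

```python
import random

random.seed(0)

def survivors(n_lineages, n_descendants, p_disaster, correlated):
    """Count lineages with at least one surviving descendant.

    correlated=True: one draw wipes out all descendants together.
    correlated=False: each descendant faces an independent draw.
    """
    alive = 0
    for _ in range(n_lineages):
        if correlated:
            # Shared risk: everyone lives or everyone dies together.
            if random.random() >= p_disaster:
                alive += 1
        else:
            # Idiosyncratic risk: the lineage dies only if every
            # descendant independently fails.
            if any(random.random() >= p_disaster
                   for _ in range(n_descendants)):
                alive += 1
    return alive

trials = 10_000
corr = survivors(trials, 10, 0.5, correlated=True)
uncorr = survivors(trials, 10, 0.5, correlated=False)
print(f"correlated risk:   {corr / trials:.1%} of lineages survive")
print(f"uncorrelated risk: {uncorr / trials:.1%} of lineages survive")
```

The same total risk leaves roughly half the lineages dead when it is shared, but almost none when it is spread independently across a portfolio of descendants.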
link |
01:03:26.000
I like the idea of portfolio descendants. And we'll talk about portfolios with your idea of
link |
01:03:31.680
you briefly mentioned, we'll return there with em, The Age of Em: Work, Love, and Life when
link |
01:03:37.840
robots rule the earth. Em, by the way, is emulated minds. So this is one of the...
link |
01:03:44.160
Em is short for emulations.
link |
01:03:46.160
Em is short for emulations. And it's kind of an idea of how we might create artificial minds,
link |
01:03:51.600
artificial copies of minds, or human like intelligences.
link |
01:03:56.880
I have another dramatic prediction I can make about long term preferences.
link |
01:04:00.640
Yes.
link |
01:04:01.440
Which is, at the moment, we reproduce as the result of a hodgepodge of preferences that
link |
01:04:07.280
aren't very well integrated, but sort of in our ancestral environment induced us to reproduce.
link |
01:04:12.240
So we have preferences over being sleepy and hungry and thirsty and wanting to have sex and
link |
01:04:17.920
wanting excitement, et cetera, right? And so in our ancestral environment, the packages
link |
01:04:23.600
of preferences that we evolved to have did induce us to have more descendants. That's why we're here.
link |
01:04:31.040
But those packages of preferences are not a robust way to promote having more descendants.
link |
01:04:36.560
They were tied to our ancestral environment, which is no longer true. So that's one of the
link |
01:04:40.480
reasons we are now having a big fertility decline because in our current environment,
link |
01:04:45.360
our ancestral preferences are not inducing us to have a lot of kids,
link |
01:04:48.720
which is, from evolution's point of view, a big mistake.
link |
01:04:52.320
We can predict that in the longer run, there will arise creatures who
link |
01:04:56.880
just abstractly know that what they want is more descendants.
link |
01:05:00.800
That's a very robust way to have more descendants is to have that as your direct preference.
link |
01:05:05.840
First of all, your thinking is so clear. I love it. So mathematical. And thank you
link |
01:05:11.360
for thinking so clear with me and bearing with my interruptions and going on the tangents when we go
link |
01:05:19.360
there. So you're just clearly saying that successful long term civilizations will prefer to have
link |
01:05:27.920
descendants, more descendants.
link |
01:05:30.080
Not just prefer, consciously and abstractly prefer. That is, it won't be the indirect
link |
01:05:35.920
consequence of other preferences. It will just be the thing they know they want.
link |
01:05:39.920
There'll be a president in the future that says, we must have more sex.
link |
01:05:44.640
We must have more descendants and do whatever it takes to do that.
link |
01:05:47.600
Whatever. We must go to the moon and do the other things. Not because they're easy,
link |
01:05:52.640
but because they're hard. But instead of the moon, let's have lots of sex. Okay.
link |
01:05:56.000
But there's a lot of ways to have descendants, right?
link |
01:05:58.880
Right. So that's the whole point. When the world gets more complicated and there are many possible
link |
01:06:03.040
strategies, it's having that as your abstract preference that will force you to think through
link |
01:06:07.520
those possibilities and pick the one that's most effective.
link |
01:06:09.920
So just to clarify, descendants doesn't necessarily mean the narrow definition of
link |
01:06:15.360
descendants, meaning humans having sex and then having babies.
link |
01:06:18.480
Exactly.
link |
01:06:18.880
You can have artificial intelligence systems in whom you instill some capability of cognition
link |
01:06:27.120
and perhaps even consciousness. You can also create through genetics and biology clones of yourself
link |
01:06:32.720
or slightly modified clones, thousands of them. So all kinds of descendants. It could be descendants
link |
01:06:41.600
in the space of ideas too, if somehow we no longer exist in this meat vehicle. It's now just
link |
01:06:47.360
like whatever the definition of a life form is, you have descendants of those life forms.
link |
01:06:54.320
Yes. And they will be thoughtful about that. They will have thought about what counts as a
link |
01:06:58.800
descendant and that'll be important to them to have the right concept.
link |
01:07:02.240
So the they there is very interesting, who the they are.
link |
01:07:05.920
But the key thing is we're making predictions that I think are somewhat robust about what
link |
01:07:10.160
our distant descendants will be like. Another thing I think you would automatically accept is
link |
01:07:14.000
they will almost entirely be artificial. And I think that would be the obvious prediction
link |
01:07:17.840
about any aliens we would meet. That is they would long since have given up reproducing
link |
01:07:22.800
biologically.
link |
01:07:24.080
Well, it's like organic or something. It's all real.
link |
01:07:28.400
It might be squishy and made out of hydrocarbons, but it would be artificial in the sense of made
link |
01:07:33.040
in factories with designs on CAD things, right? Factories with scale economies. So the factories
link |
01:07:37.840
we have made on earth today have much larger scale economies than the factories in our cells.
link |
01:07:42.000
So the factories in our cells are marvels, but they don't achieve very many scale
link |
01:07:46.320
economies. They're tiny little factories.
link |
01:07:47.920
But they're all factories.
link |
01:07:49.040
Yes.
link |
01:07:49.440
Factories on top of factories. So everything, the factories that are designed are different
link |
01:07:54.800
than sort of the factories that have evolved.
link |
01:07:56.480
Yeah. I think the nature of the word design is very interesting to uncover there. But
link |
01:08:02.480
let me, in terms of aliens, let me go, let me analyze your Twitter like it's Shakespeare.
link |
01:08:09.440
Okay.
link |
01:08:10.240
There's a tweet that says: define hello, in quotes, alien civilizations as ones that might in the
link |
01:08:16.880
next million years identify humans as intelligent and civilized, travel to earth and say hello
link |
01:08:24.000
by making their presence and advanced abilities known to us. The next 15 polls, this is a
link |
01:08:29.360
Twitter thread, the next 15 polls ask about such hello aliens. And what these polls ask
link |
01:08:35.680
your Twitter followers is what they think those aliens will be like, in terms of certain particular
link |
01:08:42.960
qualities. So poll number one is what percent of hello aliens evolved from biological species
link |
01:08:49.520
with two main genders? And you know, the popular vote is above 80%. So most of them have two
link |
01:08:58.000
genders. What do you think about that? I'll ask you about some of these because they're
link |
01:09:00.880
so interesting. It's such an interesting question.
link |
01:09:02.480
It is a fun set of questions.
link |
01:09:03.520
Yes, it's a fun set of questions. So the genders as we look through evolutionary history, what's
link |
01:09:08.400
the usefulness of that as opposed to having just one or like millions?
link |
01:09:13.520
So there's a question in evolution of life on earth, there are very few species that
link |
01:09:18.240
have more than two genders. There are some, but they aren't very many. But there's an
link |
01:09:22.800
enormous number of species that do have two genders, much more than one. And so there's
link |
01:09:27.440
a literature on why did multiple genders evolve, and that's sort of what's the point of having
link |
01:09:34.080
males and females versus hermaphrodites. So most plants are hermaphrodites, that is they
link |
01:09:40.960
would mate male female, but each plant can be either role. And then most animals have
link |
01:09:47.520
chosen to split into males and females. And then they're differentiating the two genders.
link |
01:09:52.880
And there's an interesting set of questions about why that happens.
link |
01:09:56.320
Because you can do selection, you basically have like one gender competes for the affection
link |
01:10:03.760
of the other, and there's a sexual partnership that creates the offspring. So there's sexual
link |
01:10:08.240
selection. It's like at a party, it's nice to have dance partners. And then
link |
01:10:14.000
each one gets to choose based on certain characteristics. And that's an efficient
link |
01:10:18.880
mechanism for adapting to the environment, being successfully adapted to the environment.
link |
01:10:24.400
It does look like there's an advantage. If you have males, then the males can take higher
link |
01:10:29.760
variance. And so there can be stronger selection among the males in terms of weeding out genetic
link |
01:10:34.240
mutations because the males have a higher variance in their mating success.
link |
01:10:38.240
Yes. Sure. Okay. Question number two, what percent of hello aliens evolved from land
link |
01:10:44.720
animals as opposed to plants or ocean slash air organisms? By the way, I did recently
link |
01:10:53.680
see that only 10% of species on earth are in the ocean. So there's a lot more variety
link |
01:11:03.600
on land. There is. It's interesting. So why is that? I can't even intuit exactly why that would
link |
01:11:10.480
be. Maybe survival on land is harder and so you get a lot more. The story that I understand is
link |
01:11:16.160
it's about small niches. So speciation can be promoted by having multiple different species.
link |
01:11:23.200
So in the ocean, species are larger. That is there are more creatures in each species because the
link |
01:11:29.520
ocean environments don't vary as much. So if you're good in one place, you're good in many
link |
01:11:33.040
other places. But on land, and especially in rivers, rivers contain an enormous percentage of
link |
01:11:38.400
the kinds of species on land, you see, because they vary so much from place to place. And so
link |
01:11:46.800
a species can be good in one place and then other species can't really compete because they came
link |
01:11:51.440
from a different place where things are different. So it's a remarkable fact actually that speciation
link |
01:11:58.640
promotes evolution in the long run. That is more evolution has happened on land because there have
link |
01:12:03.440
been more species on land because each species has been smaller. And that's actually a warning
link |
01:12:08.800
about something called rot that I've thought a lot about, which is one of the problems with
link |
01:12:13.360
even a world government, which is large systems of software today just consistently rot and decay
link |
01:12:19.120
with time and have to be replaced. And that plausibly also is a problem for other large
link |
01:12:23.440
systems, including biological systems, legal systems, regulatory systems. And it seems like
link |
01:12:29.760
large species actually don't evolve as effectively as small ones do. And that's an important thing
link |
01:12:36.720
to notice about that. And that's different from ordinary sort of evolution in economies on Earth
link |
01:12:44.640
in the last few centuries, say. On Earth, the more technical evolution and economic growth happens in
link |
01:12:51.280
larger integrated cities and nations. But in biology, it's the other way around. More evolution
link |
01:12:56.800
happened in the fragmented species. Yeah, it's such a nuanced discussion because you can also
link |
01:13:02.800
push back in terms of nations and, at least, companies. It's like large companies seem to evolve
link |
01:13:08.640
less effectively. Even though they have more resources, they don't even have better
link |
01:13:17.760
resilience. And when you look at the scale of decades and centuries, it seems like a lot of
link |
01:13:23.440
large companies die. But still large economies do better, like large cities grow better than small
link |
01:13:29.440
cities. Large integrated economies like the United States or the European Union do better than small
link |
01:13:34.240
fragmented ones. So, yeah, sure. That's a very interesting, long discussion. But so most of the
link |
01:13:41.040
people, and obviously votes on Twitter represent the absolute objective truth of things.
link |
01:13:48.240
But an interesting question about oceans is that, okay, remember I told you about how most
link |
01:13:52.800
planets would last for trillions of years and be later, right? So people have tried to explain why
link |
01:13:58.640
life appeared on Earth by saying, oh, all those planets are going to be unqualified for life
link |
01:14:02.800
because of various problems. That is, they're around smaller stars, which last longer, and
link |
01:14:06.320
smaller stars have some things like more solar flares, maybe more tidal locking. But almost
link |
01:14:11.920
all of these problems with longer lived planets aren't problems for ocean worlds. And a large
link |
01:14:17.680
fraction of planets out there are ocean worlds. So if life can appear on an ocean world, then
link |
01:14:23.520
that pretty much ensures that these planets that last a very long time could have advanced life
link |
01:14:30.240
because there's a huge fraction of ocean worlds. So that's actually an open question.
link |
01:14:34.480
So when you say, sorry, when you say life appear, you're kind of saying life and intelligent life.
link |
01:14:41.840
So that's an open question. Is land needed? That's, I suppose, the question behind
link |
01:14:50.640
the Twitter poll, which is a grabby alien civilization that comes to say hello,
link |
01:14:57.360
what's the chance that they first began their early steps, the difficult steps they took on
link |
01:15:04.000
land? What do you think? 80%, most people on Twitter think it's very likely on land.
link |
01:15:14.320
I think people are discounting ocean worlds too much. That is, I think people tend to assume that
link |
01:15:20.480
whatever we did must be the only way it's possible. And I think people aren't giving
link |
01:15:23.840
enough credit for other possible paths. Dolphins, Waterworld, by the way,
link |
01:15:28.720
people criticize that movie. I love that movie. Kevin Costner can do me no wrong.
link |
01:15:32.960
Okay, next question. What percent of hello aliens once had a nuclear war with greater
link |
01:15:39.600
than 10 nukes fired in anger? So not out of incompetence or as an accident,
link |
01:15:47.680
intentional firing of nukes and less than 20% was the most popular vote.
link |
01:15:54.000
And that just seems wrong to me.
link |
01:15:56.240
So like, I wonder what, so most people think once you get nukes, we're not going to fire them.
link |
01:16:02.400
They believe in the power.
link |
01:16:04.880
I think they're assuming that if you had a nuclear war, then that would just end
link |
01:16:08.240
civilization for good. I think that's the thinking.
link |
01:16:10.720
That's the main thing.
link |
01:16:11.760
And I think that's just wrong. I think you could rise again after a nuclear war.
link |
01:16:15.120
It might take 10,000 years or 100,000 years, but it could rise again.
link |
01:16:18.800
So what do you think about mutual assured destruction
link |
01:16:21.520
as a force to prevent people from firing nuclear weapons? That's a question that now,
link |
01:16:28.480
to a terrifying degree, has been raised with what's going on.
link |
01:16:31.920
Well, I mean, clearly it has had an effect. The question is just how strong an effect for how
link |
01:16:36.800
long. I mean, clearly we have not gone wild with nuclear war and clearly the devastation that you
link |
01:16:43.680
would get if you initiated a nuclear war is part of the reasons people have been reluctant to start
link |
01:16:47.520
a war. The question is just how reliably will that ensure the absence of a war?
link |
01:16:52.800
Yeah. The night is still young.
link |
01:16:54.400
Exactly.
link |
01:16:54.800
This has been 70 years or whatever it's been.
link |
01:16:57.360
I mean, but what do you think? Do you think we'll see nuclear war in the century?
link |
01:17:06.880
I don't know if in the century, but it's the sort of thing that's likely to happen eventually.
link |
01:17:12.800
That's a very loose statement. Okay. I understand. Now this is where I pull you out of your
link |
01:17:17.200
mathematical model and ask a human question. Do you think this particular human question...
link |
01:17:22.480
I think we've been lucky that it hasn't happened so far.
link |
01:17:24.720
But what is the nature of nuclear war? Let's think about this. There's dictators, there's democracies,
link |
01:17:36.800
miscommunication. How do wars start? World War I, World War II.
link |
01:17:40.480
So the biggest datum here is that we've had an enormous decline in major war over the last
link |
01:17:46.000
century. So that has to be taken into account now. So the problem is war is a process that has a very
link |
01:17:52.960
long tail. That is, there are rare, very large wars. So the average war is much worse than the
link |
01:18:00.640
median war because of this long tail. And that makes it hard to identify trends over time. So
link |
01:18:08.080
the median war has clearly gone way down in the last century, as has the median rate of war. But it could
link |
01:18:12.480
be that's because the tail has gotten thicker. And in fact, the average war is just as bad,
link |
01:18:17.200
but most of the damage is going to be in the big wars. So that's the thing we're not so sure about.
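The thick-tail point can be illustrated by sampling from a Pareto distribution; the tail exponent used here is an illustrative assumption, not a value fitted to actual war data:

```python
import random

random.seed(1)

# Draw event sizes from a Pareto distribution with an assumed tail
# exponent alpha. Real war or pandemic fits would estimate alpha from
# data; this value is purely illustrative.
alpha = 1.3
sizes = sorted(random.paretovariate(alpha) for _ in range(100_000))

n = len(sizes)
median = sizes[n // 2]
mean = sum(sizes) / n

# Share of total damage contributed by the biggest 1% of events.
top_share = sum(sizes[int(n * 0.99):]) / sum(sizes)

print(f"median event size: {median:.2f}")
print(f"mean event size:   {mean:.2f}")  # far above the median
print(f"damage share of top 1% of events: {top_share:.1%}")
```

With a thick tail like this, the mean sits well above the median and a handful of the largest events account for a large share of the total damage, which is exactly why the median war understates the expected war.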
link |
01:18:21.440
There's no strong data on wars which, because of the destructive nature of the weapons,
link |
01:18:31.600
kill hundreds of millions of people. There's no data on this.
link |
01:18:35.440
So, but we can start intuiting.
link |
01:18:37.360
But we can see that the power law, we can do a power law fit to the rate of wars and it's a
link |
01:18:42.080
power law with a thick tail. So it's one of those things that you should expect most of the damage
link |
01:18:46.560
to be in the few biggest ones. So that's also true for pandemics and a few other things. For
link |
01:18:51.200
pandemics, most of the damage is in the few biggest ones. So the median pandemic is far less than
link |
01:18:55.840
the average that you should expect in the future. But that fitting of the data is very questionable
link |
01:19:02.880
because everything you said is correct. The question is like, what can we infer about the
link |
01:19:09.120
future of civilization, threatening pandemics or nuclear war from studying the history of the
link |
01:19:19.520
20th century? So like, you can't just fit it to the data, the rate of wars and the destructive
link |
01:19:25.120
nature. Like that's not, that's not how nuclear war will happen. Nuclear war happens with two
link |
01:19:31.360
assholes or idiots that have access to a button.
link |
01:19:35.120
Small wars happen that way too.
link |
01:19:36.880
No, I understand that, but that's, it's very important. Small wars aside, it's very important
link |
01:19:41.600
to understand the dynamics, the human dynamics and the geopolitics of the way nuclear war happens
link |
01:19:46.720
in order to predict how we can minimize the chance of a...
link |
01:19:51.520
But it is a common and useful intellectual strategy to take something that could be really
link |
01:19:56.800
big but is often very small, fit the distribution of the data on small things, of which
link |
01:20:01.120
you have a lot, and then ask, do I believe the big things are really that different? Right?
link |
01:20:05.280
I see.
link |
01:20:05.760
So sometimes it's reasonable to say like, say with tornadoes or even pandemics or something,
link |
01:20:10.400
the underlying process might not be that different for the big and small ones.
link |
01:20:14.960
It might not be. The fact that mutual assured destruction seems to work to some degree
link |
01:20:23.680
shows you that to some degree it's different than the small wars.
link |
01:20:31.040
So it's a really important question to understand is, are humans capable, one human, like how many
link |
01:20:40.880
humans on earth, if I give them a button now, say you pressing this button will kill everyone on
link |
01:20:46.960
earth, everyone, right? How many humans will press that button? I want to know those numbers,
link |
01:20:53.600
like day to day, minute to minute, how many people have that much irresponsibility, evil,
link |
01:21:01.040
incompetence, ignorance, whatever word you want to assign, there's a lot of dynamics of the
link |
01:21:06.240
psychology that leads you to press that button, but how many? My intuition is the number, the more
link |
01:21:12.320
destructive that press of a button, the fewer humans you find. And that number gets very close
link |
01:21:17.520
to zero very quickly, especially among people who have access to such a button. But that's perhaps
link |
01:21:24.560
more a hope than a reality. And unfortunately we don't have good data on this,
link |
01:21:28.240
which is like how destructive are humans willing to be?
link |
01:21:34.480
So I think part of this, you just have to ask what time scales you're looking at,
link |
01:21:39.920
right? So if you say, if you look at the history of war, you know, we've had a lot of wars pretty
link |
01:21:44.880
consistently over many centuries. So if I ask, if you ask, will we have a nuclear war in the
link |
01:21:50.000
next 50 years? I might say, well, probably not. If I say 500 or 5,000 years, like if the same sort
link |
01:21:56.400
of risks are underlying and they just continue, then you have to add that up over time and think
link |
01:22:00.960
the risk is getting a lot larger the longer a timescale we're looking at.
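The "add that up over time" step can be sketched as a constant annual hazard compounding over a horizon; the 0.5% annual probability is purely an illustrative assumption, not a figure from the conversation:

```python
# Probability of at least one catastrophic war over a horizon,
# assuming a constant, independent annual probability p.
# p = 0.5% per year is purely an illustrative assumption.
p_annual = 0.005

for years in (50, 500, 5000):
    p_at_least_one = 1.0 - (1.0 - p_annual) ** years
    print(f"{years:>5} years: {p_at_least_one:.1%}")
```

Under these assumptions the 50-year risk stays modest, but over 500 or 5,000 years the cumulative probability approaches certainty, which is the shape of Hanson's argument.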
link |
01:22:04.400
But okay, let's generalize nuclear war because what I was more referring to is something that
link |
01:22:09.920
kills more than 20% of humans on earth and injures or makes the other 80%
link |
01:22:25.680
suffer horribly, survive, but suffer. That's what I was referring to. So when you look at 500 years
link |
01:22:32.320
from now, that might not be nuclear war. That might be something else. That's that kind of,
link |
01:22:36.640
has that destructive effect. And I don't know, these feel like novel questions in the history
link |
01:22:45.280
of humanity. I just don't know. I think since nuclear weapons, this has been, you know,
link |
01:22:52.400
engineered pandemics, for example, robotics, nanobots. It just seems like a real new
link |
01:23:02.560
possibility that we have to contend with. We don't have good models, from my perspective.
link |
01:23:08.160
So if you look on say the last thousand years or 10,000 years, we could say we've seen a certain
link |
01:23:13.280
rate at which people are willing to make big destruction in terms of war. Okay. And if you're
link |
01:23:19.680
willing to project that data forward, that I think like if you want to ask over periods of
link |
01:23:23.920
thousands or tens of thousands of years, you would have a reasonable data set. So the key
link |
01:23:28.000
question is what's changed lately? Okay. And so a big question, to which I've given a lot of thought,
link |
01:23:34.960
is what are the major changes that seem to have happened in culture and human attitudes over the
link |
01:23:39.920
last few centuries and what's our best explanation for those so that we can project them forward into
link |
01:23:44.160
the future. And I have a story about that, which is the story that we have been drifting back toward
link |
01:23:51.600
forager attitudes in the last few centuries as we get rich. So the idea is we spent a million years
link |
01:23:57.920
being a forager and that was a very sort of standard lifestyle that we know a lot about.
link |
01:24:04.480
Foragers sort of live in small bands. They make decisions cooperatively. They share food. They,
link |
01:24:10.080
you know, they don't have much property, et cetera. And humans liked that. And then 10,000 years ago,
link |
01:24:16.800
farming became possible, but it was only possible because we were plastic enough to really change
link |
01:24:21.280
our culture. Farming styles and cultures are very different. They have slavery, they have war,
link |
01:24:25.840
they have property, they have inequality, they have kings. They stay in one place instead of
link |
01:24:30.880
wandering. They don't have as much diversity of experience or food. They have more disease.
link |
01:24:35.920
This farming life is just very different. But humans were able to sort of introduce conformity
link |
01:24:41.600
and religion and all sorts of things to become just a very different kind of creature as farmers.
link |
01:24:45.280
Farmers are just really different than foragers in terms of their values and their lives.
link |
01:24:49.920
But the pressures that made foragers into farmers were in part mediated by poverty.
link |
01:24:55.120
Farmers are poor. And if they deviated from the farming norms that people around them supported,
link |
01:25:00.080
they were quite at risk of starving to death. And then in the last few centuries,
link |
01:25:05.440
we've gotten rich. And as we've gotten rich, the social pressures that turned foragers into farmers
link |
01:25:11.920
have become less persuasive to us. So, for example, a farming young woman who was told,
link |
01:25:18.000
if you have a child out of wedlock, you and your child may starve, that was a credible threat.
link |
01:25:22.640
She would see actual examples around her to make that a believable threat. Today,
link |
01:25:28.000
if you say to a young woman, you shouldn't have a child out of wedlock, she will see other young
link |
01:25:31.760
women around her doing okay that way. We're all rich enough to be able to afford that sort of a
link |
01:25:36.320
thing. And therefore, she's more inclined often to go with her inclinations, her sort of more
link |
01:25:42.400
natural inclinations about such things rather than to be pressured to follow the official
link |
01:25:47.520
farming norms that say you shouldn't do that sort of thing. And all through our lives, we have been
link |
01:25:51.440
drifting back toward forager attitudes because we've been getting rich. And so, aside from at
link |
01:25:57.920
work, which is an exception, but elsewhere, I think this explains trends toward less slavery,
link |
01:26:04.000
more democracy, less religion, less fertility, more promiscuity, more travel, more art, more leisure,
link |
01:26:12.160
fewer work hours. All of these trends are basically explained by becoming more forager like.
link |
01:26:18.960
And much science fiction celebrates this, Star Trek or the Culture novels, people
link |
01:26:23.360
like this image that we are moving toward this world. We're basically like foragers, we're peaceful,
link |
01:26:27.840
we share, we make decisions collectively, we have a lot of free time, we are into art.
link |
01:26:34.880
So forager, you know, forager is a word and it's a loaded word because it's connected to
link |
01:26:42.880
the actual, what life was actually like at that time. As you mentioned, we sometimes don't do a
link |
01:26:49.200
good job of telling accurately what life was like back then. But you're saying if it's not exactly
link |
01:26:55.120
like foragers, it rhymes in some fundamental way. You also said peaceful. Is it obvious that a
link |
01:27:01.920
forager with a nuclear weapon would be peaceful? I don't know if that's 100% obvious. So we know,
link |
01:27:10.080
again, we know a fair bit about what foragers lives were like. The main sort of violence they
link |
01:27:14.960
had would be sexual jealousy. They were relatively promiscuous and so there'd be a lot of jealousy.
link |
01:27:19.600
But they did not have organized wars with each other. That is, they were at peace with their
link |
01:27:24.480
neighboring forager bands. They didn't have property in land or even in people. They didn't
link |
01:27:28.880
really have marriage. And so they were, in fact, peaceful.
link |
01:27:35.440
When you think about large scale wars, they don't start large scale wars.
link |
01:27:38.400
They didn't have coordinated large scale wars the way chimpanzees do. Chimpanzees do
link |
01:27:42.560
have wars between one tribe of chimpanzees and others, but human foragers do not. Farmers return
link |
01:27:47.440
to that, of course, the more chimpanzee-like styles. Well, that's a hopeful message. If we
link |
01:27:52.800
could return real quick to the Hello Aliens Twitter thread. One of them is really interesting
link |
01:28:00.160
about language. What percent of Hello Aliens would be able to talk to us in our language?
link |
01:28:05.280
This is the question of communication. It actually gets to the nature of language.
link |
01:28:10.080
It also gets to the nature of how advanced you expect them to be.
link |
01:28:16.240
So I think some people see that we have advanced over the last thousands of years,
link |
01:28:22.880
and we aren't reaching any sort of limit. And so they tend to assume it could go on forever.
link |
01:28:28.240
And I actually tend to think that within, say, 10 million years, we will sort of max out on
link |
01:28:34.400
technology. We will sort of learn everything that's feasible to know for the most part. And then
link |
01:28:40.960
obstacles to understanding would more be about sort of cultural differences, like ways in which
link |
01:28:45.680
different places had just chosen to do things differently. And so then the question is, is it
link |
01:28:52.080
even possible to communicate across some cultural distances? And I could imagine some maybe advanced
link |
01:28:59.760
aliens who just become so weird and different from each other, they can't communicate with each other.
link |
01:29:03.680
But we're probably pretty simple compared to them. So I would think, sure, if they wanted to,
link |
01:29:10.720
they could communicate with us. So it's the simplicity of the recipient. I tend to,
link |
01:29:17.200
just to push back, let's explore the possibility where that's not the case. Can we communicate
link |
01:29:23.600
with ants? I find that this idea that... We're not very good at communicating in general.
link |
01:29:33.280
Oh, you're saying... All right, I see. You're saying once you get orders of magnitude better
link |
01:29:38.400
at communicating... Once they had maxed out on all communication technology in general,
link |
01:29:43.440
and they just understood in general how to communicate with lots of things, and had done
link |
01:29:47.440
that for millions of years. But you have to be able to... This is so interesting. As somebody
link |
01:29:51.520
who cares a lot about empathy and imagining how other people feel, communication requires empathy,
link |
01:30:00.240
meaning you have to truly understand how the other person, the other organism sees the world.
link |
01:30:08.720
It's like a four dimensional species talking to a two dimensional species. It's not as trivial as,
link |
01:30:15.200
to me at least, as it might at first seem. So let me reverse my position a little,
link |
01:30:20.880
because I'll say, well, the hello aliens question really combines two different scenarios
link |
01:30:28.160
that we're slipping over. So one scenario would be that the hello aliens would be like grabby
link |
01:30:34.560
aliens. They would be just fully advanced. They would have been expanding for millions of years.
link |
01:30:38.400
They would have a very advanced civilization, and then they would finally be arriving here
link |
01:30:43.120
after a billion years perhaps of expanding, in which case they're going to be crazy advanced
link |
01:30:47.760
at some maximal level. But the hello aliens scenario is about aliens we might meet soon, which might be sort of
link |
01:30:55.040
UFO aliens, and UFO aliens probably are not grabby aliens. How do you get here if you're
link |
01:31:02.480
not a grabby alien? Well, they would have to be able to travel. Oh. But they would not be expansive.
link |
01:31:11.440
So the road trip doesn't count as grabby. So we're talking about expanding the colony,
link |
01:31:17.200
the comfortable colony. So the question is, if UFOs, some of them are aliens,
link |
01:31:24.240
what kind of aliens would they be? This is sort of the key question you have to ask in order to
link |
01:31:28.880
try to interpret that scenario. The key fact we would know is that they are here right now,
link |
01:31:36.160
but the universe around us is not full of an alien civilization. So that says right off the bat
link |
01:31:43.520
that they chose not to allow massive expansion of a grabby civilization.
link |
01:31:50.240
Is it possible that they chose it, but we just don't see them yet? These are the stragglers,
link |
01:31:56.400
the journeymen. So the timing coincidence is, it's almost surely if they are here now,
link |
01:32:02.800
they are much older than us. They are many millions of years older than us. And so they
link |
01:32:08.400
could have filled the galaxy in that last millions of years if they had wanted to.
link |
01:32:13.360
That is, they couldn't just be right at the edge. Very unlikely. Most likely they would have been
link |
01:32:18.240
around waiting for us for a long time. They could have come here any time in the last millions of
link |
01:32:22.320
years, and they've just chosen, they've been waiting around for this, or they just chose to come
link |
01:32:25.760
recently. But the timing coincidence, it would be crazy unlikely that they just happen to be able to
link |
01:32:31.520
get here, say in the last hundred years. They would no doubt have been able to get here far
link |
01:32:36.800
earlier than that. Again, we don't know. So this is in reference to, like, UFO sightings on Earth. We don't
link |
01:32:41.760
know if this kind of increase in sightings has anything to do with actual visitations.
link |
01:32:46.480
I'm just talking about the timing. They arose at some point in space time.
link |
01:32:52.080
And it's very unlikely that that was just to the point that they could just barely get here
link |
01:32:56.640
recently. Almost surely they could have gotten here much earlier. And throughout the stretch
link |
01:33:03.280
of several billion years that earth existed, they could have been here often. Exactly. So
link |
01:33:07.520
they could have therefore filled the galaxy long time ago if they had wanted to. Let's push back
link |
01:33:12.480
on that. The question to me is, isn't it possible that the expansion of a civilization is much
link |
01:33:20.080
harder than the travel? The sphere of the reachable is different than the sphere of the colonized.
link |
01:33:31.440
So isn't it possible that the sphere of places where like the stragglers go, the different
link |
01:33:38.560
people that journey out, the explorers, is much, much larger and grows much faster than the
link |
01:33:44.640
civilization? So in which case, like they would visit us. There's a lot of visitors, the grad
link |
01:33:51.040
students of the civilization. They're like exploring, they're collecting the data, but
link |
01:33:56.560
we're not yet going to see them. And by yet, I mean across millions of years.
link |
01:34:01.280
The time delay between when the first thing might arrive and then when colonists could arrive
link |
01:34:10.400
en masse and do a massive amount of work is cosmologically short. In human history, of course, sure, there
link |
01:34:16.480
might be a century between that, but a century is just a tiny amount of time on the scales we're
link |
01:34:22.240
talking about. So this is, in computer science, ant colony optimization. It's true for ants.
link |
01:34:28.400
So it's like when the first ant shows up, it's likely if there's anything of value,
link |
01:34:33.200
it's likely the other ants will follow quickly. Yeah.
link |
01:34:36.800
Relatively short. It's also true that traveling over very long distances, probably one of the
link |
01:34:42.800
main ways to make that feasible is that you land somewhere, you colonize a bit, you create new
link |
01:34:48.240
resources that can then allow you to go farther. Many short hops as opposed to a giant long journey.
link |
01:34:53.200
Exactly. Those hops require that you are able to start a colonization of sorts along those hops.
link |
01:34:59.280
You have to be able to stop somewhere, make it into a way station such that you can then support
link |
01:35:04.640
you moving farther. So what do you think of, there's been a lot of UFO sightings. What do
link |
01:35:10.880
you think about those UFO sightings and what do you think if any of them are of extraterrestrial
link |
01:35:19.680
origin and we don't see giant civilizations out in the sky, how do you make sense of that then?
link |
01:35:27.440
I want to do some clearing of throats, which people like to do on this topic, right? They want
link |
01:35:33.040
to make sure you understand they're saying this and not that, right? So I would say the analysis
link |
01:35:39.120
needs both a prior and a likelihood. So the prior is what are the scenarios that are at all plausible
link |
01:35:47.360
in terms of what we know about the universe. And then the likelihood is the particular actual
link |
01:35:52.240
sightings, like how hard are those to explain through various means. I will establish myself
link |
01:35:58.800
as somewhat of an expert on the prior. I would say my studies and the things I've studied make me an
link |
01:36:04.160
expert and I should stand up and have an opinion on that and be able to explain it. The likelihood,
link |
01:36:09.200
however, is not my area of expertise. That is, I'm not a pilot. I don't do atmospheric studies.
link |
01:36:15.840
I haven't studied in detail the various kinds of atmospheric phenomena or
link |
01:36:20.240
whatever that might be used to explain the particular sightings. I can just say from
link |
01:36:24.000
my amateur stance, the sightings look damn puzzling. They do not look easy to dismiss.
link |
01:36:30.480
The attempts I've seen to easily dismiss them seem to me to fail. It seems like these are
link |
01:36:35.280
pretty puzzling, weird stuff that deserves an expert's attention in terms of considering,
link |
01:36:42.160
asking what the likelihood is. So an analogy I would make is a murder trial. On average, if we say,
link |
01:36:48.400
what's the chance any one person murdered another person as a prior probability, maybe one in a
link |
01:36:52.960
thousand people get murdered. Maybe each person has a thousand people around them who could
link |
01:36:56.480
plausibly have done it. So the prior probability of a murder is one in a million. But we allow
link |
01:37:01.200
murder trials because often evidence is sufficient to overcome a one in a million prior because the
link |
01:37:07.200
evidence is often strong enough, right? My guess, rough guess for the UFOs as aliens
link |
01:37:13.840
scenario, at least some of them, is that the prior is roughly one in a thousand,
link |
01:37:17.760
much higher than the usual murder trial, plenty high enough that strong physical evidence could
link |
01:37:23.920
put you over the top to think it's more likely than not. But I'm not an expert on that physical
link |
01:37:28.720
evidence. I'm going to leave that part to someone else. I'm going to say the prior is pretty high.
link |
01:37:33.440
This isn't a crazy scenario. So then I can elaborate on where my prior comes from.
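As an aside, the prior-versus-likelihood framing above can be sketched with Bayes' rule in odds form. The likelihood ratio used here is a hypothetical number chosen purely for illustration, not a figure from the conversation:

```python
# A minimal sketch of the prior/likelihood reasoning described above,
# using Bayes' rule in odds form. The likelihood ratio is a made-up
# illustrative number, not a figure from the conversation.

def posterior(prior: float, likelihood_ratio: float) -> float:
    """Update a prior probability given a likelihood ratio
    P(evidence | hypothesis) / P(evidence | not hypothesis)."""
    prior_odds = prior / (1.0 - prior)
    posterior_odds = prior_odds * likelihood_ratio
    return posterior_odds / (1.0 + posterior_odds)

# Murder-trial analogy: ~1/1000 people are murdered, with ~1000
# plausible suspects each, so the prior on "this person did it"
# is about one in a million.
murder_prior = (1 / 1000) * (1 / 1000)

# Hanson's rough prior for the "some UFOs are aliens" scenario.
ufo_prior = 1 / 1000

# A hypothetical 10,000:1 likelihood ratio for strong evidence.
strong_evidence = 10_000

print(posterior(murder_prior, strong_evidence))  # ~0.0099
print(posterior(ufo_prior, strong_evidence))     # ~0.91
```

The point of the analogy: the same evidence strength that barely moves a one-in-a-million murder prior is enough to push a one-in-a-thousand prior past "more likely than not."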
link |
01:37:38.000
What scenario could make most sense of this data? My scenario to make sense of it has two main parts.
link |
01:37:46.480
First is panspermia siblings. So panspermia is the process by which life might have arrived on
link |
01:37:55.040
earth from elsewhere. And a plausible time for that, I mean, it would have to happen very early
link |
01:38:00.960
in Earth's history, because we see life early in its history. And a plausible time could have been
link |
01:38:05.200
during the stellar nursery where the sun was born with many other stars in the same close proximity
link |
01:38:12.400
with lots of rocks flying around, able to move things from one place to another.
link |
01:38:18.240
If a rock with life on it, from some planet with life, came into that stellar nursery,
link |
01:38:24.480
it plausibly could have seeded many planets in that stellar nursery all at the same time. They're
link |
01:38:30.080
all born at the same time in the same place, pretty close to each other, lots of rocks flying
link |
01:38:33.600
around. So a panspermia scenario would then create siblings, i.e. there would be, say, a few thousand
link |
01:38:42.080
other planets out there. So after the nursery forms, it drifts, it separates, they drift apart.
link |
01:38:48.240
And so out there in the galaxy, there would now be a bunch of other stars all formed at the same
link |
01:38:52.560
time. And we can actually spot them in terms of their spectrum. And they would have then started
link |
01:38:58.480
on the same path of life as we did with that life being seeded, but they would move at different
link |
01:39:03.440
rates. And most likely, most of them would never reach an advanced level before the deadline. But
link |
01:39:11.120
maybe one other did, and maybe it did before us. So if they did, they could know all this,
link |
01:39:18.880
and they could go searching for their siblings. That is, they could look in the sky for the other
link |
01:39:22.560
stars that match the spectrum that came from this nursery.
link |
01:39:26.880
They could identify their sibling stars in the galaxy, the thousands of them. And those would be
link |
01:39:32.560
of special interest to them because they would think, well, life might be on those. And they
link |
01:39:38.000
could go looking for them. Can we just, such a brilliant mathematical, philosophical, physical,
link |
01:39:47.040
biological idea of panspermia siblings, because we all kind of started at a similar time
link |
01:39:53.840
in this local pocket of the universe. And so that changes a lot of the math.
link |
01:40:02.560
So that would create this correlation between when advanced life might appear,
link |
01:40:06.080
no longer just random independent spaces and space time. There'd be this cluster, perhaps.
link |
01:40:10.800
And that allows interaction between non-grabby alien civilizations, like kind of
link |
01:40:19.040
primitive alien civilizations, like us with others. And they might be a little bit ahead.
link |
01:40:25.360
That's so fascinating.
link |
01:40:26.880
They would probably be a lot ahead. So the puzzle is, if they happened before us,
link |
01:40:33.600
they probably happened hundreds of millions of years before us.
link |
01:40:37.040
But less than a billion.
link |
01:40:38.560
Less than a billion, but still plenty of time that they could have become grabby and filled
link |
01:40:43.760
the galaxy and gone beyond. So the fact is, they chose not to become grabby. That would
link |
01:40:49.280
have to be the interpretation. If we have panspermia siblings...
link |
01:40:52.080
Plenty of time to become grabby, you said. So they should be gone.
link |
01:40:54.480
Yes, they had plenty of time and they chose not to.
link |
01:40:58.240
Are we sure about this? A hundred million years is enough.
link |
01:41:02.800
So I told you before that I said, within 10 million years, our descendants will become
link |
01:41:07.600
grabby or not.
link |
01:41:08.880
And they'll have that choice. Okay.
link |
01:41:10.640
Right? And so they're clearly more than 10 million years earlier than us, so they chose not to.
link |
01:41:16.240
But still go on vacation, look around, just not grabby.
link |
01:41:20.400
If they chose not to expand, that's going to have to be a rule they set to not allow
link |
01:41:25.040
any part of themselves to do it. If they let any little ship fly away with the ability
link |
01:41:31.440
to create a colony, the game's over. Then the universe becomes grabby from their origin
link |
01:41:38.000
with this one colony, right? So in order to prevent their civilization being grabby,
link |
01:41:42.240
they have to have a rule they enforce pretty strongly that no part of them can ever try
link |
01:41:46.640
to do that.
link |
01:41:46.960
Through a global authoritarian regime or through something that's internal to them,
link |
01:41:52.800
meaning it's part of the nature of life that it doesn't want...
link |
01:41:56.960
As like a political officer in the brain or whatever.
link |
01:41:59.920
Yes. There's something in human nature that prevents you from what or like alien nature
link |
01:42:08.640
that as you get more advanced, you become lazier and lazier in terms of exploration
link |
01:42:13.280
and expansion.
link |
01:42:14.480
So I would say they would have to have enforced a rule against expanding and that rule would
link |
01:42:20.320
probably make them reluctant to let people leave very far. You know, any one vacation
link |
01:42:25.680
trip far away could risk an expansion from this vacation trip. So they would probably
link |
01:42:29.760
have a pretty tight lid on just allowing any travel out from their origin in order to
link |
01:42:34.560
enforce this rule. But then we also know, well, they would have chosen to come here.
link |
01:42:40.640
So clearly they made an exception from their general rule to say, okay, but an expedition
link |
01:42:45.920
to Earth, that should be allowed.
link |
01:42:48.160
It could be intentional exception or incompetent exception.
link |
01:42:52.640
But if incompetent, then they couldn't maintain this over 100 million years, this policy of
link |
01:42:57.680
not allowing any expansion. So we have to see that they have not just
link |
01:43:01.520
had a policy to try, they succeeded over 100 million years in preventing the expansion.
link |
01:43:07.600
That's a substantial competence.
link |
01:43:09.600
Let me think about this. So you don't think there could be a barrier in 100 million years,
link |
01:43:14.560
you don't think there could be, like, a technological barrier to becoming expansionary?
link |
01:43:25.840
Imagine the Europeans had tried to prevent anybody from leaving Europe to go to the new
link |
01:43:30.240
world. And imagine what it would have taken to make that happen over 100 million years.
link |
01:43:36.160
Yeah, it's impossible.
link |
01:43:37.840
They would have had to have very strict, you know, guards at the borders saying, no, you
link |
01:43:43.360
can't go.
link |
01:43:44.000
But just to clarify, you're not suggesting that's actually possible.
link |
01:43:48.880
I am suggesting it's possible.
link |
01:43:51.200
I don't know how you keep, in my silly human brain, maybe it's the brain that values freedom,
link |
01:43:57.600
but I don't know how you can keep, no matter how much force, no matter how much censorship
link |
01:44:03.680
or control or so on, I just don't know how you can keep people from exploring into the
link |
01:44:10.480
mysterious, into the unknown.
link |
01:44:11.680
You're thinking of people, we're talking aliens. So remember, there's a vast space
link |
01:44:14.800
of different possible social creatures they could have evolved from, different cultures
link |
01:44:18.400
they could be in, different kinds of threats. I mean, there are many things, as you talked
link |
01:44:22.640
about, that most of us would feel very reluctant to do.
link |
01:44:25.680
This isn't one of those.
link |
01:44:26.560
Okay, so how, if the UFO sightings represent alien visitors, how the heck are they getting
link |
01:44:33.440
here under the panspermia siblings?
link |
01:44:36.320
So panspermia siblings is one part of the scenario, which is that's where they came
link |
01:44:40.880
from. And from that, we can conclude they had this rule against expansion and they've
link |
01:44:44.800
successfully enforced that. That also creates a plausible agenda for why they would be here,
link |
01:44:50.720
that is to enforce that rule on us. That is, if we go out and expanding, then we have defeated
link |
01:44:56.000
the purpose of this rule they set up.
link |
01:44:58.000
Interesting.
link |
01:44:58.800
Right? So they would be here to convince us to not expand.
link |
01:45:03.680
Convince in quotes.
link |
01:45:05.200
Right? Through various mechanisms. So obviously, one thing we conclude is they didn't just
link |
01:45:09.280
destroy us. That would have been completely possible, right? So the fact that they're
link |
01:45:13.120
here and we are not destroyed means that they chose not to destroy us. They have some degree
link |
01:45:18.560
of empathy or whatever their morals are that would make them reluctant to just destroy
link |
01:45:24.160
us. They would rather persuade us.
link |
01:45:26.240
Destroy their brethren. And so they may have been, there's a difference in arrival and
link |
01:45:31.600
observation. They may have been observing for a very long time.
link |
01:45:34.480
Exactly.
link |
01:45:35.120
And they arrive to try to, not to try, I don't think to try to ensure that we don't become
link |
01:45:45.520
grabby.
link |
01:45:46.720
Which is because we can see that they did not, they must have enforced a rule against
link |
01:45:50.640
that and they are therefore here to, that's a plausible interpretation why they would
link |
01:45:55.680
risk this expedition when they clearly don't risk very many expeditions over this long
link |
01:45:59.520
period to allow this one exception because otherwise, if they don't, we may become grabby.
link |
01:46:04.720
And they could have just destroyed us, but they didn't.
link |
01:46:06.400
And they're closely monitoring the technological advancing of our civilization. Like what
link |
01:46:11.360
nuclear weapons is one thing that, all right, cool. That might have less to do with nuclear
link |
01:46:15.840
weapons and more with nuclear energy. Maybe they're monitoring fusion closely. Like how
link |
01:46:21.760
clever are these apes getting?
link |
01:46:23.280
So no doubt they have a button that if we get too uppity or risky, they can push the
link |
01:46:28.320
button and ensure that we don't expand. But they'd rather do it some other way. So now
link |
01:46:32.800
that's, that explains why they're here and why they aren't out there. But there's another
link |
01:46:36.720
thing that we need to explain. There's another key data we need to explain about UFOs if
link |
01:46:40.000
we're going to have a hypothesis that explains them. And this is something many people have
link |
01:46:43.680
noticed, which is they had two extreme options they could have chosen and didn't choose.
link |
01:46:50.400
They could have either just remained completely invisible. Clearly an advanced civilization
link |
01:46:54.720
could have been completely invisible. There's no reason they need to fly around and be
link |
01:46:58.000
noticed. They could just be in orbit and in dark satellites that are completely invisible
link |
01:47:02.240
to us watching whatever they want to watch. That would be well within their abilities.
link |
01:47:06.080
That's one thing they could have done. The other thing they could do is just show up
link |
01:47:09.360
and land on the White House lawn, as they say, and shake hands, like make themselves
link |
01:47:13.280
really obvious. They could have done either of those and they didn't do either of those.
link |
01:47:17.600
That's the next thing you need to explain about UFOs as aliens. Why would they take
link |
01:47:21.440
this intermediate approach, hanging out near the edge of visibility with somewhat impressive
link |
01:47:26.640
mechanisms, but not walking up and introducing themselves nor just being completely invisible?
link |
01:47:30.880
So, okay, a lot of questions there. So one, do you think it's obvious where the White
link |
01:47:37.360
House is or the White House lawn?
link |
01:47:39.760
Obvious where there are concentrations of humans that you could go up and introduce yourself to.
link |
01:47:42.400
But is humans the most interesting thing about Earth?
link |
01:47:46.000
Yeah.
link |
01:47:46.800
Are you sure about this? Because...
link |
01:47:48.640
If they're worried about an expansion, then they would be worried about a civilization
link |
01:47:52.960
that could be capable of expansion. Obviously humans are the civilization on Earth that's
link |
01:47:57.440
by far the closest to being able to expand.
link |
01:47:59.600
I just don't know if aliens obviously see...obviously see humans, like the individual
link |
01:48:10.800
humans, like the meat vehicles, as the center of focus for observing a life on a planet.
link |
01:48:19.520
They're supposed to be really smart and advanced. Like, this shouldn't be that hard for them.
link |
01:48:23.680
But I think we're actually the dumb ones, because we think humans are the important
link |
01:48:27.840
things. But it could be our ideas. It could be something about our technologies.
link |
01:48:32.640
But that's mediated with us. It's correlated with us.
link |
01:48:34.560
No, we make it seem like it's mediated by us humans. But the focus for alien civilizations
link |
01:48:43.360
might be the AI systems or the technologies themselves. That might be the organism. Like,
link |
01:48:49.200
what humans are like...human is the food, the source of the organism that's under observation,
link |
01:48:57.920
versus like...
link |
01:48:59.120
So if what they wanted to have close contact with was something that was close to humans,
link |
01:49:03.440
then they would be contacting those. And we would just incidentally see, but we would still see.
link |
01:49:08.080
But don't you think that...isn't it possible, taking their perspective,
link |
01:49:12.960
isn't it possible that they would want to interact with some fundamental aspect that
link |
01:49:16.960
they're interested in without interfering with it? And that's actually a very...no
link |
01:49:23.200
matter how advanced you are, it's very difficult to do.
link |
01:49:25.280
But that's puzzling. So, I mean, the prototypical UFO observation is a shiny,
link |
01:49:33.120
big object in the sky that has very rapid acceleration and no apparent surfaces for
link |
01:49:41.200
using air to manipulate at speed. And the question is, why that? Again, if they just...
link |
01:49:50.960
For example, if they just wanted to talk to our computer systems, they could move some sort of
link |
01:49:55.200
like a little probe that connects to a wire and reads and sends bits there. They don't need a
link |
01:50:00.720
shiny thing flying in the sky.
link |
01:50:02.160
But don't you think they would be looking for the right way to communicate, the right
link |
01:50:08.960
language to communicate? Everything you just said, looking at the computer systems,
link |
01:50:13.280
I mean, that's not a trivial thing. Coming up with a signal that us humans would not freak out
link |
01:50:20.320
too much about, but also understand, might not be that trivial.
link |
01:50:24.240
Well, so the not freak out part is another interesting constraint. So again, I said,
link |
01:50:28.320
like the two obvious strategies are just to remain completely invisible and watch,
link |
01:50:31.920
which would be quite feasible, or to just directly interact, come out and be really
link |
01:50:36.800
very direct, right? I mean, there's big things that you can see around. There's big cities,
link |
01:50:41.280
there's aircraft carriers, there's lots of... If you want to just find a big thing and come
link |
01:50:45.440
right up to it and like tap it on the shoulder or whatever, that would be quite feasible,
link |
01:50:49.280
but they're not doing that. So my hypothesis is that one of the other questions there was,
link |
01:50:57.280
do they have a status hierarchy? And I think most animals on earth
link |
01:51:02.160
who are social animals have a status hierarchy, and they would reasonably presume that we have
link |
01:51:07.040
a status hierarchy. And...
link |
01:51:09.840
Take me to your leader.
link |
01:51:11.360
Well, I would say their strategy is to be impressive and sort of get us to see them
link |
01:51:17.200
at the top of our status hierarchy. That's how, for example, we domesticate dogs, right?
link |
01:51:25.520
We convince dogs we're the leader of their pack, right? And we domesticate many animals that way,
link |
01:51:30.720
because we just swap into the top of their status hierarchy and we say,
link |
01:51:34.800
we're your top status animal, so you should do what we say, you should follow our lead.
link |
01:51:39.600
So the idea that would be, they are going to get us to do what they want by being top status.
link |
01:51:48.480
You know, all through history, kings and emperors, et cetera, have tried to impress their citizens
link |
01:51:52.720
and other people by having the bigger palace, the bigger parade, the bigger crown and
link |
01:51:56.640
diamonds, right? Whatever, maybe building a bigger pyramid, et cetera. It's a very well
link |
01:52:00.880
established trend to just be high status by being more impressive than the rest.
link |
01:52:05.680
To push back, when there's several orders of magnitude of power differential,
link |
01:52:11.520
asymmetry of power, I feel like that status hierarchy no longer applies. It's like memetic
link |
01:52:16.560
theory. It's like...
link |
01:52:18.000
Most emperors are several orders of magnitude more powerful than any one member of their empire.
link |
01:52:22.960
Let's increase that by even more. So like if I'm interacting with ants,
link |
01:52:29.600
I no longer feel like I need to establish my power with ants. I actually want to lower myself
link |
01:52:38.880
to the ants. I want to become the lowest possible ant so that they would welcome me.
link |
01:52:44.400
So I'm less concerned about them worshiping me. I'm more concerned about them welcoming me.
link |
01:52:49.600
It is important that you be nonthreatening and that you be local. So I think
link |
01:52:52.880
for example, if the aliens had done something really big in the sky, 100 light years away,
link |
01:52:57.600
that would be there, not here. And that could seem threatening. So I think their strategy to
link |
01:53:02.800
be the high status would have to be to be visible, but to be here and nonthreatening.
link |
01:53:06.480
I just don't know if it's obvious how to do that. Take your own perspective. You see a planet
link |
01:53:14.240
with relatively intelligent complex structures being formed, life forms. You could see this
link |
01:53:20.640
on Titan or something like that, or Europa. You start to see not just primitive bacterial
link |
01:53:29.600
life, but multicellular life. And it seems to form some very complicated cellular colonies,
link |
01:53:36.320
structures that are dynamic. There's a lot of stuff going on. Some gigantic cellular automata
link |
01:53:43.200
type of construct. How do you make yourself known to them in an impressive fashion
link |
01:53:52.000
without destroying it? We know how to destroy it, potentially.
link |
01:53:56.880
Right. So if you go touch stuff, you're likely to hurt it, right? There's a good risk of hurting
link |
01:54:02.160
something by getting too close and touching it and interacting, right?
link |
01:54:04.880
Yeah, like landing on a White House lawn.
link |
01:54:06.880
Right. So the claim is that their current strategy of hanging out at the periphery of
link |
01:54:12.960
our vision and just being very clearly physically impressive with very clear physically impressive
link |
01:54:17.600
abilities is at least a plausible strategy they might use to impress us and convince us sort of
link |
01:54:25.360
they're at the top of our status hierarchy. And I would say if they came closer, not only would
link |
01:54:30.960
they risk hurting us in ways that they couldn't really understand, but more plausibly, they would
link |
01:54:35.600
reveal things about themselves we would hate. So if you look at how we treat other civilizations
link |
01:54:40.960
on Earth and other people, we are generally interested in foreigners and people from other
link |
01:54:46.880
lands. And we are generally interested in their varying cultures and customs, et cetera,
link |
01:54:51.120
until we find out that they do something that violates our moral norms and then we hate them.
link |
01:54:56.720
And these are aliens for God's sakes, right? There's just going to be something about them
link |
01:55:01.200
that we hate. They eat babies. Who knows what it is? Something they don't think is offensive,
link |
01:55:05.760
but that we might find offensive. And so they would be risking a lot by revealing a lot about
link |
01:55:11.120
themselves. We would find something we hated. Interesting. But do you resonate at all with
link |
01:55:16.880
mimetic theory, where we only feel this way about things that are very close to us.
link |
01:55:21.680
So aliens are sufficiently different that we'll be fascinated or terrified,
link |
01:55:26.880
but not like. Right, but if they want to be at the top of our status hierarchy to get us to
link |
01:55:30.800
follow them, they can't be too distant. They have to be close enough that we would see them that
link |
01:55:35.520
way. But pretend to be close enough. Right. And not reveal much. That mystery, that old Clint Eastwood
link |
01:55:41.840
cowboy. I mean, we're clever enough that we can figure out their agenda. That is just from the
link |
01:55:47.520
fact that we're here. If we see that they're here, we can figure out, Oh, they want us not to expand
link |
01:55:51.520
and look, they are this huge power and they're very impressive. So, and a lot of us don't want
link |
01:55:55.920
to expand. So that could easily tip us over the edge toward we already wanted to not expand. We
link |
01:56:02.000
already wanted to be able to regulate and have a central community. And here are these very advanced
link |
01:56:07.040
smart aliens who have survived for a hundred million years and they're telling us not to expand
link |
01:56:12.400
either. This is brilliant. I love this so much. So, returning to panspermia siblings,
link |
01:56:21.360
just to clarify one thing in that framework: who originated it, who planted it?
link |
01:56:31.120
Would it be a grabby alien civilization that planted the siblings or no? The simple scenario
link |
01:56:36.960
is that life started on some other planet billions of years ago and it went through part of the
link |
01:56:44.080
stages of evolution toward advanced life, but not all the way to advanced life. And then some rock hit
link |
01:56:49.440
it, grabbed a piece of it onto the rock, and that rock drifted for maybe a million years until
link |
01:56:54.640
it happened upon a stellar nursery, where it then seeded many stars. And something about that
link |
01:57:00.240
life, without being super advanced, was nevertheless resilient to the harsh conditions
link |
01:57:05.360
of space. There's some graphs that I've been impressed by that show sort of the level of
link |
01:57:10.480
genetic information in various kinds of life over the history of earth. And basically we are now
link |
01:57:16.880
more complex than the earlier life, but the earlier life was still pretty complex. And so if
link |
01:57:22.000
you actually project this log graph back in history, it looks like it was many billions of years ago
link |
01:57:27.280
when you get down to zero. So, plausibly, you could say there was just a lot of evolution that
link |
01:57:31.520
had to happen before you get to the simplest life we've ever seen; the earliest life in the history of life on earth
link |
01:57:35.520
was still pretty damn complicated. Okay. And so that's always been this puzzle: how
link |
01:57:40.800
could life get to this enormously complicated level in the short period it seems to at the
link |
01:57:46.560
beginning of earth history, where, you know, it was only 300 million years at most before it
link |
01:57:52.560
appeared. And then it was really complicated at that point. So panspermia allows you to
link |
01:57:57.840
explain that complexity by saying, well, it's been another 5 billion years on another planet
link |
01:58:03.040
going through lots of earlier stages where it was working its way up to the level of
link |
01:58:06.720
complexity you see at the beginning of earth. We'll try to talk about other ideas of the
link |
01:58:12.080
origin of life, but let me return to UFO sightings. Are there other explanations that are possible
link |
01:58:18.480
outside of panspermia siblings that can explain no grabby aliens in the sky and yet alien arrival
link |
01:58:26.640
on earth? Well, the other categories of explanations that most people would use are, well,
link |
01:58:33.280
first of all, just mistakes, like, you know, you're confusing something
link |
01:58:37.440
ordinary for something mysterious, right? Or some sort of secret organization, like our
link |
01:58:43.840
government is secretly messing with us and trying to do, you know, a false flag op
link |
01:58:48.720
or whatever, right? You know, they're trying to convince the Russians or the Chinese that
link |
01:58:52.080
there might be aliens and scare them into not attacking or something, right? Because
link |
01:58:56.880
if you know the history of World War II, say, the US government did all these big
link |
01:59:00.720
fake operations where they were faking a lot of big things in order to mess with people.
link |
01:59:05.600
So that's a possibility. The government has been lying and, you know, faking things and
link |
01:59:09.680
paying people to lie about what they saw, et cetera. That's a plausible set of explanations
link |
01:59:16.240
for the range of sightings seen. And another explanation people offer is some other hidden
link |
01:59:21.440
organization on earth or some, you know, secret organization somewhere that has much more
link |
01:59:26.080
advanced capabilities than anybody's given it credit for, and for some reason it's been keeping
link |
01:59:30.400
secret. I mean, they all sound somewhat implausible, but again, we're looking for maybe,
link |
01:59:35.040
you know, one in a thousand sort of priors. The question is, you know, could they be
link |
01:59:40.400
in that level of plausibility? Can we just linger on this? So you, first of all, you've written,
link |
01:59:47.360
talked about, thought about so many different topics. You're an incredible mind. And I just
link |
01:59:54.320
thank you for sitting down today. I'm almost at a loss as to which place to explore,
link |
01:59:59.520
but let me, on this topic, ask about conspiracy theories, because you've written about institutions
link |
02:00:06.720
and authorities. This is a bit of a therapy session, but what do we make of conspiracy
link |
02:00:18.320
theories? The phrase itself is pushing you in a direction, right? So clearly in history,
link |
02:00:25.120
we've had many large, coordinated keepings of secrets, right? Say the Manhattan Project,
link |
02:00:30.240
right? And there were hundreds of thousands of people working on that over many years,
link |
02:00:34.240
but they kept it a secret, right? Clearly many large military operations have kept things secret
link |
02:00:39.600
over, you know, even decades with many thousands of people involved. So clearly it's possible to
link |
02:00:47.040
keep some things secret over time periods. You know, but the more people you involve and the
link |
02:00:53.840
more time you assume, and the less centralized an organization or the less
link |
02:00:59.040
discipline they have, the harder it gets to believe. But we're just trying to calibrate
link |
02:01:02.880
basically in our minds, which kind of secrets can be kept by which groups over what time periods
link |
02:01:07.600
for what purposes, right? But let me say, I don't have enough data. I'm somebody, you know,
link |
02:01:14.960
I hang out with people and I love people. I love all things, really. And I just think that most
link |
02:01:22.400
people, even the assholes have the capacity to be good and they're beautiful and I enjoy them.
link |
02:01:28.400
So my brain, whatever the chemistry of my brain is that sees the beautiful
link |
02:01:33.200
in things, is maybe collecting a subset of data that doesn't allow me to intuit the competence
link |
02:01:42.320
that humans are able to achieve in constructing a conspiracy. So for example, one thing
link |
02:01:50.800
that people often talk about is like intelligence agencies, this like broad thing. They say the CIA,
link |
02:01:55.920
the FSB, the British intelligence agencies. I've been fortunate or unfortunate enough to never have gotten
link |
02:02:02.720
the chance, that I know of, to talk to any member of those intelligence agencies, nor to take a
link |
02:02:11.760
peek behind the curtain or the first curtain. I don't know how many levels of curtains there are.
link |
02:02:16.480
And so I can't intuit it. My interactions with government: I was funded by DOD and DARPA
link |
02:02:22.800
and I've interacted with them, been to the Pentagon. With all due respect to my lovely friends
link |
02:02:31.440
in government. And there are a lot of incredible people, but there is a very giant bureaucracy
link |
02:02:36.960
that sometimes suffocates the ingenuity of the human spirit is one way I can put it. Meaning
link |
02:02:43.440
it's just difficult for me to imagine extreme competence at a scale of hundreds or
link |
02:02:50.240
thousands of human beings. Now, that doesn't mean much; that's my very anecdotal data on the situation.
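The calibration question Robin raised earlier, which secrets can be kept by which groups over what time periods, can be made concrete with a toy leak model. Everything below is my own illustrative assumption, not something stated in the conversation: if each of n insiders independently leaks with a small annual probability p, the secret survives t years with probability (1 - p)^(n*t), which collapses quickly as group size and duration grow.

```python
# Toy model of secret-keeping at scale (illustrative assumptions only):
# each of n insiders independently leaks with a small annual probability p,
# so the secret survives t years with probability (1 - p) ** (n * t).

def survival_probability(n_insiders: int, years: int, annual_leak_prob: float) -> float:
    """Probability that no insider leaks over the whole period."""
    return (1.0 - annual_leak_prob) ** (n_insiders * years)

p = 0.0001  # assumed per-person, per-year leak probability (a made-up number)
for n, t in [(100, 5), (10_000, 5), (100_000, 25)]:
    print(f"{n:>7,} insiders over {t:>2} years: "
          f"P(still secret) = {survival_probability(n, t, p):.3g}")
```

Under these assumed numbers, a few hundred disciplined insiders can plausibly hold a secret for years, while a hundred thousand insiders over decades almost certainly cannot, which tracks the point that group size, duration, and discipline determine which conspiracies are plausible.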
link |
02:02:56.240
And so I try to build up my intuition about centralized systems of government, how much
link |
02:03:05.600
conspiracy is possible, how much the intelligence agencies or some other source can generate
link |
02:03:14.000
sufficiently robust propaganda that controls the populace. If you look at World War II, as you
link |
02:03:20.720
mentioned, there've been extremely powerful propaganda machines on the side of
link |
02:03:26.960
Nazi Germany, on the side of the Soviet Union, on the side of the United States and all these different
link |
02:03:33.120
mechanisms. Sometimes they control the free press through social pressures. Sometimes they control
link |
02:03:40.560
the press through the threat of violence, as you do in authoritarian regimes. Sometimes it's like
link |
02:03:47.520
the dictator deliberately writing the news headlines and literally announcing them. And
link |
02:03:53.600
something about human psychology forces you to embrace the narrative and believe the narrative.
link |
02:04:02.560
And at scale that becomes reality when the initial spark was just the propaganda thought in a single
link |
02:04:09.520
individual's mind. So I can't necessarily intuit what's possible, but I'm skeptical of the power
link |
02:04:19.680
of human institutions to construct conspiracies that cause suffering at scale, especially
link |
02:04:26.800
in this modern age, when information is becoming more and more accessible to the populace. Anyway,
link |
02:04:32.160
that's my thinking. I don't know if you can elucidate this for us.
link |
02:04:35.120
You called it suffering at scale, but of course, say during wartime, the people who were managing
link |
02:04:39.520
the various conspiracies, like D-Day or the Manhattan Project, thought that their conspiracy was
link |
02:04:45.040
avoiding harm rather than causing harm. So if you can get a lot of people to think that supporting
link |
02:04:49.760
the conspiracy is helpful, then a lot more might do that. And there's just a lot of things that
link |
02:04:57.120
people just don't want to see. So if you can make your conspiracy the sort of thing that people
link |
02:05:01.920
wouldn't want to talk about anyway, even if they knew about it, you're most of the way there.
link |
02:05:07.280
So I have learned, over the years, many things that most ordinary people would never want to
link |
02:05:12.640
hear, many things that most ordinary people should be interested in, but somehow don't know,
link |
02:05:17.200
even though the data has been very widespread. So I have this book, The Elephant in the Brain,
link |
02:05:21.600
and one of the chapters there is on medicine. And basically, most people seem ignorant of the very
link |
02:05:27.440
basic fact that when we do randomized trials where we give some people more medicine than others,
link |
02:05:32.480
the people who get more medicine are not healthier. Just overall, in general, when you
link |
02:05:38.160
induce somebody to get more medicine because you just give them more budget to buy medicine, say.
link |
02:05:42.160
And not a specific medicine, just the whole category. And you would think that would be
link |
02:05:46.960
something most people should know about medicine. You might even think that would be a conspiracy
link |
02:05:50.800
theory to think that would be hidden, but in fact, most people never learn that fact.
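The null result Robin is describing can be illustrated with a toy randomized-trial simulation. The data here are entirely synthetic and the numbers are hypothetical, chosen only to show the shape of the finding: randomize people into a more-medicine and a less-medicine arm, draw health outcomes from the same distribution regardless of arm, and compare group means.

```python
import random
import statistics

# Toy randomized trial on synthetic data (hypothetical numbers, not real
# trial results). Health outcomes are drawn from the same distribution in
# both arms, mimicking the finding that aggregate medicine consumption and
# health are unrelated, so the group means come out nearly identical.

random.seed(0)  # fixed seed so the run is reproducible

def simulate_trial(n_per_arm: int = 5_000) -> tuple[float, float]:
    more_medicine = [random.gauss(70.0, 10.0) for _ in range(n_per_arm)]
    less_medicine = [random.gauss(70.0, 10.0) for _ in range(n_per_arm)]
    return statistics.mean(more_medicine), statistics.mean(less_medicine)

more, less = simulate_trial()
print(f"mean health score (more medicine): {more:.2f}")
print(f"mean health score (less medicine): {less:.2f}")
print(f"difference between arms: {more - less:+.2f}")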
link |
02:05:55.760
So just to clarify, just a general high level statement, the more medicine you take,
link |
02:06:02.080
the less healthy you are.
link |
02:06:04.480
Randomized experiments don't find that fact. Do not find that more medicine makes you more healthy.
link |
02:06:10.080
There's just no connection. In randomized experiments, there's no relationship between
link |
02:06:15.520
more medicine and being healthier.
link |
02:06:16.400
So it's not a negative relationship, but it's just no relationship.
link |
02:06:19.680
Right.
link |
02:06:20.960
And so the conspiracy theory would say that the businesses that sell you medicine don't want you
link |
02:06:27.440
to know that fact. And then you're saying that part of this is also that people just
link |
02:06:32.800
don't want to know.
link |
02:06:33.760
They just don't want to know. And so they don't learn this. So I've lived in the Washington area
link |
02:06:38.560
for several decades now, reading the Washington Post regularly. Every week there was a special
link |
02:06:44.400
section on health and medicine. That fact was never mentioned in that section of the paper
link |
02:06:48.400
in all the 20 years I read that.
link |
02:06:50.560
So do you think there is some truth to this caricatured blue pill, red pill,
link |
02:06:55.280
where most people don't want to know the truth?
link |
02:06:58.720
There are many things about which people don't want to know certain kinds of truths.
link |
02:07:02.080
Yeah. That is, bad-looking truths, truths that are discouraging, truths that sort of take away the
link |
02:07:07.520
justification for things they feel passionate about.
link |
02:07:10.640
Do you think that's a bad aspect of human nature? That's something we should try to overcome?
link |
02:07:16.880
Well, as we discussed, my first priority is to just tell people about it, to do the analysis
link |
02:07:22.000
and the cold facts of what's actually happening, and then to try to be careful about how we can
link |
02:07:26.240
improve. So our book, The Elephant in the Brain, coauthored with Kevin Simler, is about how we
link |
02:07:30.560
hide motives in everyday life. And our first priority there is just to explain to you what are
link |
02:07:35.920
the things that you are not looking at, that you are reluctant to look at. And many people try
link |
02:07:40.880
to take that book as a self help book where they're trying to improve themselves and make
link |
02:07:44.560
sure they look at more things. And that often goes badly because it's harder to actually do
link |
02:07:49.200
that than you think. But we at least want you to know that this truth is available if you want
link |
02:07:55.680
to learn about it.
link |
02:07:56.400
It's the Nietzsche line: if you gaze long into the abyss, the abyss gazes into you. Let's talk about
link |
02:08:01.520
this elephant in the brain. Amazing book. The elephant in the room is, quote, an important
link |
02:08:08.480
issue that people are reluctant to acknowledge or address, a social taboo. The elephant in the brain
link |
02:08:14.080
is an important but unacknowledged feature of how our mind works, an introspective taboo.
link |
02:08:20.000
You describe selfishness and self deception as some of the core elephants,
link |
02:08:28.720
some of the elephant offspring, in the brain. Selfishness and self deception.
link |
02:08:35.680
All right.
link |
02:08:36.960
Can you explain why these are the taboos in our brain that we
link |
02:08:45.200
don't want to acknowledge to ourselves?
link |
02:08:46.880
Your conscious mind, the one that's listening to me that I'm talking to at the moment, you like
link |
02:08:53.280
to think of yourself as the president or king of your mind, ruling over all that you see,
link |
02:08:58.960
issuing commands that are immediately obeyed. You are instead better understood as the press secretary
link |
02:09:06.240
of your brain. You don't make decisions. You justify them to an audience. That's what your
link |
02:09:12.800
conscious mind is for. You watch what you're doing and you try to come up with stories that explain
link |
02:09:20.640
what you're doing so that you can avoid accusations of violating norms. So humans compared to most
link |
02:09:26.880
other animals have norms, and this allows us to manage larger groups with our morals and norms
link |
02:09:32.480
about what we should or shouldn't be doing. This is so important to us that we needed to be
link |
02:09:38.160
constantly watching what we were doing in order to make sure we had a good story to avoid norm
link |
02:09:43.440
violations. So many norms are about motives. So if I hit you on purpose, that's a big violation.
link |
02:09:48.480
If I hit you accidentally, that's okay. I need to be able to explain why it was an accident
link |
02:09:52.960
and not on purpose.
link |
02:09:54.880
So where does that need come from for your own self preservation?
link |
02:09:58.880
Right. So humans have norms and we have the norm that if we see anybody violating a norm,
link |
02:10:03.040
we need to tell other people and then coordinate to make them stop and punish them for violating.
link |
02:10:09.200
Such punishments are strong enough and severe enough that we each want to avoid being successfully
link |
02:10:15.360
accused of violating norms. So for example, hitting someone on purpose is a big clear norm
link |
02:10:21.760
violation. If we do it consistently, we may be thrown out of the group and that would mean we
link |
02:10:25.520
would die. Okay. So we need to be able to convince people we are not going around hitting people on
link |
02:10:30.960
purpose. If somebody happens to be at the other end of our fist and their face connects, that was
link |
02:10:37.440
an accident and we need to be able to explain that. And similarly for many other norms humans
link |
02:10:43.680
have, we are serious about these norms and we don't want people to violate them. If we find them
link |
02:10:48.560
violating, we're going to accuse them. But many norms have a motive component. And so we are
link |
02:10:53.360
trying to explain ourselves and make sure we have a good motive story about everything we do,
link |
02:10:58.160
which is why we're constantly trying to explain what we're doing. And that's what your conscious
link |
02:11:02.880
mind is doing. It is trying to make sure you've got a good motive story for everything you're
link |
02:11:07.280
doing. And that's why you don't know why you really do things. What you know is what the good
link |
02:11:12.320
story is about why you've been doing things. And that's the self deception. And you're saying that
link |
02:11:17.280
there is a machine, the actual dictator is selfish. And then you're just the press secretary who
link |
02:11:24.000
desperately doesn't want to get fired and is justifying all of the decisions of the dictator.
link |
02:11:29.440
And that's the self deception.
link |
02:11:31.520
Right. Now, most people actually are willing to believe that this is true in the abstract. So
link |
02:11:36.400
our book has been classified as psychology and it was reviewed by psychologists. And the basic
link |
02:11:41.120
way that psychology referees and reviewers responded is to say, this is well known. Most
link |
02:11:46.800
people accept that there's a fair bit of self deception.
link |
02:11:49.040
But they don't want to accept it about themselves.
link |
02:11:51.120
Well, they don't want to accept it about the particular topics that we talk about. So people
link |
02:11:55.840
accept the idea in the abstract that they might be self deceived or that they might not be honest
link |
02:12:00.240
about various things. But that hasn't penetrated into the literatures where people are explaining
link |
02:12:05.120
particular things like why we go to school, why we go to the doctor, why we vote, et cetera. So
link |
02:12:10.560
our book is mainly about 10 areas of life, explaining in each area what our actual
link |
02:12:16.000
motives there are. And people who study those things have not admitted that hidden motives are
link |
02:12:23.520
explaining those particular areas.
link |
02:12:25.280
So they haven't taken the leap from theoretical psychology to actual public policy.
link |
02:12:30.080
Exactly.
link |
02:12:30.800
And economics and all that kind of stuff. Well, let me just linger on this and bring up my old
link |
02:12:38.080
friends Sigmund Freud and Carl Jung. So how vast is this landscape of the unconscious mind,
link |
02:12:47.840
the power and the scope of the dictator? Is it only dark there? Is there some light? Is there some
link |
02:12:56.080
love?
link |
02:12:56.720
The vast majority of what's happening in your head, you're unaware of. So in a literal sense,
link |
02:13:02.080
the unconscious, the aspects of your mind that you're not conscious of is the overwhelming
link |
02:13:07.520
majority. But that's just true in a literal engineering sense. Your mind is doing lots of
link |
02:13:12.960
low level things, and you just can't be consciously aware of all that low level stuff. But there's
link |
02:13:17.040
plenty of room there for lots of things you're not aware of.
link |
02:13:21.120
But can we try to shine a light on the things we're unaware of specifically? Now, again,
link |
02:13:26.480
staying with the philosophical psychology side for a moment, can you shine a light into the Jungian
link |
02:13:32.080
shadow? What's going on there? What is this machine like? What level of thoughts are happening
link |
02:13:40.080
there? Is it something that we can even interpret? If we somehow could visualize it, is it something
link |
02:13:46.400
that's human interpretable? Or is it just a kind of chaos of monitoring different systems in the
link |
02:13:51.840
body, making sure you're happy, making sure you're fed, all those kinds of basic forces that form
link |
02:13:58.400
abstractions on top of each other, and they're not introspective at all?
link |
02:14:01.760
We humans are social creatures. Plausibly being social is the main reason we have these unusually
link |
02:14:06.800
large brains. Therefore, most of our brain is devoted to being social. And so the things we are
link |
02:14:13.360
very obsessed with and constantly paying attention to are, how do I look to others? What would others
link |
02:14:19.520
think of me if they knew these various things they might learn about me?
link |
02:14:23.040
So that's close to being fundamental to what it means to be human, is caring what others think.
link |
02:14:28.000
Right. To be trying to present a story that would be okay for what others think. But we're
link |
02:14:34.160
constantly thinking, what do other people think?
link |
02:14:36.720
So let me ask you this question then about you, Robin Hanson, who in many places, sometimes for
link |
02:14:45.360
fun, sometimes as a basic statement of principle, likes to disagree with what the majority of people
link |
02:14:52.640
think. So how do you explain it? How are you deceiving yourself in this task? And
link |
02:15:02.000
why is the dictator manipulating you inside your head to be so critical? Like,
link |
02:15:08.960
there's norms. Why do you want to stand out in this way? Why do you want to challenge the
link |
02:15:14.640
norms in this way?
link |
02:15:15.520
Almost by definition, I can't tell you what I'm deceiving myself about. But the more practical
link |
02:15:20.800
strategy that's quite feasible is to ask about what are typical things that most people deceive
link |
02:15:25.840
themselves about, and then to own up to those particular things.
link |
02:15:29.280
Sure. What's a good one?
link |
02:15:32.000
So for example, I can very much acknowledge that I would like to be well thought of,
link |
02:15:38.480
that I would be seeking attention and glory and praise from my intellectual work, and that that
link |
02:15:47.920
would be a major agenda driving my intellectual attempts. So if there were topics that other
link |
02:15:55.600
people would find less interesting, I might be less interested in those for that reason,
link |
02:15:59.760
for example. I might want to find topics where other people are interested, and I might want to
link |
02:16:05.680
go for the glory of finding a big insight rather than a small one, and maybe one that was
link |
02:16:13.680
especially surprising. That's also, of course, consistent with some more ideal concept of what
link |
02:16:19.040
an intellectual should be. But most intellectuals are relatively risk averse. They are in some
link |
02:16:27.200
local intellectual tradition, and they are adding to that, and they are conforming to the
link |
02:16:32.400
sort of usual assumptions and usual accepted beliefs and practices of a particular area
link |
02:16:37.200
so that they can be accepted in that area and treated as part of the community. But you might
link |
02:16:45.200
think for the purpose of the larger intellectual project of understanding the world better,
link |
02:16:50.480
people should be less eager to just add a little bit to some tradition, and they should be looking
link |
02:16:55.600
for what's neglected between the major traditions and major questions. They should be looking for
link |
02:16:59.600
assumptions maybe we're making that are wrong. They should be looking at things that are very
link |
02:17:04.560
surprising, things that you would have thought a priori unlikely that once you are convinced of it,
link |
02:17:10.960
you find that to be very important and a big update. So you could say that one motivation
link |
02:17:21.120
I might have is being less eager to be comfortably accepted into some particular
link |
02:17:26.800
intellectual community and more willing to just go for these more fundamental long shots that should
link |
02:17:33.440
be very important if you could find them.
link |
02:17:35.440
Which, if you can find them, would get you appreciated across a larger number of people
link |
02:17:45.280
across the longer time span of history. So like maybe the small local community will say,
link |
02:17:52.640
you suck, you must conform. But the larger community will see the brilliance of you
link |
02:18:00.000
breaking out of the cage of the small conformity into a larger cage. There's always a bigger cage
link |
02:18:06.960
and then you'll be remembered by more. Yeah. Also that explains your choice of colorful shirt that
link |
02:18:13.840
looks great against a black background. So you definitely stand out.
link |
02:18:17.120
Right. Now, of course, you could say, well, you could get all this attention by making false
link |
02:18:22.880
claims of dramatic improvement. And then wouldn't that be much easier than actually working through
link |
02:18:28.880
all the details to make true claims?
link |
02:18:30.720
Why not? Let me ask the press secretary. Why not? So of course you spoke several times about how
link |
02:18:37.360
much you value truth and the pursuit of truth. That's a very nice narrative. Hitler and Stalin
link |
02:18:43.520
also talked about the value of truth. Do you worry when you introspect as broadly as all humans
link |
02:18:51.040
might that it becomes a drug, this being a martyr, being the person who points out that the emperor
link |
02:19:03.360
wears no clothes, even when the emperor is obviously dressed, just to be the person who points
link |
02:19:11.040
out that the emperor is wearing no clothes. Do you think about that?
link |
02:19:14.560
So I think the standards you hold yourself to are dependent on the audience you have in mind.
link |
02:19:23.280
So if you think of your audience as relatively easily fooled or relatively gullible, then you
link |
02:19:29.680
won't bother to generate more complicated, deep, you know, arguments and structures and evidence
link |
02:19:36.160
to persuade somebody who has higher standards because why bother? You don't have to worry
link |
02:19:42.960
about it. Why bother? You can get away with something much easier. And of course, if you are,
link |
02:19:47.440
say, a salesperson, you know, you make money on sales, then you don't need to convince the top few
link |
02:19:53.520
percent of the most sharp customers. You can just go for the bottom 60 percent of the most gullible
link |
02:19:58.560
customers and make plenty of sales, right? So I think intellectuals have to vary. One of the main
link |
02:20:06.160
ways intellectuals vary is in who is their audience in their mind? Who are they trying to
link |
02:20:09.840
impress? Is it the people down the hall? Is it the people who are reading their Twitter feed? Is it
link |
02:20:15.680
their parents? Is it their high school teacher? Or is it Einstein and Freud and Socrates, right?
link |
02:20:24.800
So I think those of us who are especially arrogant, especially think that we're really
link |
02:20:31.040
big shot or have a chance at being a really big shot, we're naturally going to pick the
link |
02:20:34.960
big shot audience that we can. We're going to be trying to impress Socrates and Einstein.
link |
02:20:39.360
Is that why you hang out with Tyler Cowen a lot and try to convince him yourself?
link |
02:20:44.400
And you might think, you know, from the point of view of just making money or having sex or
link |
02:20:48.320
other sorts of things, this is misdirected energy, right? Trying to impress the very
link |
02:20:54.000
most highest quality minds. That's such a small sample and they can't do that much for you anyway.
link |
02:20:59.360
Yeah. So I might well have had more, you know, ordinary success in life,
link |
02:21:04.640
be more popular, invited to more parties, make more money if I had targeted a lower tier
link |
02:21:11.440
set of intellectuals with the standards they have. But for some reason I decided early on
link |
02:21:17.040
that Einstein was my audience or people like him and I was going to impress them.
link |
02:21:23.200
Yeah. I mean, you pick your set of motivations, you know, convincing,
link |
02:21:27.600
impressing Tyler Cowen is not going to help you get laid. Trust me, I tried. All right.
link |
02:21:34.720
What are some notable sort of effects of the elephant in the brain in everyday life? So you
link |
02:21:43.760
mentioned when we try to apply that to economics, to public policy. So when we think about medicine,
link |
02:21:50.320
education, all those kinds of things, what are some things that we're...
link |
02:21:53.280
The key thing is medicine is much less useful health wise than you think. So, you know,
link |
02:21:59.280
if you were focused on your health, you would care a lot less about it. And if you were focused
link |
02:22:04.000
on other people's health, you would also care a lot less about it. But if medicine is, as we
link |
02:22:08.880
suggest, more about showing that you care and letting other people show that they care about you,
link |
02:22:13.120
then a lot of priority on medicine can make sense. So that was our very earliest discussion
link |
02:22:18.400
in the podcast. You were talking about what, you know, should you give people a lot of medicine
link |
02:22:22.640
when it's not very effective? And then the answer then is, well, if that's the way that you show
link |
02:22:27.360
that you care about them and you really want them to know you care, then maybe that's what
link |
02:22:32.080
you need to do if you can't find a cheaper, more effective substitute. So if we actually just pause
link |
02:22:37.280
on that for a little bit, how do we start to untangle the full set of self deception happening
link |
02:22:44.720
in the space of medicine? So we have a method that we use in our book that is what I recommend
link |
02:22:49.840
for people to use in all these sorts of topics. The straightforward method is first, don't look
link |
02:22:54.480
at yourself. Look at other people, look at broad patterns of behavior in other people, and then
link |
02:23:00.800
ask, what are the various theories we could have to explain these patterns of behavior? And then
link |
02:23:05.920
just do the simple matching, which theory better matches the behavior they have. And the last step
link |
02:23:11.680
is to assume that's true of you too. Don't assume you're an exception. If you happen to be an
link |
02:23:17.680
exception, that won't go so well, but nevertheless, on average, you aren't very well positioned to
link |
02:23:22.160
judge if you're an exception. So look at what other people do, explain what other people do,
link |
02:23:27.280
and assume that's you too. But also in the case of medicine, there's several parties to consider.
link |
02:23:34.320
So there's the individual person that's receiving the medicine. There's the doctors that are
link |
02:23:38.400
prescribing the medicine. There's drug companies that are selling drugs. There are governments that
link |
02:23:45.200
have regulations, there are lobbyists. So you can build up a network of categories of humans in this
link |
02:23:51.760
and they each play their role. So how do you sort of analyze the system at a
link |
02:24:00.400
system scale versus at the individual scale? So it turns out that in general, it's usually much
link |
02:24:07.040
easier to explain producer behavior than consumer behavior. That is, the drug companies or the
link |
02:24:13.200
doctors have relatively clear incentives to give the customers whatever they want. And similarly,
link |
02:24:20.320
governments in democratic countries have the incentive to give the voters what they want.
link |
02:24:24.880
So that focuses your attention on the patient and the voter in this equation and saying,
link |
02:24:31.760
what do they want? They would be driving the rest of the system.
link |
02:24:35.920
Whatever they want, the other parties are willing to give them in order to get paid. So now we're
link |
02:24:42.240
looking for puzzles in patient and voter behavior. What are they choosing? And why do they choose
link |
02:24:48.720
that? And how much exactly? And then we can explain that potentially again, returning to
link |
02:24:55.120
the producer, but the producer being incentivized to manipulate the decision making processes of
link |
02:25:00.640
the voter and the consumer. Now, in almost every industry, producers are in general happy to lie
link |
02:25:07.200
and exaggerate in order to get more customers. This is true of auto repair as much as human
link |
02:25:11.760
body repair and medicine. So the differences between these industries can't be explained
link |
02:25:16.400
by the willingness of the producers to give customers what they want or to do various things.
link |
02:25:20.960
So we have to, again, go to the customers. Why are customers treating body repair different
link |
02:25:26.560
than auto repair? Yeah, and that potentially requires a lot of thinking, a lot of data
link |
02:25:35.120
collection and potentially looking at historical data too, because things don't just happen
link |
02:25:39.520
overnight. Over time, there's trends. In principle it does, but it's actually a lot
link |
02:25:43.440
easier than you might think. I think the biggest limitation is just the willingness
link |
02:25:47.840
to consider alternative hypotheses. So many of the patterns that you need to rely on are actually
link |
02:25:53.200
pretty obvious, simple patterns. You just have to notice them and ask yourself, how can I explain
link |
02:25:58.080
those? Often you don't need to look at the most subtle, most difficult statistical evidence that
link |
02:26:04.400
might be out there. The simplest patterns are often enough. All right. So there's a fundamental
link |
02:26:10.240
statement about self deception in the book. There's the application of that, like we just did
link |
02:26:14.640
in medicine. Can you steel man the argument that many of the foundational ideas in the book are
link |
02:26:22.640
wrong? Meaning there's two that you just made, which is it can be a lot simpler than it looks.
link |
02:26:31.920
Can you steel man the case that, case by case, it's always super complicated? Like it's
link |
02:26:38.080
a complex system. It's very difficult to have a simple model about. It's very difficult to
link |
02:26:42.320
introspect. And the other one is that the human brain is not just about self deception. That
link |
02:26:50.960
there's a lot of motivations at play and we are able to really introspect our own
link |
02:26:57.680
mind. And what's on the surface of the conscious mind is actually quite a good representation
link |
02:27:03.360
of what's going on in the brain. And you're not deceiving yourself. You're able to actually
link |
02:27:07.920
deeply think about where your mind stands and what you think about the world. And
link |
02:27:13.040
it's less about impressing people and more about being a free thinking individual.
link |
02:27:18.240
So when a child tries to explain why they don't have their homework assignment, they are sometimes
link |
02:27:26.240
inclined to say, the dog ate my homework. They almost never say the dragon ate my homework.
link |
02:27:32.960
The reason is the dragon is a completely implausible explanation. Almost always when we
link |
02:27:38.880
make excuses for things, we choose things that are at least in some degree plausible. It could
link |
02:27:44.640
perhaps have happened. That's an obstacle for any explanation of a hidden motive or a hidden
link |
02:27:51.840
feature of human behavior. If people are pretending one thing while really doing another,
link |
02:27:57.280
they're usually going to pick as a pretense something that's somewhat plausible. That's
link |
02:28:02.240
going to be an obstacle to proving that hypothesis if you are focused on sort of the local data that
link |
02:28:09.280
a person would typically have if they were challenged. So if you're just looking at one
link |
02:28:12.960
kid and his lack of homework, maybe you can't tell whether his dog ate his homework or not.
link |
02:28:18.560
If you happen to know he doesn't have a dog, you might have more confidence. You will need to have
link |
02:28:24.240
a wider range of evidence than a typical person would when they're encountering that actual excuse
link |
02:28:29.200
in order to see past the excuse. That will just be a general feature of it. So if I say,
link |
02:28:36.560
there's this usual story about where we go to the doctor and then there's this other explanation,
link |
02:28:41.280
it'll be true that you'll have to look at wider data in order to see that because people don't
link |
02:28:47.600
usually offer excuses unless in the local context of their excuse, they can get away with it. That
link |
02:28:53.040
is, it's hard to tell, right? So in the case of medicine, I have to point you to sort of larger
link |
02:28:58.960
sets of data. But in many areas of academia, including health economics, the researchers there
link |
02:29:07.040
also want to support the usual points of view. And so they will have selection effects in their
link |
02:29:13.360
publications and their analysis whereby they, if they're getting a result too much contrary to the
link |
02:29:18.320
usual point of view everybody wants to have, they will file drawer that paper or redo the analysis
link |
02:29:24.080
until they get an answer that's more to people's liking. So that means in the health economics
link |
02:29:29.760
literature, there are plenty of people who will claim that in fact, we have evidence that medicine
link |
02:29:34.800
is effective. And when I respond, I will have to point you to our most reliable evidence.
link |
02:29:41.200
And ask you to consider the possibility that the literature is biased in that when the evidence
link |
02:29:46.640
isn't as reliable, when they have more degrees of freedom in order to get the answer they want,
link |
02:29:50.560
they do tend to get the answer they want. But when we get to the kind of evidence that's much
link |
02:29:55.040
harder to mess with, that's where we will see the truth be more revealed. So with respect to
link |
02:30:01.440
medicine, we have millions of papers published in medicine over the years, most of which give the
link |
02:30:07.600
impression that medicine is useful. There's a small literature on randomized experiments of the
link |
02:30:14.400
aggregate effects of medicine, where there are maybe a half dozen or so papers, where it would be
link |
02:30:21.840
the hardest to hide it because it's such a straightforward experiment done in a straightforward
link |
02:30:28.320
way that it's hard to manipulate. And that's where I will point you to.
link |
02:30:34.880
That will show you that there's relatively
link |
02:30:39.200
little correlation between health and medicine. But even then, people could try to save the
link |
02:30:43.840
phenomenon and say, well, it's not hidden motives. It's just ignorance. They could say,
link |
02:30:47.040
for example, you know, medicine's complicated. Most people don't know the literature.
link |
02:30:53.200
Therefore, they can be excused for ignorance. They are just ignorantly assuming that medicine
link |
02:30:59.040
is effective. It's not that they have some other motive that they're trying to achieve.
link |
02:31:02.400
And then I will have to do, you know, as with a conspiracy theory analysis, I'm saying, well,
link |
02:31:07.200
like, how long has this misperception been going on? How consistently has it happened
link |
02:31:12.320
around the world and across time? And I would have to say, look, you know, if we're talking about,
link |
02:31:18.000
say, a recent new product, like Segway scooters or something, I could say not so many people have
link |
02:31:24.800
seen them or used them. Maybe they could be confused about their value. If we're talking
link |
02:31:28.400
about a product that's been around for thousands of years, used in roughly the same way all across
link |
02:31:32.800
the world, and we see the same pattern over and over again, this sort of ignorance mistake just
link |
02:31:38.560
doesn't work so well. It's also a question of whether the self deception is prevalent versus
link |
02:31:47.040
foundational. Because there's a kind of implied thing where it's foundational to human nature
link |
02:31:52.800
versus just a common pitfall. This is a question I have. So, like, maybe human progress is made by
link |
02:32:01.520
people who don't fall into the self deception. It's a baser aspect of human nature, but then
link |
02:32:08.960
you escape it easily if you're motivated.
link |
02:32:12.640
The motivational hypotheses about the self deceptions are in terms of how it makes you
link |
02:32:17.920
look to the people around you. Again, the press secretary. So, the story would be, most people
link |
02:32:23.520
want to look good to the people around them. Therefore, most people present themselves in ways
link |
02:32:28.640
that help them look good to the people around them. That's sufficient to say there would be a
link |
02:32:35.040
lot of it. It doesn't need to be 100%, right? There's enough variety in people and in
link |
02:32:40.160
circumstances that sometimes taking a contrarian strategy can be in the interest of some minority
link |
02:32:44.960
of the people. So, I might, for example, say that that's a strategy I've taken. I've decided that
link |
02:32:52.560
being contrarian on these things could be winning for me in that there's a room for a small number
link |
02:32:58.880
of people like me who have these sort of messages who can then get more attention, even if there's
link |
02:33:04.640
not room for most people to do that. And that can be explaining sort of the variety, right?
link |
02:33:11.200
Similarly, you might say, look, just look at the most obvious things. Most people would like to
link |
02:33:15.440
look good, right? In the sense of physically, just you look good right now. You're wearing a nice
link |
02:33:18.960
suit, you have a haircut, you shaved, right? And I cut my own hair, by the way. Okay.
link |
02:33:23.600
Well, that's all the more impressive. That's a counter argument for your claim.
link |
02:33:29.120
So, clearly, if we look at most people and their physical appearance, clearly, most people are
link |
02:33:33.520
trying to look somewhat nice, right? They shower, they shave, they comb their hair,
link |
02:33:38.000
but we certainly see some people around who are not trying to look so nice, right? Is that a
link |
02:33:42.240
big challenge, the hypothesis that people want to look nice? Not that much, right? We can see
link |
02:33:48.000
in those particular people's context, more particular reasons why they've chosen to be
link |
02:33:53.040
an exception to the more general rule.
link |
02:33:55.600
So, the general rule does reveal something foundational generally.
link |
02:34:00.800
Right.
link |
02:34:01.280
That's the way things work.
link |
02:34:05.840
Let me ask you, you wrote a blog post about the accuracy of authorities since we're talking
link |
02:34:10.480
about this, especially in medicine. Just looking around us, especially during this time of the
link |
02:34:17.840
pandemic, there's been a growing distrust of authorities, of institutions, even the institution
link |
02:34:24.960
of science itself. What are the pros and cons of authorities, would you say? So, what's nice
link |
02:34:33.920
about authorities? What's nice about institutions? And what are their pitfalls?
link |
02:34:40.640
One standard function of authority is as something you can defer to, respectably,
link |
02:34:45.760
without needing to seem too submissive or ignorant or, you know, gullible. That is,
link |
02:34:56.560
you know, when you're asking what should I act on or what beliefs should I act on,
link |
02:35:02.080
you might be worried if I chose something too contrarian, too weird, too speculative,
link |
02:35:07.920
that that would make me look bad. So, I would just choose something very conservative.
link |
02:35:13.680
So, maybe an authority lets you choose something a little less conservative because the authority
link |
02:35:19.200
is your authorization. The authority will let you do it. And you can say, and somebody says,
link |
02:35:23.840
why did you do that thing? And they say, the authority authorized it. The authority tells me,
link |
02:35:28.160
I should do this. Why aren't you doing it, right?
link |
02:35:30.800
So, the authority is often pushing for the conservative?
link |
02:35:34.400
Well, no, the authority can do more. I mean, so for example, we just think about,
link |
02:35:38.800
I don't know, in a pandemic even, right? You could just think, I'll just stay home and close
link |
02:35:43.200
all the doors or I'll just ignore it, right? You could just think of just some very simple
link |
02:35:46.320
strategy that might be defensible if there were no authorities, right? But authorities might be
link |
02:35:51.920
able to know more than that. They might be able to like look at some evidence, draw a more context
link |
02:35:57.120
dependent conclusion, declare it as the authority's opinion. And then other people might follow that
link |
02:36:01.600
and that could be better than doing nothing. So, you mentioned WHO, the world's most
link |
02:36:06.960
beloved organization. So, this is me speaking in general: WHO and CDC have been,
link |
02:36:16.720
depending on degrees and details, just not behaving as I would have imagined, in the best
link |
02:36:29.280
possible evolution of human civilization, authorities should act. They seem to have failed
link |
02:36:35.280
in some fundamental way in terms of leadership in a difficult time for our society. Can you say what
link |
02:36:42.000
are the pros and cons of this particular authority? So, again, if there were no authorities whatsoever,
link |
02:36:49.040
no accepted authorities, then people would have to sort of randomly pick different local
link |
02:36:55.760
authorities who would conflict with each other. And then they'd be fighting each other about that,
link |
02:36:59.280
or just not believe anybody and just do some initial default action that you would always do
link |
02:37:03.840
without responding to context. So, the potential gain of an authority is that they could know more
link |
02:37:09.680
than just basic ignorance. And if people followed them, they could both be more informed than
link |
02:37:15.760
ignorance and all doing the same thing. So, they're each protected from being accused or
link |
02:37:20.240
complained about. That's the idea of an authority. That would be the good. What's the con of that?
link |
02:37:26.640
Okay. How does that go wrong? So, the con is that if you think of yourself as the authority and
link |
02:37:32.880
asking what's my best strategy as an authority, it's unfortunately not to be maximally informative.
link |
02:37:40.160
So, you might think the ideal authority would not just tell you more than ignorance, it would tell
link |
02:37:45.040
you as much as possible. Okay. It would give you as much detail as you could possibly listen to and
link |
02:37:51.680
manage to assimilate. And it would update that as frequently as possible or as frequently as you
link |
02:37:57.120
were able to listen and assimilate. And that would be the maximally informative authority. The problem
link |
02:38:03.440
is there's a conflict between being an authority or being seen as an authority and being maximally
link |
02:38:10.160
informative. That was the point of my blog post that you're pointing out to here. That is, if you
link |
02:38:16.400
look at it from their point of view, they won't long remain the perceived authority if they are
link |
02:38:23.440
incautious about how they use that authority. And one of the ways to be incautious
link |
02:38:31.200
would be to be too informative. Okay. That's still in the pro column for me because you're talking
link |
02:38:37.440
about the tensions that are very data driven and very honest. And I would hope that authorities
link |
02:38:44.320
struggle with that: how much information to provide to people to maximize outcomes.
link |
02:38:52.880
Now I'm generally somebody that believes more information is better because I trust the
link |
02:38:57.040
intelligence of people. But I'd like to mention a bigger con on authorities, which is the human
link |
02:39:03.760
question. This comes back to a global government and so on. It's that, you know, there are humans that
link |
02:39:11.920
sit in chairs during meetings in those authorities, and they have different titles.
link |
02:39:16.080
Humans form hierarchies. And sometimes those titles get to your head a little bit
link |
02:39:20.320
and you start to want to think, how do I preserve my control over this authority? As opposed to
link |
02:39:26.480
thinking through like, what is the mission of the authority? What is the mission of WHO and
link |
02:39:32.160
other such organizations? And how do I maximize the implementation of that mission? You start to
link |
02:39:37.680
think, well, I kind of like sitting in this big chair at the head of the table. I'd like to sit
link |
02:39:43.120
there for another few years or better yet, I want to be remembered as the person who in a time of
link |
02:39:48.960
crisis was at the head of this authority and did a lot of good things. So you stop trying to do good
link |
02:39:58.160
where good is defined by the mission of the authority. And you start to try to carve a
link |
02:40:03.760
narrative, to manipulate the narrative. First in the meeting room, everybody around you, just a
link |
02:40:09.520
small little story you tell yourself, the new interns, the managers throughout the whole
link |
02:40:15.520
hierarchy of the company. Okay, once everybody in the company or in the organization believes this
link |
02:40:20.720
narrative, now you start to control the release of information, not because you're trying to
link |
02:40:28.160
maximize outcomes, but because you're trying to maximize the effectiveness of the narrative that
link |
02:40:33.680
you are truly a great representative of this authority in human history. And I just feel like
link |
02:40:40.240
those human forces whenever you have an authority, it starts getting to people's heads. One of the
link |
02:40:47.920
most disappointing things for me, as a scientist, to see during the pandemic
link |
02:40:53.440
is the use of authority from colleagues of mine to roll their eyes, to dismiss other human beings
link |
02:41:04.240
just because they got a PhD, just because they're an assistant, associate, full faculty, just because
link |
02:41:12.960
they are deputy head of X organization, NIH, whatever the heck the organization is,
link |
02:41:20.640
just because they got an award of some kind and at a conference they won a best paper award seven
link |
02:41:27.120
years ago and then somebody shook their hand and gave them a medal, maybe it was a president
link |
02:41:32.560
and it's been 20, 30 years that people have been patting them on the back saying how special
link |
02:41:37.920
they are, especially when they're controlling money and getting sucked up to by other scientists
link |
02:41:43.920
who really want the money in a self deception kind of way, they don't actually really care
link |
02:41:47.680
about your performance and all of that gets to your head and no longer are you the authority
link |
02:41:52.560
that's trying to do good and lessen the suffering in the world, you become an authority that just
link |
02:41:57.760
wants to self preserve, sitting on a throne of power. So this is core to
link |
02:42:06.800
sort of what it is to be an economist. I'm a professor of economics. There you go with the
link |
02:42:12.480
authority again. No, it's about saying, we often have a situation where we see a world of behavior
link |
02:42:20.640
and then we see ways in which particular behaviors are not sort of maximally socially useful.
link |
02:42:26.160
Yes.
link |
02:42:28.000
And we have a variety of reactions to that. So one kind of reaction is to sort of morally
link |
02:42:34.160
blame each individual for not doing the maximally socially useful thing under perhaps the idea that
link |
02:42:42.000
people could be identified and shamed for that and maybe induced into doing the better thing if
link |
02:42:46.720
only enough people were calling them out on it, right? But another way to think about it is to
link |
02:42:52.560
think that people sit in institutions with certain stable institutional structures and that
link |
02:42:58.400
institutions create particular incentives for individuals and that individuals are typically
link |
02:43:04.240
doing whatever is in their local interest in the context of that institution.
link |
02:43:10.000
And then perhaps to blame individuals less for winning their local institutional game
link |
02:43:15.840
and blame the world more for having the wrong institutions. So economists are often like
link |
02:43:20.800
wondering what other institutions we could have instead of the ones we have and which of them
link |
02:43:24.800
might promote better behavior. And this is a common thing we do all across human behavior is
link |
02:43:29.680
to think of what are the institutions we're in and what are the alternative variations we could
link |
02:43:33.920
imagine and then to say which institutions would be most productive. I would agree with you that
link |
02:43:40.320
our information institutions, that is the institutions by which we collect information
link |
02:43:44.880
and aggregate it and share it with people are especially broken in the sense of far from the
link |
02:43:51.200
ideal of what would be the most cost effective way to collect and share information. But then
link |
02:43:56.960
the challenge is to try to produce better institutions. And as an academic, I'm aware that
link |
02:44:03.120
academia is particularly broken in the sense that we give people incentives to do research that's
link |
02:44:09.760
not very interesting or important because basically they're being impressive. And we actually care
link |
02:44:15.120
more about whether academics are impressive than whether they're interesting or useful.
link |
02:44:20.160
And I'm happy to go into detail on lots of different known institutions and their known
link |
02:44:25.920
institutional failings, ways in which those institutions produce incentives that are
link |
02:44:31.040
mistaken. And that was the point of the post we started with talking about the authorities. If
link |
02:44:34.800
I need to be seen as an authority, that's at odds with my being informative and I might choose to be
link |
02:44:42.160
the authority instead of being informative because that's my institutional incentives.
link |
02:44:46.240
And if I may, I'd like to, given that beautiful picture of incentives and individuals that you
link |
02:44:54.320
just painted, let me just apologize for a couple of things. One, I often put too much blame on
link |
02:45:03.440
leaders of institutions versus the incentives that govern those institutions. And as a result of that,
link |
02:45:11.280
I've been, I believe, too critical of Anthony Fauci, too emotional about my criticism of
link |
02:45:20.080
Anthony Fauci. And I'd like to apologize for that because I think there are deeper
link |
02:45:26.080
truths to think about. There are deeper incentives to think about. That said, I am a
link |
02:45:32.000
romantic creature by nature. I romanticize Winston Churchill. When I think about Nazi Germany,
link |
02:45:42.480
I think about Hitler more than I do about the individual people of Nazi Germany. You think
link |
02:45:47.120
about leaders, you think about individuals, not necessarily the parameters, the incentives that
link |
02:45:51.760
govern the system, because it's harder. It's harder to think deeply about the models
link |
02:45:58.240
from which those individuals arise, but that's the right thing to do. But also, I don't apologize
link |
02:46:05.760
for being emotional sometimes and being.
link |
02:46:07.680
I'm happy to blame the individual leaders in the sense that, you know, I might say, well,
link |
02:46:12.480
you should be trying to reform these institutions if you're just there to like get promoted and look
link |
02:46:17.360
good at being at the top. But maybe I can blame you for your motives and your priorities in there,
link |
02:46:22.000
but I can understand why the people at the top would be the people who are selected for having
link |
02:46:26.000
the priority of primarily trying to get to the top. I get that.
link |
02:46:29.200
Can I maybe ask you about universities in particular? They've received, like science has, an
link |
02:46:36.880
increase in distrust overall as an institution, which breaks my heart because I think science is
link |
02:46:43.200
beautiful, maybe not as an institution, but as one of the journeys that
link |
02:46:51.360
humans have taken on. The other one is the university. I think the university,
link |
02:46:58.800
at least in the way I see it, is actually a place of freedom for exploring ideas, scientific ideas,
link |
02:47:06.800
engineering ideas, more than corporate, more than a company, more than a lot of domains in life.
link |
02:47:15.600
It's not just in its ideal, but in its implementation, a place where you can
link |
02:47:22.400
be a kid for your whole life and play with ideas. And I think with all the criticism that universities
link |
02:47:28.960
are currently receiving, I don't think that criticism is representative
link |
02:47:35.360
of universities. They focus on very anecdotal evidence of particular departments, particular
link |
02:47:39.760
people, but I still feel like there's a lot of room for freedom of thought, at least at MIT,
link |
02:47:50.240
at least in the fields I care about, in a particular kind of science, a particular kind
link |
02:47:56.560
of technical fields, mathematics, computer science, physics, engineering, so robotics,
link |
02:48:02.560
artificial intelligence. This is a place where you get to be a kid. Yet there is bureaucracy that's
link |
02:48:12.240
rising up. There's like more rules. There's more meetings and there's more administration
link |
02:48:18.960
having like PowerPoint presentations, which to me, you should like be more of a renegade
link |
02:48:28.400
explorer of ideas and meetings destroy, they suffocate that radical thought that happens
link |
02:48:34.800
when you're an undergraduate student and you can do all kinds of wild things when you're
link |
02:48:38.240
a graduate student. Anyway, all that to say, you've thought about this aspect too. Is there
link |
02:48:42.400
something positive, insightful you could say about how we can make for better universities
link |
02:48:50.160
in the decades to come? These particular institutions, how can we improve them?
link |
02:48:54.800
I hear that centuries ago, many scientists and intellectuals were aristocrats. They had time
link |
02:49:03.360
and could, if they chose, be intellectuals. That's a feature of the combination
link |
02:49:12.000
that they had some source of resources that allowed them leisure and that the kind of
link |
02:49:17.680
competition they faced among aristocrats allowed that sort of self-indulgence or
link |
02:49:24.160
self-pursuit, at least at some point in their lives. So the analogous observation is that
link |
02:49:32.240
university professors often have sort of the freedom and space to do a wide range of things.
link |
02:49:39.120
And I am certainly enjoying that as a tenured professor.
link |
02:49:42.880
You're a really, sorry to interrupt, a really good representative of that.
link |
02:49:46.960
Just the exploration you're doing, the depth of thought, like most people are afraid to do the
link |
02:49:52.880
kind of broad thinking that you're doing, which is great.
link |
02:49:55.920
The fact that that can happen is a combination of these two things analogously. One is that
link |
02:50:01.120
we have fierce competition to become a tenured professor, but then once you become tenured,
link |
02:50:05.040
we give you the freedom to do what you like. And that's a happenstance. It didn't have to
link |
02:50:11.360
be that way. And in many other walks of life, even though people have a lot of resources,
link |
02:50:16.640
et cetera, they don't have that kind of freedom set up. So I think we're kind of,
link |
02:50:20.480
I'm kind of lucky that tenure exists and that I'm enjoying it. But I can't be too enthusiastic
link |
02:50:28.000
about this unless I can approve of sort of the source of the resources that's paying for all
link |
02:50:31.760
this. So for the aristocrat, if you thought they stole it in war or something, you wouldn't be so
link |
02:50:37.440
pleased. Whereas if you thought they had earned it or their ancestors had earned this money that
link |
02:50:41.920
they were spending as an aristocrat, then you could be more okay with that. So for universities,
link |
02:50:47.120
I have to ask, where are the main sources of resources that are going to the universities and
link |
02:50:52.800
are they getting their money's worth? Are they getting a good value for that payment?
link |
02:50:58.160
So first of all, there are students. And the question is, are students getting good value
link |
02:51:03.840
for their education? And each person is getting value in the sense that they are identified and
link |
02:51:10.240
shown to be a more capable person, which is then worth more salary as an employee later.
link |
02:51:15.440
But there is a case for saying there's a big waste to the system because we aren't actually
link |
02:51:21.200
changing the students or educating them. We're more sorting them or labeling them. And that's
link |
02:51:27.280
a very expensive process to produce that outcome. And part of the expense is the freedom of tenure,
link |
02:51:33.200
I guess. So I feel like I can't be too proud of that because it's basically a tax on all these
link |
02:51:38.720
young students to pay this enormous amount of money in order to be labeled as better. Whereas I
link |
02:51:43.440
feel like we should be able to find cheaper ways of doing that. The other main customer is
link |
02:51:49.120
research patrons like the government or other foundations. And then the question is,
link |
02:51:54.160
are they getting their money's worth out of the money they're paying for research to happen?
link |
02:51:59.920
And my analysis is they don't actually care about the research progress. They are mainly
link |
02:52:05.280
buying an affiliation with credentialed impressiveness on the part of the researchers.
link |
02:52:09.520
They mainly pay money to researchers who are impressive and have high, you know,
link |
02:52:13.840
impressive affiliations. And they don't really much care what research project happens as a result.
link |
02:52:18.880
Is that a cynical take? So there's a deep truth to that cynical perspective. Is there
link |
02:52:26.720
a less cynical perspective that they do care about the long term investment into the progress
link |
02:52:32.640
of science and humanity? Well, they might personally care, but they're stuck in an equilibrium.
link |
02:52:37.680
Sure.
link |
02:52:38.160
Wherein, basically, most foundations, like governments or research funders or, you know,
link |
02:52:43.600
the Ford Foundation, they are, the individuals there are rated based on the prestige they bring
link |
02:52:50.000
to that organization. And even if they might personally want to produce more intellectual
link |
02:52:54.800
progress, they are in a competitive game where they don't have tenure and they need to produce
link |
02:53:00.160
this prestige. And so once they give grant money to prestigious people, that is the thing that
link |
02:53:04.880
shows that they have achieved prestige for the organization. And that's what they need to do in
link |
02:53:08.800
order to retain their position. And you do hope that there's a correlation between prestige and
link |
02:53:14.800
actual competence. Of course, there is a correlation. The question is just, could we do
link |
02:53:19.760
this better some other way? I think it's almost, I think it's pretty clear we could. What is harder
link |
02:53:25.280
to do is move the world to a new equilibrium where we do that instead. What are the components
link |
02:53:31.520
of the better ways to do it? Is it money? So how, the sources of money and how the money is
link |
02:53:39.440
allocated to give the individual researchers freedom? Years ago I started studying this topic
link |
02:53:46.640
exactly because this was my issue and this was many decades ago now. And I spent a long time
link |
02:53:51.680
and my best guess still is prediction markets, betting markets. So if you as a research
link |
02:53:58.240
patron want to know the answer to a particular question, like what's the mass of
link |
02:54:02.800
the electron neutrino, then what you can do is just subsidize a betting market in that question.
link |
02:54:09.040
And that will induce more research into answering that question because the people who then
link |
02:54:13.440
answer that question can then make money in that betting market with the new information they gain.
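A brief aside on mechanism: the standard way to subsidize such a betting market is a market scoring rule, and Hanson himself proposed the logarithmic version. Below is a minimal Python sketch; the class name and parameter values are illustrative, and the patron's worst-case subsidy is capped at b·ln(n):

```python
import math

class LMSRMarket:
    """Logarithmic market scoring rule: one way a patron can subsidize
    a betting market on a question with several possible answers."""

    def __init__(self, n_outcomes, b=100.0):
        self.b = b                   # liquidity: patron's max loss is b*ln(n)
        self.q = [0.0] * n_outcomes  # shares sold of each outcome so far

    def cost(self, q):
        # Market maker's cost function C(q) = b * ln(sum_i exp(q_i / b))
        return self.b * math.log(sum(math.exp(x / self.b) for x in q))

    def price(self, i):
        """Current market probability of outcome i."""
        total = sum(math.exp(x / self.b) for x in self.q)
        return math.exp(self.q[i] / self.b) / total

    def buy(self, i, shares):
        """Buy shares in outcome i; each share pays $1 if i comes true.
        Returns the cash the trader pays, C(q_new) - C(q_old)."""
        new_q = list(self.q)
        new_q[i] += shares
        payment = self.cost(new_q) - self.cost(self.q)
        self.q = new_q
        return payment

m = LMSRMarket(n_outcomes=2)
cost1 = m.buy(0, 50)   # a trader with new information bets on outcome 0
```

A trader who learns something moves the price toward their belief and profits if they are right, which is exactly the research-inducing incentive described above.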
link |
02:54:17.680
So that's a robust way to induce more information on a topic. If you want to induce an
link |
02:54:22.960
accomplishment, you can create prizes. And there's of course a long history of prizes to induce
link |
02:54:28.160
accomplishments. And we moved away from prizes, even though we once used them far more often than
link |
02:54:35.040
we did today. And there's a history to that. And for the customers who want to be affiliated with
link |
02:54:43.040
impressive academics, which is what most of the customers want, students, journalists, and patrons,
link |
02:54:48.320
I think there's a better way of doing that, which I just wrote about in my second most recent blog
link |
02:54:53.440
post. Can you explain? Sure. What we do today is we take sort of acceptance by other academics
link |
02:54:59.920
recently as our best indication of their deserved prestige. That is recent publications, recent
link |
02:55:07.280
job affiliation, institutional affiliations, recent invitations to speak, recent grants.
link |
02:55:13.840
We are today taking other impressive academics' recent choices to affiliate with them as our best
link |
02:55:21.440
guesstimate of their prestige. I would say we could do better by creating betting markets in what the
link |
02:55:28.000
distant future will judge to have been their deserved prestige looking back on them. I think
link |
02:55:34.000
most intellectuals, for example, think that if we looked back two centuries, say to intellectuals
link |
02:55:39.920
from two centuries ago, and tried to look in detail at their research and how it influenced
link |
02:55:45.680
future research and which path it was on, we could much more accurately judge their actual
link |
02:55:52.960
deserved prestige. That is who was actually on the right track, who actually helped, which will be
link |
02:55:58.080
different than what people at the time judged using the immediate indications at the time of
link |
02:56:02.640
which position they had or which publications they had or things like that. So in this way,
link |
02:56:07.440
if you think from the perspective of multiple centuries, you would more highly prioritize true
link |
02:56:15.280
novelty, you would disregard the temporal proximity, like how recent the thing is,
link |
02:56:21.200
and you would think, what is the brave, the bold, the big, novel idea in this sense,
link |
02:56:27.600
and you would actually, you would be able to rate that because you could see the path
link |
02:56:31.520
that ideas took, which things hit dead ends, which led to what followed. You could,
link |
02:56:36.080
looking back centuries later, have a much better estimate of who actually had what long term
link |
02:56:41.680
effects on intellectual progress. So my proposal is we actually pay people in several centuries to
link |
02:56:47.200
do this historical analysis. And we have prediction markets today where we buy and sell
link |
02:56:52.240
assets, which will later pay off in terms of those final evaluations. So now we'll be inducing
link |
02:56:58.000
people today to make their best estimate of those things by actually looking at the details of
link |
02:57:03.280
people and setting the prices accordingly. So my proposal would be we rate people today on those
link |
02:57:08.320
prices today. So instead of looking at their list of publications or affiliations, you look at the
link |
02:57:12.640
actual price of assets that represent people's best guess of what the future will say about them.
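To make the proposal concrete, here is a hypothetical sketch of such an asset; the payoff rule, scores, and probabilities below are invented for illustration and are not part of Hanson's proposal:

```python
# Hypothetical "deserved prestige" future: one share on researcher X pays
# (score / 100) dollars when, say, historians in the year 2222 grade X's
# long-term influence on a 0-100 scale. A risk-neutral trader's fair price
# is their probability-weighted estimate of that eventual grade.

def fair_price(belief):
    """belief maps possible future scores to probabilities summing to 1."""
    assert abs(sum(belief.values()) - 1.0) < 1e-9
    return sum(score / 100 * p for score, p in belief.items())

market_belief = {20: 0.5, 60: 0.5}    # market's implied price per share
trader_belief = {40: 0.25, 80: 0.75}  # a close reader of X's work

# The trader expects a higher payoff than the market price, so they buy,
# pushing up the public price, which is X's visible "prestige rating" today.
assert fair_price(trader_belief) > fair_price(market_belief)
```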
link |
02:57:18.320
That's brilliant. So this concept of idea futures, can you elaborate what this would entail?
link |
02:57:26.720
I've been elaborating two versions of it here. So one is if there's a particular question,
link |
02:57:32.080
say the mass of the electron neutrino, and what you as a patron want to do is get an answer to
link |
02:57:37.200
that question, then what you would do is subsidize the betting market in that question under the
link |
02:57:42.800
assumption that eventually we'll just know the answer and we can pay off the bets that way.
link |
02:57:47.120
And that is a plausible assumption for many kinds of concrete intellectual questions like what's the
link |
02:57:51.760
mass of the electron neutrino. In this hypothetical world that you're constructing that may be a real
link |
02:57:56.480
world, do you mean literally financial? Yes. Literal. Very literal. Very cash. Very direct
link |
02:58:05.600
and literal. Yes. Or crypto. Well, crypto is money. Yes, sure. So the idea would be research labs
link |
02:58:12.560
would be for profit. They would have as their expense paying researchers to study things and
link |
02:58:17.840
then their profit would come from using the insights the researchers gain to trade in these
link |
02:58:22.080
financial markets. Just like hedge funds today make money by paying researchers to study firms
link |
02:58:28.880
and then making their profits by trading on that insight in the ordinary financial market.
link |
02:58:33.440
And the market, if it's efficient, would be able to become better and better at predicting
link |
02:58:40.320
the powerful ideas that the individual is able to generate. The variance around the mass of the
link |
02:58:44.960
electron neutrino would decrease with time as we learned that value of that parameter better and
link |
02:58:49.360
any other parameters that we wanted to estimate. You don't think those markets would also respond
link |
02:58:53.760
to recency of prestige and all those kinds of things? They would respond, but the question is
link |
02:59:00.080
whether they might respond incorrectly. But if you think they're doing it incorrectly, you have a
link |
02:59:03.920
profit opportunity where you can go fix it. So we'd be inviting everybody to ask whether they can
link |
02:59:10.160
find any biases or errors in the current ways in which people are estimating these things from
link |
02:59:14.000
whatever clues they have. Right. There's not a big incentive for a correction mechanism in academia
link |
02:59:18.720
currently. It's the safe choice to go with the prestige. Exactly. And there's no.
link |
02:59:26.720
Even if you privately think that the prestige is overrated. Even if you think strongly that
link |
02:59:33.280
it's overrated. Still you don't have an incentive to defy that publicly. You're going to lose a lot
link |
02:59:38.400
unless you're a contrarian that writes brilliant blogs and then you could talk about it in the
link |
02:59:44.640
podcast. Right. I mean, this was my initial concept of having these betting markets
link |
02:59:49.200
on these key parameters. And what I then realized over time was that that's more what people
link |
02:59:53.600
pretend to care about. What they really mostly care about is just who's how good. And that's
link |
02:59:58.400
what most of the system is built on is trying to rate people and rank them. And so I designed this
link |
03:00:03.200
other alternative based on historical evaluation centuries later, just about who's how good,
link |
03:00:08.480
because that's what I think most of the customers really care about.
link |
03:00:10.720
Customers. I like the word customers here. Humans. Right. Well, every major area of life,
link |
03:00:16.400
which, you know, has specialists who get paid to do that thing must have some customers from
link |
03:00:20.480
elsewhere who are paying for it. Well, who are the customers for the mass of the neutrino?
link |
03:00:25.280
Yes. I, I understand that, in a sense: people who are willing to pay. Right. For a thing.
link |
03:00:33.440
That's an important thing to understand about anything: who are the customers
link |
03:00:36.960
and what's the product? Like medicine, education, academia, military, et cetera, that's part of the
link |
03:00:42.960
hidden motives analysis. Often people have a thing they say about what the product is and who the
link |
03:00:46.720
customer is. And maybe you need to dig a little deeper to find out what's really going on.
link |
03:00:50.800
Or a lot deeper. You, uh, you've written that you seek out quote view quakes. You're able as a,
link |
03:00:59.520
uh, as an intelligent black box word generating machine, you're able to generate a lot of sexy
link |
03:01:03.680
words. I like it. I love it. View quakes, which are insights, which dramatically changed my
link |
03:01:10.720
worldview, your worldview. Uh, you write, I loved science fiction as a child, studied physics and
link |
03:01:17.840
artificial intelligence for a long time each, and now study economics and political science,
link |
03:01:23.520
all fields full of such insights. So let me ask, what are some view quakes or a beautiful,
link |
03:01:30.960
surprising idea to you from each of those fields, physics, AI, economics, political science?
link |
03:01:36.800
I know it's a tough question. Something that springs to mind about physics, for example,
link |
03:01:40.960
that just as beautiful. I mean, right from the beginning, say special relativity was a big
link |
03:01:45.600
surprise. Uh, you know, most of us have a simple concept of time and it seems perfectly adequate
link |
03:01:51.120
for everything we've ever seen. And to have it explained to you that you need to sort of have a
link |
03:01:55.600
mixture concept of time and space where you put it into the space time construct, how it looks
link |
03:02:00.880
different from different perspectives. That was quite a shock. And that was, you know, such a
link |
03:02:06.480
shock that it makes you think, what else do I know that, you know, isn't the way it seems. Certainly
link |
03:02:11.920
quantum mechanics is certainly another enormous shock in terms of from your point, you know,
link |
03:02:16.480
you have this idea that there's a space and then there's, you know, point particles at points and
link |
03:02:21.440
maybe fields in between. And, um, quantum mechanics is just a whole different representation. It looks
link |
03:02:28.240
nothing like what you would have thought as sort of the basic representation of the physical world.
link |
03:02:32.880
And that was quite a surprise. What would you say is the catalyst for the, for the view quake in
link |
03:02:39.120
theoretical physics in the 20th century? Where does that come from? So the interesting thing
link |
03:02:43.040
about Einstein, it seems like a lot of that came from like almost thought experiments. It wasn't
link |
03:02:47.520
really experimentally driven. Um, and actually, I don't know the full story of quantum
link |
03:02:55.520
mechanics, how much of it is experiment, like where, if you, if you look at the full trace of
link |
03:03:01.360
idea generation there, uh, of all the weird stuff that falls out of quantum mechanics, how much of
link |
03:03:07.040
that was the experimentalist? How much was it the theoreticians? But usually in theoretical
link |
03:03:11.520
physics, the theories lead the way. So maybe can you, uh, can you elucidate like what, what is the
link |
03:03:18.720
catalyst for these? The remarkable thing about physics and about many other areas of academic
link |
03:03:24.800
intellectual life is that it just seems way overdetermined. That is, if it hadn't been for
link |
03:03:31.360
Einstein or if it hadn't been for Heisenberg, certainly within a half a century, somebody else
link |
03:03:36.640
would have come up with essentially the same things. Is that something you believe or is that
link |
03:03:41.520
something? Yes. So I think when you look at sort of just the history of physics and the history of
link |
03:03:46.160
other areas, you know, some areas like that, there's just this enormous convergence that the,
link |
03:03:51.040
the different kinds of evidence that was being collected was so redundant in the sense that so
link |
03:03:56.400
many different things revealed the same things that eventually you just kind of have to accept it
link |
03:04:02.320
because it just gets obvious. So if you look at the details, of course, you know, Einstein did it
link |
03:04:08.720
before somebody else and it's well worth celebrating Einstein for that. And, you know, we, by
link |
03:04:13.920
celebrating the particular people who did something first or came across something first,
link |
03:04:17.840
we are encouraging all the rest to move a little faster, to try to, to push us all a little faster,
link |
03:04:25.360
which is great. But I still think we would have gotten roughly to the same place within a half
link |
03:04:32.080
century. So sometimes people are special because of how much longer it would have taken. So some
link |
03:04:37.760
people say general relativity would have taken longer without Einstein than other things. I mean,
link |
03:04:42.720
Heisenberg quantum mechanics, I mean, there were several different formulations of quantum mechanics
link |
03:04:46.080
all around the same few years, which means no one of them made that much of a difference. We would have
link |
03:04:51.280
had pretty much the same thing regardless of which of them did it exactly when. Nevertheless,
link |
03:04:56.800
I'm happy to celebrate them all. But this is a choice I make in my research. That is, when there's
link |
03:05:00.960
an area where there's lots of people working together, you know, who are sort of scooping each
link |
03:05:05.680
other and getting a result just before somebody else does, you ask, well, how much of a difference
link |
03:05:10.080
would I make there? At most, I could make something happen a few months before somebody else. And so
link |
03:05:16.400
I'm less worried about them missing things. So when I'm trying to help the world, like doing research,
link |
03:05:21.200
I'm looking for neglected things. I'm looking for things that nobody's doing. If I didn't do it,
link |
03:05:25.040
nobody would do it. Nobody would do it. Or at least for a long time. In the next 10, 20 years,
link |
03:05:28.800
kind of thing. Right, exactly. Same with general relativity, just, you know, who would do it?
link |
03:05:33.280
It might take another 10, 20, 30, 50 years. So that's the place where you can have the
link |
03:05:36.960
biggest impact is finding the things that nobody would do unless you did them.
link |
03:05:40.560
And then that's when you get the big view quake, the insight. So what about artificial
link |
03:05:45.200
intelligence? Would it be the EMs, the emulated minds? What idea, whether that struck you in the
link |
03:05:56.480
shower one day or that you just...
link |
03:06:00.480
Clearly, the biggest view quake in artificial intelligence is the realization of just how
link |
03:06:05.920
complicated our human minds are. So most people who come to artificial intelligence from other
link |
03:06:11.600
fields or from relative ignorance, a very common phenomenon, which you must be familiar with,
link |
03:06:17.200
is that they come up with some concept and then they think that must be it. Once we implement this
link |
03:06:22.560
new concept, we will have it. We will have full human level or higher artificial intelligence,
link |
03:06:27.200
right? And they're just not appreciating just how big the problem is, how long the road is,
link |
03:06:32.080
just how much is involved, because that's actually hard to appreciate. When we just think,
link |
03:06:36.800
it seems really simple. And studying artificial intelligence, going through many particular
link |
03:06:41.760
problems, looking at each problem, all the different things you need to be able to do
link |
03:06:45.200
to solve a problem like that, makes you realize all the things your minds are doing that you
link |
03:06:50.240
are not aware of. That's that vast subconscious that you're not aware of. That's the biggest
link |
03:06:55.120
view quake from artificial intelligence by far for most people who study artificial intelligence,
link |
03:06:59.360
is to see just how hard it is. I think that's a good point. But I think it's a very early
link |
03:07:07.200
view quake. It's when the Dunning-Kruger crashes hard. It's the first realization that humans are
link |
03:07:16.880
actually quite incredible. The human mind, the human body is quite incredible. There's a lot
link |
03:07:20.800
of different parts to it. But then, see, it's already been so long for me to think about
link |
03:07:27.680
it. It's already been so long for me that I've experienced that view quake that, for me,
link |
03:07:32.160
I now experience the view quakes of, holy shit, this little thing is actually quite powerful,
link |
03:07:37.280
like neural networks. I'm amazed. Because you've become almost cynical after that first view quake
link |
03:07:45.360
of, like, this is so hard. Like, evolution did some incredible work to create the human mind.
link |
03:07:52.080
But then you realize, just like you have, you've talked about a bunch of simple models
link |
03:07:57.120
that simple things can actually be extremely powerful, that maybe emulating the human mind
link |
03:08:04.320
is extremely difficult. But you can go a long way with a large neural network. You can go a long way
link |
03:08:09.840
with a dumb solution. It's that Stuart Russell thing with the reinforcement learning. Holy crap,
link |
03:08:15.040
you can go quite a long way with a simple thing. But we still have a very long road to go,
link |
03:08:18.960
but not unless... I can't, I refuse to sort of know. The road is full of surprises. So long is
link |
03:08:29.040
an interesting word. Like you said, with the six hard steps that humans have to take to arrive at where
link |
03:08:34.800
we are from the origin of life on Earth. So it's long, maybe, in the statistical improbability of
link |
03:08:42.000
the steps that have to be taken. But in terms of how quickly those steps could be taken,
link |
03:08:47.200
I don't know if my intuition says it's, if it's hundreds of years away or if it's a couple of
link |
03:08:55.520
years away, I prefer to measure... Pretty confident, at least a decade. And
link |
03:09:00.560
mildly confident, at least three decades. I can steel man either direction. I prefer to
link |
03:09:05.360
measure that journey in Elon Musks. That's a new... Well, we don't get an Elon Musk very often,
link |
03:09:10.240
so that's a long timescale. For now, I don't know, maybe you can clone or maybe multiply or
link |
03:09:16.160
even know what an Elon Musk is, what that is. What is that? What is... That's a good question.
link |
03:09:21.120
Exactly. Well, that's an excellent question. How does that fit into the model of the three
link |
03:09:26.160
parameters that are required for becoming a grabby alien civilization? That's the question of how
link |
03:09:33.920
much difference any individual makes in the long path of civilization over time. Yes. And it's a favorite
link |
03:09:39.680
topic of historians and people to try to focus on individuals and how much of a difference they
link |
03:09:44.320
make. And certainly, some individuals make a substantial difference in the modest term,
link |
03:09:49.120
right? Like, you know, without Hitler being Hitler in the role he took, European history would
link |
03:09:55.600
have taken a different path for a while there. But if we're looking over like many centuries
link |
03:10:00.400
longer term things, most individuals do fade in their individual influence.
link |
03:10:04.800
So, I mean... Even Einstein. Even Einstein, no matter how sexy your hair is, you will also be
link |
03:10:13.040
forgotten in the long arc of history. So you said at least 10 years. So let's talk a little bit about
link |
03:10:20.320
this AI point of where, how we achieve it. How hard is the problem of solving intelligence
link |
03:10:28.800
by engineering artificial intelligence
link |
03:10:35.280
that achieves human level, human like qualities that we associate with intelligence. How hard
link |
03:10:41.600
is this? What are the different trajectories that take us there? One way to think about it
link |
03:10:46.480
is in terms of the scope of the technology space you're talking about. So let's take the biggest
link |
03:10:52.560
possible scope, all of human technology, right? The entire human economy. So the entire economy
link |
03:11:00.080
is composed of many industries, each of which have many products with many different technologies
link |
03:11:04.880
supporting each one. At that scale, I think we can accept that most innovations are a small
link |
03:11:13.440
fraction of the total. That is, you usually have relatively gradual overall progress. And that
link |
03:11:20.240
individual innovations that have a substantial effect on that total are rare, and their total effect
link |
03:11:25.920
is still a small percentage of the total economy. There's very few individual innovations that
link |
03:11:31.520
made a substantial difference to the whole economy. What are we talking? Steam engine,
link |
03:11:35.920
shipping containers, a few things. Shipping containers deserve to be up there with steam
link |
03:11:42.400
engines, honestly. Can you say exactly why shipping containers... Shipping containers
link |
03:11:48.320
revolutionized shipping. Shipping is very important. But placing that at shipping containers.
link |
03:11:55.280
So you're saying you wouldn't have some of the magic of the supply chain, all that,
link |
03:11:59.040
without shipping containers. That made a big difference, absolutely. Interesting. That's
link |
03:12:02.960
something to look into. We shouldn't take that tangent, although I'm tempted to. But anyway,
link |
03:12:08.480
so there's a few, just a few innovations. Right. So at the scale of the whole economy, right?
link |
03:12:13.360
Right. Now, as you move down to a much smaller scale, you will see individual innovations
link |
03:12:19.360
having a bigger effect, right? So if you look at, I don't know, lawnmowers or something,
link |
03:12:24.720
I don't know about the innovations in lawnmowers, but there were probably like steps where you
link |
03:12:28.320
just had a new kind of lawnmower and that made a big difference to mowing lawns because you're
link |
03:12:34.240
focusing on a smaller part of the whole technology space, right? And sometimes like military
link |
03:12:41.840
technology, there's a lot of military technologies, a lot of small ones, but every once in a while,
link |
03:12:45.200
a particular military weapon like makes a big difference. But still, even so, mostly overall,
link |
03:12:51.200
they're making modest differences to something that's improving relatively steadily. Like the US military
link |
03:12:56.880
is the strongest in the world consistently for a while. No one weapon in the last 70 years has
link |
03:13:02.960
made a big difference in terms of the overall prominence of the US military, right? Because
link |
03:13:07.440
that's just saying, even though every once in a while, even the recent Soviet hypersonic missiles or
link |
03:13:12.160
whatever they are, they aren't changing the overall balance dramatically, right?
link |
03:13:18.000
So when we get to AI, now I can frame the question, how big is AI? Basically, so one way of
link |
03:13:25.840
thinking about AI is it's just all mental tasks. And then you ask what fraction of tasks are mental
link |
03:13:30.560
tasks? And then I go, a lot. And then if I think of AI as like half of everything, then I think,
link |
03:13:38.960
well, it's got to be composed of lots of parts where any one innovation is only a small impact,
link |
03:13:44.000
right? Now, if you think, no, no, no, AI is like AGI. And then you think AGI is a small thing,
link |
03:13:52.560
right? There's only a small number of key innovations that will enable it. Now you're
link |
03:13:57.680
thinking there could be a bigger chunk that you might find that would have a bigger impact. So
link |
03:14:03.040
the way I would ask you to frame these things is in terms of the chunkiness of different areas of
link |
03:14:08.320
technology, in part, in terms of how big they are. If you take 10 chunky areas and you add them
link |
03:14:13.920
together, the total is less chunky. Yeah. But are you able, until you solve
link |
03:14:19.760
the fundamental core parts of the problem to estimate the chunkiness of that problem?
link |
03:14:24.320
Well, if you have a history of prior chunkiness, that could be your best estimate for future
link |
03:14:29.920
chunkiness. So for example, I mean, even at the level of the world economy, right? We've had this,
link |
03:14:34.800
what, 10,000 years of civilization. Well, that's only a short time. You might say, oh, that doesn't
link |
03:14:40.720
predict future chunkiness. But it looks relatively steady and consistent. We can say even in computer
link |
03:14:47.760
science, we've had 70 years of computer science. We have enough data to look at chunkiness of
link |
03:14:52.480
computer science. Like when were there algorithms or approaches that made a big chunky difference
link |
03:15:00.960
and how large a fraction of that was that? And I'd say mostly in computer science,
link |
03:15:05.680
most innovation has been relatively small chunks. The bigger chunks have been rare.
link |
03:15:09.920
Well, this is the interesting thing. This is about AI and just algorithms in general is
link |
03:15:14.640
PageRank. So Google's, right? So sometimes it's a simple algorithm that by itself is not that useful,
link |
03:15:27.360
but the scale of context and in a context that's scalable, depending on the context,
link |
03:15:34.480
all of a sudden the power is revealed. And there's something, I guess that's the nature of chunkiness
link |
03:15:38.960
is that things that can reach a lot of people simply can be quite chunky.
link |
03:15:45.840
So one standard story about algorithms is to say algorithms have a fixed cost plus a marginal cost.
link |
03:15:53.680
And so in history, when you had computers that were very small, you tried all the algorithms
link |
03:15:58.720
that had low fixed costs and you look for the best of those. But over time, as computers got bigger,
link |
03:16:04.160
you could afford to do larger fixed costs and try those. And some of those had more effective
link |
03:16:09.360
algorithms in terms of their marginal cost. And that, in fact, that roughly explains the
link |
03:16:15.040
longterm history where in fact, the rate of algorithmic improvement is about the same as
link |
03:16:19.040
the rate of hardware improvement, which is a remarkable coincidence. But it would be explained
link |
03:16:25.280
by saying, well, there's all these better algorithms you can't try until you have a big enough computer
link |
03:16:30.480
to pay the fixed cost of doing some trials to find out if that algorithm actually saves you
link |
03:16:35.360
on the marginal cost. And so that's an explanation for this relatively continuous history. So we have
link |
03:16:41.600
a good story about why hardware is so continuous. And you might think, why would software be so
link |
03:16:45.520
continuous with the hardware? But if there's a distribution of algorithms in terms of their fixed
link |
03:16:50.320
costs, and it's, say, spread out at a wide log normal distribution, then we could be sort of
link |
03:16:55.840
marching through that log normal distribution, trying out algorithms with larger fixed costs and
link |
03:17:00.560
finding the ones that have lower marginal costs.
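Hanson's fixed-cost story can be sketched numerically. Everything below — the lognormal parameters, the negative correlation between fixed and marginal cost, and the doubling rate — is my own illustrative assumption, not anything stated in the conversation:

```python
import math
import random

random.seed(0)

# Toy model: each algorithm has a fixed cost F drawn from a wide lognormal,
# and a marginal (per-use) cost that tends to be lower for algorithms with
# larger fixed costs. All parameters here are made up for illustration.
algorithms = []
for _ in range(100_000):
    fixed = math.exp(random.gauss(10.0, 5.0))                 # lognormal fixed cost
    marginal = math.exp(random.gauss(0.0, 1.0)) / fixed**0.5  # costlier to build, cheaper to run
    algorithms.append((fixed, marginal))

# Hardware budget doubles every "year"; only algorithms whose fixed cost
# fits the budget can even be trialed. The best known marginal cost then
# falls steadily as hardware grows -- software tracking hardware.
best_marginal = []
for year in range(0, 61, 10):
    budget = 2.0 ** year
    affordable = [m for f, m in algorithms if f <= budget]
    best = min(affordable) if affordable else float("inf")
    best_marginal.append(best)
    print(f"year {year:2d}: best marginal cost {best:.3g}")
```

Under these assumptions the best affordable marginal cost keeps improving as the budget marches through the lognormal tail of fixed costs, which echoes the rough observed match between hardware and software improvement rates.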
link |
03:17:02.880
So would you say AGI, human-level AI, even em, emulated minds, is chunky? Like a few
link |
03:17:18.480
breakthroughs can take us.
link |
03:17:19.680
So an M is by its nature chunky in the sense that if you have an emulated brain and you're
link |
03:17:25.280
25% effective at emulating it, that's crap. That's nothing. Okay. You pretty much need to
link |
03:17:32.800
emulate a full human brain.
link |
03:17:34.240
Is that obvious? Is that obvious?
link |
03:17:36.320
It's pretty obvious. I'm talking about like, you know, so the key thing is you're emulating
link |
03:17:41.680
various brain cells. And so you have to emulate the input output pattern of those cells. So if
link |
03:17:46.400
you get that pattern somewhat close, but not close enough, then the whole system just doesn't have
link |
03:17:51.600
the overall behavior you're looking for, right?
link |
03:17:53.280
But it could have functionally some of the power of the overall system.
link |
03:17:57.200
So there'll be some threshold. The point is when you get close enough, then it goes over the
link |
03:18:00.800
threshold, right? It's like taking a computer chip and deleting every 1% of the gates, right?
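The gate-deletion analogy can be made concrete with a toy reliability model (my sketch; the part count and fidelity values are arbitrary): if all n parts of a system must behave correctly, per-part fidelity p gives p**n overall, which cliffs sharply near p = 1.

```python
# If every one of n parts must work (no redundancy), the whole system
# functions with probability p**n -- a small change in per-part fidelity p
# moves the system across a sharp threshold between "works" and "crap".
def system_works(p: float, n: int) -> float:
    """Probability an n-part series system functions at per-part fidelity p."""
    return p ** n

N = 1_000  # hypothetical number of parts that must all behave correctly
for p in (0.99, 0.999, 0.9999):
    print(f"fidelity {p}: system works with probability {system_works(p, N):.4f}")
```

With these made-up numbers, moving per-part fidelity from 0.99 to 0.9999 takes the system from essentially never working to usually working — the chunkiness Hanson describes. Redundancy would soften but not remove the threshold.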
link |
03:18:05.920
No, that's very chunky. But the hope is that emulating the human brain, I mean, the human
link |
03:18:12.400
brain itself is not...
link |
03:18:13.440
Right. So it has a certain level of redundancy and a certain level of robustness. And so there's
link |
03:18:17.280
some threshold when you get close to that level of redundancy or robustness, then it starts to
link |
03:18:20.640
work. But until you get to that level, it's just going to be crap, right? It's going to be just a
link |
03:18:25.520
big thing that isn't working for us. So we can be pretty sure that emulations is a big chunk in an
link |
03:18:32.320
economic sense, right? At some point, you'll be able to make one that's actually effective in
link |
03:18:37.520
enable substituting for humans. And then that will be this huge economic product that people will
link |
03:18:42.560
try to buy like crazy.
link |
03:18:43.520
You'll bring a lot of value to people's lives, so they'll be willing to pay for it.
link |
03:18:47.360
Right. But it could be that the first emulation costs a billion dollars each, right? And then we
link |
03:18:53.360
have them, but we can't really use them. They're too expensive. And then the cost slowly comes
link |
03:18:56.400
down. And now we have less of a chunky adoption, right? That as the cost comes down, then we use
link |
03:19:03.680
more and more of them in more and more contexts. And that's a more continuous curve. So it's only
link |
03:19:10.160
if the first emulations are relatively cheap that you get a more sudden disruption to society.
link |
03:19:15.760
And that could happen if sort of the algorithm is the last thing you figure out how to do or
link |
03:19:19.360
something.
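The "expensive first em" point can be sketched as a simple adoption model. Everything here — the billion-dollar initial cost, the yearly halving, and the log-normal distribution of willingness to pay — is my assumption for illustration:

```python
import math

def adoption_share(cost: float,
                   mu: float = math.log(50_000),
                   sigma: float = 2.0) -> float:
    # Fraction of potential uses whose value exceeds the current cost,
    # assuming log-normally distributed willingness to pay (illustrative).
    z = (math.log(cost) - mu) / sigma
    return 0.5 * math.erfc(z / math.sqrt(2.0))

first_cost = 1e9  # hypothetical $1B for the first working emulation
for year in range(0, 21, 4):
    cost = first_cost * 0.5 ** year  # cost halves each year (assumed)
    print(f"year {year:2d}: cost ${cost:,.0f}, "
          f"adoption share {adoption_share(cost):.3f}")
```

Even though the breakthrough itself is chunky, adoption traces a smooth curve as the falling cost passes more and more uses' willingness to pay; only a cheap first emulation would produce the sudden society-wide jump.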
link |
03:19:19.840
What about robots that capture some magic in terms of social connection? The robots, like we have a
link |
03:19:28.160
robot dog on the carpet right there. Robots that are able to capture some magic of human connection
link |
03:19:36.160
as they interact with humans, but are not emulating the brain. What about those? How far away?
link |
03:19:42.560
So we're thinking about chunkiness or distance now. So if you ask how chunky is the task of making
link |
03:19:48.320
a, you know, emulatable robot or something, which chunkiness and time are correlated.
link |
03:19:55.760
Right. But it's about how far away it is or how suddenly it would happen. Chunkiness is how
link |
03:20:01.680
suddenly and difficulty is just how far away it is. But it could be a continuous difficulty. It
link |
03:20:07.360
would just be far away, but we'll slowly steadily get there. Or there could be these thresholds where
link |
03:20:14.720
we reach a threshold and suddenly we can do a lot better.
link |
03:20:17.040
Yeah. That's a good question for both. I tend to believe that all of it, not just the M, but AGI
link |
03:20:24.400
too is chunky and human level intelligence embodied in robots is also chunky.
link |
03:20:31.920
The history of computer science and chunkiness so far seems to be my rough best guess for the
link |
03:20:36.720
chunkiness of AGI. That is, it is chunky.
link |
03:20:39.440
It's modestly chunky, not that chunky. Right.
link |
03:20:43.920
Our ability to use computers to do many things in the economy has been moving relatively steadily.
link |
03:20:48.560
Overall, in terms of our use of computers in society,
link |
03:20:52.480
they have been relatively steadily improving for 70 years.
link |
03:20:55.680
No, but I would say that's hard. Yeah. Okay. Okay. I would have to really think about that
link |
03:21:00.400
because neural networks are quite surprising.
link |
03:21:03.200
Sure. But every once in a while we have a new thing that's surprising. But if you stand back,
link |
03:21:07.120
you know, we see something like that every 10 years or so, some new innovation that has a big effect.
link |
03:21:12.800
So, moderately chunky. Yeah.
link |
03:21:19.040
But the history of the level of disruption we've seen in the past would be a rough
link |
03:21:22.240
estimate of the level of disruption in the future. Unless the future is,
link |
03:21:25.280
we're going to hit a chunky territory, much chunkier than we've seen in the past.
link |
03:21:28.720
Well, I do think there's, it's like, like Kuhnian, like revolution type.
link |
03:21:36.720
It seems like the data, especially on AI, is difficult to reason with because it's so recent,
link |
03:21:46.560
it's such a recent field. Wow, AI's been around for 50 years.
link |
03:21:50.560
I mean, 50, 60, 70, 80 years being recent. Okay.
link |
03:21:53.920
It's enough time to see a lot of trends.
link |
03:21:58.720
A few trends, a few trends. I think the internet, computing, there's really a lot of interesting
link |
03:22:06.640
stuff that's happened over the past 30 years that I think the possibility of revolutions
link |
03:22:13.760
is likelier than it was in the... I think for the last 70 years,
link |
03:22:17.840
there have always been a lot of things that look like they had a potential for revolution.
link |
03:22:21.120
So we can't reason well about this. I mean, we can reason well by looking
link |
03:22:25.200
at the past trends. I would say the past trend is roughly your best guess for the future.
link |
03:22:30.000
No, but if I look back at the things that might've looked like revolutions in the 70s and 80s and 90s,
link |
03:22:37.280
they are less like the revolutions that appear to be happening now, or the capacity of revolution
link |
03:22:43.840
that appear to be there now. First of all, there's a lot more money to be made. So there's a lot more
link |
03:22:49.520
incentive for markets to do a lot of kind of innovation, it seems like in the AI space.
link |
03:22:54.640
But then again, there's a history of winters and summers and so on.
link |
03:22:58.560
So maybe we're just like riding a nice wave right now.
link |
03:23:00.960
One of the biggest issues is the difference between impressive demos and commercial value.
link |
03:23:05.760
Yes.
link |
03:23:06.480
So we often through the history of AI, we saw very impressive demos
link |
03:23:10.160
that never really translated much into commercial value.
link |
03:23:12.880
Somebody who works on and cares about autonomous and semi autonomous vehicles,
link |
03:23:17.120
tell me about it. And there again, we return to the number of Elon Musk's per earth per year
link |
03:23:24.800
generated. That's E.M. Coincidentally, same initials as the em.
link |
03:23:31.440
Very suspicious, very suspicious. We're going to have to look into that. All right. Two more fields
link |
03:23:37.840
that I would like to force and twist your arm to look for view quakes and for beautiful ideas,
link |
03:23:43.040
economics. What is a beautiful idea to you about economics? You mentioned a lot of them.
link |
03:23:53.120
Sure. So as you said before, there's going to be the first view quake most people encounter that
link |
03:23:58.640
makes the biggest difference on average in the world, because that's the only thing most people
link |
03:24:02.880
ever see is the first one. And so with AI, the first one is just how big the problem is. But
link |
03:24:10.800
once you get past that, you'll find others. Certainly for economics, the first one is just
link |
03:24:16.000
the power of markets. You might have thought it was just really hard to figure out how to optimize
link |
03:24:22.640
in a big, complicated space. And markets just do a good first pass for an awful lot of stuff.
link |
03:24:29.040
And they are really quite robust and powerful. And that's just quite the view quake, where you just
link |
03:24:35.520
say, if you want to get in the ballpark, just let a market handle it and step back. And that's true
link |
03:24:43.440
for a wide range of things. It's not true for everything, but it's a very good first approximation.
link |
03:24:48.640
Most people's intuitions for how they should limit markets are actually messing them up.
link |
03:24:53.680
They're that good, in a sense. Most people go, I don't know if we want to trust that.
link |
03:24:57.440
Well, you should be trusting that. What are markets? Just a couple of words. So the idea
link |
03:25:07.280
is if people want something, then let other companies form to try to supply that thing.
link |
03:25:12.480
Let those people pay for their cost of whatever they're making and try to offer that product
link |
03:25:16.960
to those people. Let many such firms enter that industry and let the customers decide
link |
03:25:22.320
which ones they want. And if the firm goes out of business, let it go bankrupt and let other
link |
03:25:26.320
people invest in whichever ventures they want to try to attract customers to their version
link |
03:25:30.480
of the product. And that just works for a wide range of products and services.
link |
03:25:34.320
And through all of this, there's a free exchange of information too.
link |
03:25:37.760
There's a hope that there's no manipulation of information and so on.
link |
03:25:43.280
Even when those things happen, still just the simple market solution is usually better
link |
03:25:48.000
than the things you'll try to do to fix it.
link |
03:25:49.680
Than the alternative.
link |
03:25:50.560
That's a view quake. It's surprising. It's not what you would have initially thought.
link |
03:25:55.040
That's one of the great, I guess, inventions of human civilization, to trust the markets.
link |
03:26:02.240
Now, another view quake that I learned in my research that's not all of economics,
link |
03:26:05.840
but something more specialized is the rationality of disagreement. That is,
link |
03:26:11.040
basically people who are trying to believe what's true in a complicated situation would not actually
link |
03:26:16.320
disagree. And of course, humans disagree all the time. So it was quite the striking fact for me to
link |
03:26:22.080
learn in grad school that actually rational agents would not knowingly disagree. And so that makes
link |
03:26:28.960
disagreement more puzzling and it makes you less willing to disagree.
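The "rational agents would not knowingly disagree" result (Aumann's agreement theorem) can be illustrated with a deliberately simplified sketch: two Bayesian agents with a common prior over a coin's bias who make their private evidence common knowledge. The coin biases and flip counts below are made up:

```python
# Simplified illustration of Aumann-style agreement: agents who share a
# common prior and make their evidence common knowledge end up with
# identical posteriors -- they cannot "agree to disagree."

def posterior(heads: int, flips: int,
              thetas=(0.25, 0.75), prior=(0.5, 0.5)):
    """Posterior over the coin's possible biases given observed flips."""
    likes = [p * t ** heads * (1 - t) ** (flips - heads)
             for t, p in zip(thetas, prior)]
    total = sum(likes)
    return [like / total for like in likes]

a_heads, a_flips = 7, 10  # agent A's private flips (hypothetical)
b_heads, b_flips = 2, 10  # agent B's private flips (hypothetical)

print("A alone:      ", posterior(a_heads, a_flips))
print("B alone:      ", posterior(b_heads, b_flips))

# Once both sets of flips are common knowledge, both agents compute the
# same pooled posterior, however different their initial views were.
pooled = posterior(a_heads + b_heads, a_flips + b_flips)
print("after sharing:", pooled)
```

The theorem's content is stronger than this pooling demo — merely exchanging posterior announcements back and forth is enough to force eventual agreement — but the pooled computation shows the end state the theorem guarantees.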
link |
03:26:35.520
Humans are, to some degree, rational and are able to...
link |
03:26:40.480
Their priorities are different than just figuring out the truth.
link |
03:26:43.840
Are different than just figuring out the truth.
link |
03:26:48.720
Which might not be the same as being irrational.
link |
03:26:52.160
That's another tangent that could take an hour.
link |
03:26:56.480
In the space of human affairs, political science, what is a beautiful, foundational,
link |
03:27:04.480
interesting idea to you, a view quake, in the space of political science?
link |
03:27:08.160
The main thing that goes wrong in politics is people not agreeing on what the best thing to do is.
link |
03:27:19.120
That's a wrong thing.
link |
03:27:20.560
So that's what goes wrong. That is where you say, what's fundamentally behind most
link |
03:27:24.160
political failures? It's that people are ignorant of what the consequences of policy is.
link |
03:27:30.480
And that's surprising because it's actually feasible to solve that problem,
link |
03:27:34.560
which we aren't solving.
link |
03:27:35.680
So it's a bug, not a feature that there's an inability to arrive at a consensus.
link |
03:27:43.040
So most political systems, if everybody looked to some authority, say, on a question and that
link |
03:27:47.840
authority told them the answer, then most political systems are capable of just doing that thing.
link |
03:27:55.200
That is. And so it's the failure to have trustworthy authorities
link |
03:28:00.000
that is sort of the underlying failure behind most political failure.
link |
03:28:04.400
We invade Iraq, say, when we don't have an authority to tell us that's a really stupid
link |
03:28:09.760
thing to do. And it is possible to create more informative trustworthy authorities.
link |
03:28:17.920
That's a remarkable fact about the world of institutions that we could do that, but we aren't.
link |
03:28:24.640
Yeah, that's surprising. We could and we aren't.
link |
03:28:28.000
Right. Another big view quake about politics is from The Elephant in the Brain: that most people,
link |
03:28:31.920
when they're interacting with politics, they say they want to make the world better,
link |
03:28:35.760
make their city better, their country better, but that's not their priority.
link |
03:28:39.280
What is it?
link |
03:28:40.240
They want to show loyalty to their allies. They want to show their people they're on their side,
link |
03:28:44.560
yes. Or their various tribes they're in, that's their primary priority and they do accomplish that.
link |
03:28:51.360
Yeah. And the tribes are usually color coded, conveniently enough.
link |
03:28:55.120
What would you say, you know, it's the Churchill question. Democracy is the crappiest form of
link |
03:29:01.280
government, but it's the best one we got. What's the best form of government for this, our 7 billion
link |
03:29:08.560
human civilization and the maybe as we get farther and further. You mentioned a lot of stuff
link |
03:29:14.960
that's fascinating about human history as we become more forager like and looking out beyond
link |
03:29:21.200
what's the best form of government in the next 50, 100 years as we become a multi planetary species.
link |
03:29:26.080
So, the key failing is that we have existing political institutions and related institutions
link |
03:29:33.840
like media institutions and other authority institutions, and these institutions sit in
link |
03:29:39.600
a vast space of possible institutions. And the key failing, we're just not exploring that space.
link |
03:29:44.080
So, I have made my proposals in that space,
link |
03:29:50.400
and I think I can identify many promising solutions. And many other people have made many
link |
03:29:54.880
other promising proposals in that space. But the key thing is we're just not pursuing those
link |
03:29:59.040
proposals. We're not trying them out on small scales, we're not doing tests, we're not exploring
link |
03:30:04.080
the space of these options. That is the key thing we're failing to do. And if we did that, I am
link |
03:30:10.320
confident we would find much better institutions than the ones we're using now, but we would have to
link |
03:30:14.800
actually try. So, a lot of those topics, I do hope we get a chance to talk again. You're a fascinating
link |
03:30:23.040
human being. So, I'm skipping a lot of tangents on purpose that I would love to take. You're such a
link |
03:30:28.400
brilliant person on so many different topics. Let me take a stroll into the deep human psyche of
link |
03:30:40.640
Robin Hanson himself. So, first... May not be that deep.
link |
03:30:48.320
I might just be all on the surface. What you see is what you get. There might not be much hiding
link |
03:30:51.760
behind it. Some of the fun is on the surface. I actually think this is true of many of the most
link |
03:30:58.960
successful, most interesting people you see in the world. That is, they have put so much effort
link |
03:31:04.640
into the surface that they've constructed. And that's where they put all their energy. Somebody
link |
03:31:10.160
might be a statesman or an actor or something else, and people want to interview them and they
link |
03:31:14.640
want to say, what are you behind the scenes? What do you do in your free time? Those people don't
link |
03:31:18.800
have free time. They don't have another life behind the scenes. They put all their energy into
link |
03:31:24.000
that surface, the one we admire, the one we're fascinated by. And they kind of have to make up
link |
03:31:28.560
the stuff behind the scenes to supply it for you, but it's not really there. Well, there's several
link |
03:31:33.520
ways of phrasing that. So, one of it is authenticity, which is if you become the thing you are on the
link |
03:31:41.360
surface, if the depths mirror the surface, then that's what authenticity is. You're not hiding
link |
03:31:48.160
something. You're not concealing something. To push back on the idea of actors, they actually have
link |
03:31:52.800
often a manufactured surface that they put on and they try on different masks and the depths are
link |
03:32:00.000
very different from the surface. And that's actually what makes them very not interesting
link |
03:32:03.680
to interview. If you are an actor who actually lives the role that you play, so like, I don't
link |
03:32:13.200
know, Clint Eastwood type character who clearly represents the cowboy, like at least rhymes or
link |
03:32:20.800
echoes the person you play on the surface, that's authenticity. Some people are typecasts and they
link |
03:32:26.080
have basically one persona they play in all of their movies and TV shows. And so those people,
link |
03:32:30.480
it probably is the actual persona that they are, or it has become that over time. Clint Eastwood
link |
03:32:37.120
would be one. I think of Tom Hanks as another. I think they just always play the same person.
link |
03:32:40.880
And you and I are just both surface players. You're the fun, brilliant thinker and I am the
link |
03:32:49.200
suit wearing idiot full of silly questions. All right. That said, let's put on your wise
link |
03:33:01.520
sage hat and ask you, what advice would you give to young people today in high school and college
link |
03:33:07.200
about life, about how to live a successful life in career or just in general that they can be proud
link |
03:33:15.280
of? Most young people, when they actually ask you that question, what they usually mean is how can
link |
03:33:22.400
I be successful by usual standards? I'm not very good at giving advice about that because that's
link |
03:33:28.320
not how I tried to live my life. So I would more flip it around and say, you live in a rich society
link |
03:33:36.560
and you will have a long life. You have many resources available to you. Whatever career you
link |
03:33:44.480
take, you'll have plenty of time to make progress on something else. Yes, it might be better if you
link |
03:33:50.720
find a way to combine your career and your interests in a way that gives you more time
link |
03:33:54.800
and energy, but there are often big compromises there as well. So if you have a passion about some
link |
03:34:00.560
topic or some thing that you think just is worth pursuing, you can just do it. You don't need other
link |
03:34:05.760
people's approval. And you can just start doing whatever it is you think it's worth doing. It
link |
03:34:12.480
might take you decades, but decades are enough to make enormous progress on most all interesting
link |
03:34:17.280
things. And don't worry about the commitment of it. I mean, that's a lot of what people worry
link |
03:34:21.840
about is, well, there's so many options. And if I choose a thing and I stick with it, I sacrifice
link |
03:34:27.520
all the other paths I could have taken. So I switched my career at the age of 34 with two
link |
03:34:32.640
kids, age zero and two, went back to grad school in social science after being a research software
link |
03:34:39.360
engineer. So it's quite possible to change your mind later in life.
link |
03:34:45.120
How can you have an age of zero?
link |
03:34:48.560
Less than one.
link |
03:34:50.880
Okay. Oh, you index from zero. I got it. Okay.
link |
03:34:55.120
Right. People also ask what to read and I say, textbooks. Until you've read lots of textbooks
link |
03:35:02.000
or maybe review articles, I'm not so sure you should be reading blog posts and Twitter feeds
link |
03:35:08.880
and even podcasts. I would say at the beginning, this is our best, humanity's best summary of how
link |
03:35:16.080
to learn things is crammed into textbooks. Especially the ones on like introduction to
link |
03:35:21.760
biology, introduction to everything. Just read all of them, read as many textbooks as
link |
03:35:26.720
you can stomach. And then maybe if you want to know more about a subject, find review articles.
link |
03:35:30.480
Right. You don't need to read the latest stuff for most topics.
link |
03:35:33.520
Yeah. And actually textbooks often have the prettiest pictures.
link |
03:35:37.120
There you go.
link |
03:35:37.680
And depending on the field, if it's technical, then doing the homework problems at the end,
link |
03:35:42.400
it's actually extremely, extremely useful. Extremely powerful way to understand something
link |
03:35:47.360
if you allow it. I actually think of like high school and college, which you kind of remind me
link |
03:35:54.000
of, people don't often think of it that way, but you will almost not again get an opportunity
link |
03:36:02.000
to spend the time with a fundamental subject and like, and everybody's forcing you, like
link |
03:36:08.560
everybody wants you to do it. And like, you'll never get that chance again to sit there,
link |
03:36:14.640
even though it's outside of your interest, biology. Like in high school, I took AP biology,
link |
03:36:19.280
AP chemistry. I'm thinking of subjects I never again really visited seriously. And it was so
link |
03:36:28.000
nice to be forced into anatomy and physiology, to be forced into that world, to stay with it,
link |
03:36:35.040
to look at the pretty pictures, to certain moments to actually for a moment, enjoy the beauty of
link |
03:36:40.800
these, of like how a cell works and all those kinds of things. And you're somehow that stays
link |
03:36:46.240
like the ripples of that fascination that stays with you, even if you never do those,
link |
03:36:51.040
even if you never utilize those learnings in your actual work.
link |
03:36:56.720
A common problem, at least of many young people I meet is that they're like feeling
link |
03:37:01.600
idealistic and altruistic, but in a rush. So, you know, the usual human tradition that goes back,
link |
03:37:09.200
you know, hundreds of thousands of years is that people's productivity rises with time and maybe
link |
03:37:13.920
peaks around the age of 40 or 50. The age of 40, 50 is when you will be having the highest income,
link |
03:37:19.520
you'll have the most contacts, you will sort of be wise about how the world works.
link |
03:37:25.680
Expect to have your biggest impact then. Before then, you can have impacts, but you're also mainly
link |
03:37:31.280
building up your resources and abilities. That's the usual human trajectory. Expect that to be
link |
03:37:38.320
true of you too. Don't be in such a rush to like accomplish enormous things at the age of 18 or
link |
03:37:43.120
whatever. I mean, you might as well practice trying to do things, but that's mostly about
link |
03:37:47.680
learning how to do things by practicing. There's a lot of things you can't do unless you just
link |
03:37:50.720
keep trying them. And when all else fails, try to maximize the number of offspring,
link |
03:37:56.640
however way you can. That's certainly something I've neglected. I would tell my younger version
link |
03:38:01.920
of myself, try to have more descendants. Yes, absolutely. It matters more than
link |
03:38:09.200
I realized at the time. Both in terms of making copies of yourself in mutated form
link |
03:38:18.320
and just the joy of raising them. Sure. I mean, the meaning even, you know, so in the literature on
link |
03:38:28.160
the value people get out of life, there's a key distinction between happiness and meaning.
link |
03:38:32.320
So happiness is how do you feel right now about right now and meaning is how do you feel about
link |
03:38:37.680
your whole life? And, you know, many things that produce happiness don't produce meaning as
link |
03:38:44.240
reliably. And if you have to choose between them, you'd rather have meaning. And meaning is more
link |
03:38:51.680
goes along with sacrificing happiness sometimes. And children are an example of that. You get a lot
link |
03:38:57.120
more meaning out of children, even if they're a lot more work. What do you think kids, children
link |
03:39:05.520
are so magical, like raising kids? I would love to have kids. And whenever I work with robots,
link |
03:39:15.920
there's some of the same magic when there's an entity that comes to life. And in that case,
link |
03:39:21.680
I'm not trying to draw too many parallels, but there is some echo to it, which is when you
link |
03:39:27.920
program a robot, there's some aspect of your intellect that is now instilled in this other
link |
03:39:33.840
moving being that's kind of magical. Well, why do you think that's magical? And you said happiness
link |
03:39:40.240
and meaning as opposed to a short. Why is it meaningful? It's overdetermined. Like I can give
link |
03:39:49.200
you several different reasons, all of which is sufficient. And so the question is, we don't know
link |
03:39:53.200
which ones are the correct reasons. It's overdetermined. Look it up. So, you know, I meet a
link |
03:39:59.840
lot of people interested in the future, interested in thinking about the future. They're thinking
link |
03:40:03.360
about how can I influence the future? But overwhelmingly in history so far, the main way
link |
03:40:08.960
people have influenced the future is by having children, overwhelmingly. And that's just not an
link |
03:40:15.920
incidental fact. You are built for that. That is, you're the sequence of thousands of generations,
link |
03:40:22.480
each of which successfully had a descendant. And that affected who you are. You just have to expect
link |
03:40:28.880
and it's true that who you are is built to be, you know, expect to have a child, to want to have a
link |
03:40:36.800
child, to have that be a natural and meaningful interaction for you. And it's just true. It's just
link |
03:40:41.920
one of those things you just should have expected and it's not a surprise. Well, to push back and
link |
03:40:48.080
sort of in terms of influencing the future, as we get more and more technology, more and more of us
link |
03:40:54.960
are able to influence the future in all kinds of other ways, right? Being a teacher, educator. Even
link |
03:41:00.240
so, though, still most of our influence in the future has probably happened being kids, even
link |
03:41:05.440
though we've accumulated more ways, other ways to do it. You mean at scale. I guess the depth of
link |
03:41:11.120
influence, like really how much effort, how much of yourself you really put into another human
link |
03:41:15.920
being. Do you mean both the raising of a kid or you mean raw genetic information? Well, both, but
link |
03:41:24.480
raw genetics is probably more than half of it. More than half. More than half. Even in this modern
link |
03:41:30.720
world? Yeah. Genetics. Let me ask some dark, difficult questions, if I might. Let's take a
link |
03:41:40.080
stroll into that place that may or may not exist, according to you. What's the darkest place you've
link |
03:41:48.080
ever gone to in your mind, in your life, a dark time, a challenging time in your life that you had to overcome?
link |
03:41:58.160
You know, probably just feeling strongly rejected. And so I've been, I'm apparently somewhat
link |
03:42:06.160
emotionally scarred by just being very rejection averse, which must have happened because
link |
03:42:11.360
some rejections were just very scarring. At a scale in what kinds of communities? On the
link |
03:42:18.400
individual scale? I mean, lots of different scales, yeah. All the different, many different scales. Still
link |
03:42:24.720
that rejection stings. Hold on a second, but you are a contrarian thinker. You challenge the
link |
03:42:33.360
norms. Why, if you were scarred by rejection, why welcome it in so many ways at a much
link |
03:42:43.360
larger scale, constantly with your ideas? It could be that I'm just stupid, or that I've just categorized
link |
03:42:50.560
them differently than I should or something. You know, the most rejection that I've faced hasn't been
link |
03:42:58.160
because of my intellectual ideas. So the intellectual ideas haven't been the thing
link |
03:43:06.000
to risk the rejection. The things that challenge your mind, taking you to a dark
link |
03:43:14.960
place are the more psychological rejections. So. Well, you just asked me, you know, what took me to a
link |
03:43:21.040
dark place. You didn't specify it as sort of an intellectual dark place, I guess. Yeah, I just
link |
03:43:25.600
meant like what? So intellectual is disjoint or at least at a more surface level than something
link |
03:43:33.760
emotional? Yeah, I would just think, you know, there are times in your life when, you know,
link |
03:43:38.560
you're just in a dark place and that can have many different causes. And most, you know, most
link |
03:43:43.520
intellectuals are still just people and most of the things that will affect them are the kinds of
link |
03:43:47.840
things that affect people. They aren't that different necessarily. I mean, that's going to be true for,
link |
03:43:52.240
like, I presume most basketball players are still just people. If you ask them what was the worst
link |
03:43:55.920
part of their life, it's going to be this kind of thing that was the worst part of life for most
link |
03:43:59.760
people. So rejection early in life? Yeah, I think, I mean, not in grade school probably, but, you know,
link |
03:44:06.160
yeah, sort of, you know, being a young nerdy guy and feeling, you know, not in much demand or interest
link |
03:44:13.760
or, you know, later on, lots of different kinds of rejection. But yeah, but I think that's, you know,
link |
03:44:22.800
most of us like to pretend we don't that much need other people. We don't care what they think.
link |
03:44:26.960
I know it's a common sort of stance if somebody rejects you or something, I didn't care about them
link |
03:44:30.720
anyway. I, you know, didn't, but I think to be honest, people really do care. Yeah, we do seek
link |
03:44:35.920
that connection, that love. What do you think is the role of love in the human condition?
link |
03:44:40.480
Um, opacity, in part. That is, love is one of those things where we know at some level it's
link |
03:44:53.440
important to us, but it's not very clearly shown to us exactly how or why or in what ways.
link |
03:45:00.480
There are some kinds of things we want where we can just clearly see that we want and why that we
link |
03:45:03.760
want it, right? We know when we're thirsty, and we know why we were thirsty, and we know what to
link |
03:45:07.280
do about being thirsty, and we know when it's over that we're no longer thirsty. Love isn't like that.
link |
03:45:14.480
It's like, what do we seek from this? We're drawn to it, but we do not understand why
link |
03:45:19.920
we're drawn exactly. Because it's not just affection, because if it was just affection,
link |
03:45:25.040
we don't seem to be drawn to pure affection. We don't seem to be drawn to somebody who's like a
link |
03:45:32.160
servant. We don't seem to be necessarily drawn to somebody that satisfies all your needs or something
link |
03:45:37.520
like that. So it's clearly something we want or need, but we're not exactly very clear about it,
link |
03:45:43.520
and that is kind of important to it. So I've also noticed there are some kinds of things
link |
03:45:48.080
you can't imagine very well. So if you imagine a situation, there's some aspects of the situation
link |
03:45:53.040
that you can clearly, you can imagine it being bright or dim, you can imagine it being windy,
link |
03:45:56.960
or you can imagine it being hot or cold. But there's some aspects about your emotional stance
link |
03:46:02.240
in a situation that's actually just hard to imagine or even remember. You can often remember
link |
03:46:08.240
an emotion only when you're in a similar sort of emotional situation, and otherwise, you just can't
link |
03:46:12.720
bring the emotion to your mind, and you can't even imagine it, right? So there's certain kinds of
link |
03:46:19.280
emotions you can have, and when you're in that emotion, you can know that you have it, and you
link |
03:46:22.480
can have a name, and it's associated. But later on, I tell you, remember joy, and it doesn't come to
link |
03:46:28.560
mind. I'm not able to replay it. Right. And it's the sort of reason why we have, one of the reasons
link |
03:46:33.760
that pushes us to reconsume it and reproduce it is that we can't reimagine it. Well, it's interesting
link |
03:46:41.200
because there's a Daniel Kahneman type of thing of reliving memories, because I'm able to summon
link |
03:46:47.840
some aspect of that emotion, again, by thinking of that situation from which that emotion came.
link |
03:46:53.760
Right. So like a certain song, you can listen to it, and you can feel the same way you felt the
link |
03:46:59.840
first time you remember that song associated with it. Right. So you need to remember that situation
link |
03:47:03.760
in some sort of complete package. Yes. You can't just take one part off of it, and then if you get
link |
03:47:08.400
the whole package again, you remember the whole feeling. Yes. Or some fundamental aspect of that
link |
03:47:13.440
whole experience from which the feeling arose. And actually, the feeling is probably
link |
03:47:18.480
different in some way. It could be more pleasant or less pleasant than the feeling you felt
link |
03:47:23.040
originally, and that morphs over time every time you replay that memory. It is interesting. You're
link |
03:47:28.160
not able to replay the feeling perfectly. You don't remember the feeling. You remember the facts of the
link |
03:47:33.840
events. So there's a sense in which, over time, we expand our vocabulary as a community of language,
link |
03:47:39.200
and that allows us to sort of have more feelings and know that we are feeling them. Because you can
link |
03:47:43.920
have a feeling but not have a word for it, and then you don't know how to categorize it or even
link |
03:47:47.840
what it is and whether it's the same as something else. But once you have a word for it, you can
link |
03:47:52.720
sort of pull it together more easily. And so I think over time we are having a richer palette of
link |
03:47:58.640
feelings because we have more words for them. What has been a painful loss in your life?
link |
03:48:05.520
Maybe somebody or something that's no longer in your life, but played an important part of your life.
link |
03:48:12.640
Youth?
link |
03:48:14.720
That's a concept. No, it has to be...
link |
03:48:16.800
I mean, but I was once younger. I had health and I had vitality. I was
link |
03:48:20.720
insomere. I mean, you know, I've lost that over time.
link |
03:48:22.960
Do you see that as a different person? Maybe you've lost that person.
link |
03:48:26.800
Yes, absolutely. I'm a different person than I was when I was younger, and I don't even remember
link |
03:48:32.080
exactly what he was. So I don't remember as many things from the past as many people do. So in
link |
03:48:36.800
some sense, I've just lost a lot of my history by not remembering it. And I'm not that person
link |
03:48:42.560
anymore. That person is gone and I don't have any of their abilities.
link |
03:48:45.120
Is it a painful loss, though?
link |
03:48:46.960
Yeah.
link |
03:48:47.440
Or is it a... Why is it painful? Because you're wiser.
link |
03:48:54.640
There's so many things that are beneficial to getting older.
link |
03:48:57.840
Right. But I just was this person and I felt assured that I could continue to be that person.
link |
03:49:06.240
And you're no longer that person.
link |
03:49:07.680
And he's gone. And I'm not him anymore. And he died without fanfare or a funeral.
link |
03:49:14.240
And that the person you are today talking to me, that person will be changed, too.
link |
03:49:20.640
Yes. And maybe in 20 years, he won't be there anymore.
link |
03:49:24.320
And the future person, we'll look back. The future version of you will...
link |
03:49:30.560
For Ems, this will be less of a problem. For Ems, they would be able to save an archived
link |
03:49:34.720
copy of themselves at each different age. And they could turn it on periodically and go back
link |
03:49:39.440
and talk to it.
link |
03:49:40.000
To replay. You think some of that will be... So with emulated minds, with Ems,
link |
03:49:46.880
there's a digital cloning that happens. And do you think that makes you less special if you're
link |
03:50:00.000
clonable? Does that make you the experience of life, the experience of a moment, the scarcity
link |
03:50:10.160
of that moment, the scarcity of that experience, isn't that a fundamental part of what makes
link |
03:50:14.640
that experience so delicious, so rich of feeling?
link |
03:50:18.160
I think if you think of a song that lots of people listen to that are copies all over the
link |
03:50:22.480
world, we're going to call that a more special song.
link |
03:50:26.080
Yeah. Yeah.
link |
03:50:32.400
So there's a perspective on copying and cloning where you're just scaling happiness versus
link |
03:50:39.200
degrading it.
link |
03:50:40.160
I mean, each copy of a song is less special if there are many copies, but the song itself is
link |
03:50:46.800
more special if there are many copies.
link |
03:50:48.480
In a mass, right, you're actually spreading the happiness even if it diminishes over a
link |
03:50:55.120
large number of people at scale and that increases the overall happiness in the world.
link |
03:50:59.440
And then you're able to do that with multiple songs.
link |
03:51:02.160
Is a person who has an identical twin more or less special?
link |
03:51:06.800
Well, the problem with identical twins is, you know, it's like just two, compared with Ems.
link |
03:51:16.880
Right, but two is different than one.
link |
03:51:18.480
So I think an identical twin's life is richer for having this other identical twin, somebody
link |
03:51:24.560
who understands them better than anybody else can.
link |
03:51:27.760
From the point of view of an identical twin, I think they have a richer life for being
link |
03:51:32.080
part of this couple, each of which is very similar.
link |
03:51:34.560
Now, if you said, will the world, you know, if we lose one of the identical twins, will
link |
03:51:38.960
the world miss it as much because you've got the other one and they're pretty similar?
link |
03:51:42.240
Maybe from the rest of the world's point of view, they suffer less of a loss when they
link |
03:51:46.560
lose one of the identical twins.
link |
03:51:48.080
But from the point of view of the identical twin themselves, their life is enriched by
link |
03:51:52.400
having a twin.
link |
03:51:53.520
See, but the identical twin copying happens at the place of birth.
link |
03:51:58.240
It's different than copying after you've done some of the environment, like the nurture
link |
03:52:05.600
at the teenage or in the 20s after going to college.
link |
03:52:08.640
Yes, that'll be an interesting thing for Ems to find out all the different ways that
link |
03:52:11.920
they can have different relationships to different people who have different degrees of similarity
link |
03:52:16.080
to them in time.
link |
03:52:17.760
Yeah, yeah, man.
link |
03:52:23.920
But it seems like a rich space to explore and I don't feel sorry for them.
link |
03:52:26.880
This sounds like an interesting world to live in.
link |
03:52:29.200
And there could be some ethical conundrums there.
link |
03:52:31.920
There will be many new choices to make that they don't make now.
link |
03:52:35.200
So, and I discussed that in the book The Age of Em.
link |
03:52:38.560
Like, say you have a lover and you make a copy of yourself, but the lover doesn't make
link |
03:52:43.040
a copy.
link |
03:52:43.440
Well now, which one of you or are both still related to the lover?
link |
03:52:48.880
Socially entitled to show up.
link |
03:52:52.560
Yes, so you'll have to make choices then when you split yourself, which of you inherit
link |
03:52:58.800
which unique things.
link |
03:53:01.760
Yeah, and of course there'll be an equivalent increase in lawyers.
link |
03:53:08.720
Well, I guess you can clone the lawyers to help manage some of these negotiations of
link |
03:53:14.800
how to split property.
link |
03:53:16.160
The nature of owning, I mean, property is connected to individuals, right?
link |
03:53:22.080
You only really need lawyers for this with an inefficient, awkward law that is not very
link |
03:53:26.320
transparent and able to do things.
link |
03:53:28.320
So, you know, for example, an operating system of a computer is a law for that computer.
link |
03:53:33.520
When the operating system is simple and clean, you don't need to hire a lawyer to make a
link |
03:53:37.920
key choice with the operating system.
link |
03:53:38.800
You don't need a human in the loop.
link |
03:53:40.240
You just make a choice, right?
link |
03:53:42.800
So ideally we want a legal system that makes the common choices easy and not require much
link |
03:53:48.640
overhead.
link |
03:53:49.440
And the digitization of things further enables that.
link |
03:53:56.000
So the loss of a younger self, what about the loss of your life overall?
link |
03:54:01.280
Do you ponder your death, your mortality?
link |
03:54:03.760
Are you afraid of it?
link |
03:54:05.120
I am a cryonics customer.
link |
03:54:06.960
That's what this little tag around my neck says.
link |
03:54:09.520
It says that if you find me in a medical situation, you should call these people to enable the
link |
03:54:15.840
cryonics transfer.
link |
03:54:16.960
So I am taking a long shot chance at living a much longer life.
link |
03:54:22.480
Can you explain what cryonics is?
link |
03:54:25.600
So when medical science gives up on me in this world, instead of burning me or letting
link |
03:54:32.800
worms eat me, they will freeze me or at least freeze my head.
link |
03:54:36.960
And there is damage that happens in the process of freezing the head.
link |
03:54:40.400
But once it's frozen, it won't change for a very long time.
link |
03:54:44.240
Chemically, it'll just be completely exactly the same.
link |
03:54:47.520
So future technology might be able to revive me.
link |
03:54:50.960
And in fact, I would be mainly counting on the brain emulation scenario, which doesn't
link |
03:54:55.440
require reviving my entire biological body.
link |
03:54:58.000
It means I would be in a computer simulation.
link |
03:55:02.080
And so I think I've got at least a 5% shot at that.
link |
03:55:06.400
And that's immortality.
link |
03:55:10.320
But most likely it won't happen.
link |
03:55:12.000
And therefore, I'm sad that it won't happen.
link |
03:55:14.960
Do you think immortality is something that you would like to have?
link |
03:55:20.720
Well, I mean, just like infinity, I mean, you can't know until forever, which means
link |
03:55:26.160
never, right?
link |
03:55:26.800
So all you can really, you know, the better choice is at each moment, do you want to keep
link |
03:55:30.800
going?
link |
03:55:31.600
So I would like at every moment to have the option to keep going.
link |
03:55:34.720
The interesting thing about human experience is that the way you phrase it is exactly right.
link |
03:55:45.440
At every moment, I would like to keep going.
link |
03:55:48.720
But the thing that happens, you know, leave them wanting more of whatever that phrase
link |
03:55:58.640
is, the thing that happens is over time, it's possible for certain experiences to become
link |
03:56:04.560
bland and you become tired of them.
link |
03:56:07.840
And that actually makes life really unpleasant.
link |
03:56:13.920
Sorry, makes that experience really unpleasant.
link |
03:56:15.760
And perhaps you can generalize that to life itself if you have a long enough horizon.
link |
03:56:21.280
And so...
link |
03:56:22.000
Might happen, but might as well wait and find out.
link |
03:56:24.560
But then you're ending on suffering, you know?
link |
03:56:28.160
So in the world of brain emulations, I have more options.
link |
03:56:32.800
You can return yourself.
link |
03:56:34.080
That is, I can make copies of myself, archive copies at various ages.
link |
03:56:39.040
And at a later age, I could decide that I'd rather replace myself with a new copy from
link |
03:56:43.440
a younger age.
link |
03:56:44.640
So does a brain emulation still operate in physical space?
link |
03:56:48.800
So can we do, what do you think about like the metaverse and operating in virtual reality
link |
03:56:53.360
so we can conjure up not just emulate, not just your own brain and body, but the entirety
link |
03:57:00.240
of the environment?
link |
03:57:00.880
Well, most brain emulations will, in fact, spend most of their time in virtual reality.
link |
03:57:06.000
But they wouldn't think of it as virtual reality.
link |
03:57:08.480
They would just think of it as their usual reality.
link |
03:57:11.200
I mean, the thing to notice, I think, in our world, most of us spend most time indoors.
link |
03:57:16.320
And indoors, we are surrounded by walls covered with paint and floors covered with
link |
03:57:21.760
tile or rugs.
link |
03:57:23.600
Most of our environment is artificial.
link |
03:57:26.400
It's constructed to be convenient for us.
link |
03:57:28.560
It's not the natural world that was there before.
link |
03:57:31.200
A virtual reality is basically just like that.
link |
03:57:33.840
It is the environment that's comfortable and convenient for you.
link |
03:57:37.760
But when it's the right environment for you, it's real for you.
link |
03:57:41.360
Just like the room you're in right now most likely is very real for you.
link |
03:57:45.040
You're not focused on the fact that the paint is hiding the actual studs behind the
link |
03:57:49.600
wall and the actual wires and pipes and everything else.
link |
03:57:52.880
The fact that we're hiding that from you doesn't make it fake or unreal.
link |
03:57:58.400
What are the chances that we're actually in the very kind of system that you're describing
link |
03:58:04.400
where the environment and the brain is being emulated and you're just replaying an experience
link |
03:58:08.880
when you first did a podcast with Lex?
link |
03:58:14.960
And now, the person that originally launched this already did hundreds of podcasts with
link |
03:58:19.520
Lex.
link |
03:58:19.760
This is just the first time and you like this time because there's so much uncertainty.
link |
03:58:24.560
There's nerves.
link |
03:58:25.360
It could have gone any direction.
link |
03:58:28.320
At the moment, we don't have the technical ability to create that emulation.
link |
03:58:32.560
So we'd have to be postulating that in the future we have that ability and then they
link |
03:58:37.280
choose to simulate this moment now.
link |
03:58:40.400
Don't you think we could be in the simulation of that exact experience right now and we
link |
03:58:45.520
wouldn't be able to know?
link |
03:58:46.320
So one scenario would be this never really happened.
link |
03:58:51.040
This only happens as a reconstruction later on.
link |
03:58:55.040
That's different than the scenario that this did happen the first time and now it's happening
link |
03:58:58.560
again as a reconstruction.
link |
03:59:00.640
That second scenario is harder to put together because it requires this coincidence where
link |
03:59:06.240
between the two times we produce the ability to do it.
link |
03:59:08.960
But don't you think replay of memories, poor replay of memories is something that might
link |
03:59:18.320
be a possible thing in the future?
link |
03:59:19.600
You're saying it's harder than conjure up things from scratch.
link |
03:59:23.600
It's certainly possible.
link |
03:59:25.040
So the main way I would think about it is in terms of the demand for simulation versus
link |
03:59:29.920
other kinds of things.
link |
03:59:31.120
So I've given this a lot of thought because I first wrote about this long ago when Bostrom
link |
03:59:36.160
first wrote his papers about simulation argument and I wrote about how to live in a simulation.
link |
03:59:42.160
And so the key issue is the fraction of creatures in the universe that are really experiencing
link |
03:59:50.560
what you appear to be really experiencing relative to the fraction that are experiencing
link |
03:59:54.480
it in a simulation way, i.e., simulated.
link |
03:59:57.760
So then the key parameter is at any one moment in time, creatures at that time, many of them,
link |
04:00:06.880
most of them are presumably really experiencing what they're experiencing, but some fraction
link |
04:00:10.880
of them are experiencing some past time where that past time is being remembered via their
link |
04:00:17.760
simulation.
link |
04:00:19.680
So to figure out this ratio, what we need to think about is basically two functions.
link |
04:00:26.000
One is how fast in time does the number of creatures grow?
link |
04:00:30.320
And then how fast in time does the interest in the past decline?
link |
04:00:34.800
Because at any one time, people will be simulating different periods in the past with different
link |
04:00:40.000
emphasis.
link |
04:00:40.480
I love the way you think so much.
link |
04:00:42.880
That's exactly right, yeah.
link |
04:00:44.160
So if the first function grows slower than the second one declines, then in fact, your
link |
04:00:51.680
chances of being simulated are low.
link |
04:00:54.160
Yes.
link |
04:00:54.720
So the key question is how fast does interest in the past decline relative to the rate
link |
04:00:58.960
at which the population grows with time?
link |
04:01:00.720
Does this correlate to, you earlier suggested that the interest in the future increases
link |
04:01:05.040
over time. Are those correlated, interest in the future versus interest in the past?
link |
04:01:09.520
Like, why are we interested in the past?
link |
04:01:11.280
So, but the simple way to do it is, as you know, like Google Ngrams has a way to type
link |
04:01:15.840
in a word and see how interest in it declines or rises over time, right?
link |
04:01:20.400
Yeah.
link |
04:01:20.880
You can just type in a year and get the answer for that.
link |
04:01:24.160
If you type in a particular year, like 1900 or 1950, you can see with Google Ngram, how
link |
04:01:30.320
interest in that year increased up until that date and decreased after it.
link |
04:01:34.480
Yep.
link |
04:01:35.040
And you can see that interest in a date declines faster than does the population grow with
link |
04:01:41.040
time.
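Hanson's two-function argument above can be sketched numerically. In this toy model (every parameter here is an illustrative assumption, not a figure from the conversation), population grows at one exponential rate while interest in a fixed past year declines at another; the fraction of experiences of that year that are simulations stays small exactly when the decline rate beats the growth rate, which is what the Ngram observation suggests.

```python
import math


def simulated_fraction(growth, decay, sim_share, horizon=100_000):
    """Toy estimate of the fraction of experiences of 'year 0' that are
    later simulations, under assumed (not measured) parameters.

    growth:    annual population growth rate (continuous)
    decay:     annual rate at which interest in year 0 declines
    sim_share: share of each future population simulating that year
    horizon:   number of future years summed over
    """
    real = 1.0  # normalized population really living through year 0
    # Simulators in year t: population grown by exp(growth*t), discounted
    # by declining interest exp(-decay*t), times the simulating share.
    simulated = sum(
        math.exp((growth - decay) * t) * sim_share for t in range(1, horizon)
    )
    return simulated / (real + simulated)


# Interest declines faster than population grows: simulation chance stays small.
low = simulated_fraction(growth=0.02, decay=0.05, sim_share=0.001)
# Population grows faster than interest declines: the sum dominates instead.
high = simulated_fraction(growth=0.05, decay=0.02, sim_share=0.001, horizon=500)
```

The only quantity that matters is the sign of `growth - decay`: when it is negative the geometric sum converges to a modest constant, so most experiences of any given year are real; when positive, simulated copies of that year eventually swamp the originals.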
link |
04:01:42.880
That is brilliant.
link |
04:01:44.640
That is so interesting.
link |
04:01:45.600
And so you have the answer.
link |
04:01:48.000
Wow.
link |
04:01:49.120
Wow.
link |
04:01:50.560
And that was your argument about, not against, this particular aspect of the simulation,
link |
04:01:56.160
how much past simulation there will be, replay of past memories.
link |
04:02:01.920
First of all, if we assume that like simulation of the past is a small fraction of all the
link |
04:02:06.000
creatures at that moment.
link |
04:02:07.120
Yes.
link |
04:02:07.440
Right.
link |
04:02:08.880
And then it's about how fast.
link |
04:02:10.400
Now, some people have argued plausibly that maybe most interest in the past falls with
link |
04:02:15.680
this fast function, but some unusual category of interest in the past won't fall that
link |
04:02:19.840
quickly.
link |
04:02:20.160
And then that eventually would dominate.
link |
04:02:22.080
So that's another hypothesis, if you want.
link |
04:02:24.240
Some category.
link |
04:02:25.440
So that very outlier specific kind of, yeah, okay.
link |
04:02:28.720
Yeah, yeah, yeah.
link |
04:02:29.520
Like really popular kinds of memories, like probably sexual.
link |
04:02:35.040
In a trillion years, there's some small research institute that tries to randomly select from
link |
04:02:40.240
all possible people in history or something to simulate.
link |
04:02:42.560
Yeah, yeah, yeah.
link |
04:02:46.080
So the question is how big is this research institute and how big is the future in a trillion
link |
04:02:50.640
years, right?
link |
04:02:51.200
And that would be hard to say.
link |
04:02:52.720
But if we just look at the ordinary process by which people simulate recent eras.
link |
04:02:57.680
So if you look at, it's also true for movies and plays and video games,
link |
04:03:02.000
overwhelmingly they're interested in the recent past.
link |
04:03:04.800
There's very few video games where you play someone in the Roman Empire.
link |
04:03:07.440
Right.
link |
04:03:08.160
But even fewer where you play someone in the ancient Egyptian Empire.
link |
04:03:14.320
Yeah, just different.
link |
04:03:15.440
It's just declined very quickly.
link |
04:03:16.560
But every once in a while that's brought back.
link |
04:03:20.880
But yeah, you're right.
link |
04:03:21.840
I mean, just if you look at the mass of entertainment, movies and games, it's focusing on the present
link |
04:03:28.160
recent past.
link |
04:03:29.200
And maybe some, I mean, where does science fiction fit into this?
link |
04:03:32.320
Because it's sort of, what is science fiction?
link |
04:03:39.040
I mean, it's a mix of the past and the present and some kind of manipulation of that to make
link |
04:03:44.000
it more efficient for us to ask deep philosophical questions about humanity.
link |
04:03:48.800
The closest genre to science fiction is clearly fantasy.
link |
04:03:51.520
Fantasy and science fiction in many bookstores and even Netflix or whatever categories, they're
link |
04:03:55.280
just lumped together.
link |
04:03:56.640
So clearly they have a similar function.
link |
04:03:58.240
So the function of fantasy is more transparent than the function of science fiction.
link |
04:04:02.160
So use that as your guide.
link |
04:04:04.240
What's fantasy for is just to take away the constraints of the ordinary world and imagine
link |
04:04:08.800
stories with much fewer constraints.
link |
04:04:10.560
That's what fantasy is.
link |
04:04:11.840
You are much less constrained.
link |
04:04:13.040
What's the purpose to remove constraints?
link |
04:04:14.800
Is it to escape from the harshness of the constraints of the real world?
link |
04:04:19.600
Or is it to just remove constraints in order to explore some, get a deeper understanding
link |
04:04:24.960
of our world?
link |
04:04:26.000
What is it?
link |
04:04:26.800
I mean, why do people read fantasy?
link |
04:04:28.800
I'm not a cheap fantasy reading kind of person.
link |
04:04:34.320
So I need to...
link |
04:04:36.400
One story that sounds plausible to me is that there are sort of these deep story structures
link |
04:04:40.720
that we love and we want to realize.
link |
04:04:43.760
And then many details of the world get in their way.
link |
04:04:46.560
Fantasy takes all those obstacles out of the way and lets you tell the essential hero story
link |
04:04:51.200
or the essential love story, whatever essential story you want to tell.
link |
04:04:53.760
Where the reality and constraints are not in the way.
link |
04:04:59.120
And so science fiction can be thought of as like fantasy, except you're not willing to
link |
04:05:02.640
admit that it can't be true.
link |
04:05:04.480
So the future gives the excuse of saying, well, it could happen.
link |
04:05:09.280
And you accept some more reality constraints for the illusion, at least, that maybe it
link |
04:05:13.840
could really happen.
link |
04:05:16.640
Maybe it could happen.
link |
04:05:18.080
And that, it stimulates the imagination.
link |
04:05:20.080
Imagination is something really interesting about human beings.
link |
04:05:24.880
And it seems also to be an important part of creating really special things is to be
link |
04:05:28.960
able to first imagine them.
link |
04:05:30.960
With you and Nick Bostrom, where do you land on the simulation and all the mathematical
link |
04:05:37.360
ways of thinking about it and just the thought experiment of it?
link |
04:05:41.280
Are we living in a simulation?
link |
04:05:44.480
That was just the discussion we just had.
link |
04:05:46.720
That is, you should grant the possibility of being a simulation.
link |
04:05:50.000
You shouldn't be 100% confident that you're not.
link |
04:05:52.080
You should certainly grant a small probability.
link |
04:05:54.240
The question is, how large is that probability?
link |
04:05:56.160
Are you saying we would be, I misunderstood because I thought our discussion was about
link |
04:06:01.600
replaying things that already happened.
link |
04:06:03.360
Right.
link |
04:06:03.520
But the whole question is, right now, is that what I am?
link |
04:06:08.080
Am I actually a replay from some distant future?
link |
04:06:11.840
But it doesn't necessarily need to be a replay.
link |
04:06:13.680
It could be a totally new.
link |
04:06:15.280
You could be, you don't have to be an NPC.
link |
04:06:17.360
Clearly, I'm in a certain era with a certain kind of world around me.
link |
04:06:20.880
So either this is a complete fantasy or it's a past of somebody else in the future.
link |
04:06:26.480
No, it could be a complete fantasy though.
link |
04:06:28.080
It could be.
link |
04:06:28.640
But then you have to talk about what's the fraction of complete fantasies.
link |
04:06:33.680
I would say it's easier to generate a fantasy than to replay a memory.
link |
04:06:36.880
Right?
link |
04:06:37.120
Oh, but the fraction is important.
link |
04:06:39.840
We just look at the entire history of everything.
link |
04:06:41.600
We just say, sure, but most things are real.
link |
04:06:43.760
Most things aren't fantasies.
link |
04:06:45.200
Therefore, the chance that my thing is real.
link |
04:06:47.040
Right?
link |
04:06:47.440
So the simulation argument works stronger about sort of the past.
link |
04:06:50.560
We say, ah, but there's more future people than there are today.
link |
04:06:53.840
So you being in the past of the future makes you special relative to them,
link |
04:06:57.600
which makes you more likely to be in a simulation.
link |
04:06:59.840
Right?
link |
04:07:00.160
If we're just taking the full count and saying, in all creatures ever,
link |
04:07:03.680
what percentage are in simulations?
link |
04:07:05.120
Probably no more than 10%.
link |
04:07:08.240
So what's the good argument for that?
link |
04:07:10.000
That most things are real?
link |
04:07:11.680
Yeah.
link |
04:07:12.080
Because Bostrom says the other way, right?
link |
04:07:14.240
In a competitive world, in a world where people have to work and have to get things done,
link |
04:07:20.240
then they have a limited budget for leisure.
link |
04:07:24.080
And so, you know, leisure things are less common than work things, like real things.
link |
04:07:29.600
Right?
link |
04:07:29.840
But if you look at the stretch of history in the universe, doesn't the ratio of leisure increase?
link |
04:07:41.040
Isn't that where we're headed, back to the forager?
link |
04:07:45.360
Right, but now we're looking at the fraction of leisure,
link |
04:07:47.360
which takes the form of something where the person doing the leisure doesn't realize it.
link |
04:07:51.920
Now there could be some fraction of that, but that's much smaller, right?
link |
04:07:55.200
Yeah.
link |
04:07:57.440
Clueless foragers.
link |
04:07:58.880
Or somebody is clueless in the process of supporting this leisure, right?
link |
04:08:02.640
It might not be the person leisuring, somebody,
link |
04:08:04.320
they're a supporting character or something,
link |
04:08:05.600
but still that's got to be a pretty small fraction of leisure.
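Hanson's back-of-the-envelope estimate here is a product of fractions: leisure is a limited share of all activity, and only a small slice of leisure is a simulation whose inhabitants don't know it. The specific numbers below are illustrative assumptions, not figures from the conversation, except the roughly 10% ceiling he states earlier.

```python
# Sketch of the competitive-world simulation-fraction argument.
# Both parameter values are assumptions chosen for illustration only.
leisure_share = 0.10    # assumed: fraction of all activity that is leisure
clueless_share = 0.20   # assumed: fraction of leisure where a participant is unaware

# Fraction of all creatures ever who are unknowing simulation inhabitants.
simulated_fraction = leisure_share * clueless_share

print(f"Estimated fraction in simulations: {simulated_fraction:.0%}")
```

Under these assumed shares the estimate lands well below the ~10% ceiling mentioned in the conversation, which is the shape of the argument rather than a precise number.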
link |
04:08:07.760
You mentioned that children are one of the things that are a source of meaning.
link |
04:08:13.760
Broadly speaking, then let me ask the big question.
link |
04:08:16.320
What's the meaning of this whole thing?
link |
04:08:19.040
Robin, meaning of life.
link |
04:08:21.120
What is the meaning of life?
link |
04:08:23.200
We talked about alien civilizations, but this is the one we got.
link |
04:08:27.600
Where are the aliens?
link |
04:08:28.800
Where are the humans?
link |
04:08:30.960
We seem to be conscious, able to introspect.
link |
04:08:35.040
Why are we here?
link |
04:08:37.280
This is the thing I told you before about how we can predict that
link |
04:08:40.720
future creatures will be different from us.
link |
04:08:43.840
Our preferences are this amalgam of various randomly patched-together
link |
04:08:51.200
preferences about thirst and sex and sleep and attention and all these sorts of things.
link |
04:08:57.360
So we don't understand that very well.
link |
04:08:59.600
It's not very transparent and it's a mess, right?
link |
04:09:03.360
That is the source of our motivation.
link |
04:09:05.840
That is how we were made and how we are induced to do things.
link |
04:09:09.520
But we can't summarize it very well and we don't even understand it very well.
link |
04:09:13.760
That's who we are.
link |
04:09:15.040
And often we find ourselves in a situation where we don't feel very motivated.
link |
04:09:18.080
We don't know why.
link |
04:09:18.880
In other situations, we find ourselves very motivated and we don't know why either.
link |
04:09:24.480
And so that's the nature of being a human of the sort that we are because
link |
04:09:29.200
even though we can think abstractly and reason abstractly, this package
link |
04:09:32.400
of motivations is just opaque and a mess.
link |
04:09:34.640
And that's what it means to be a human today.
link |
04:09:39.360
Our motivation is this mess; we can't very well tell the meaning of our life.
link |
04:09:42.160
But our descendants will be different.
link |
04:09:44.800
They will actually know exactly what they want.
link |
04:09:48.160
And it will be to have more descendants.
link |
04:09:50.960
That will be the meaning for them.
link |
04:09:52.640
Well, it's funny that you have the certainty.
link |
04:09:54.560
You have more certainty.
link |
04:09:56.160
You have more transparency about our descendants than you do about your own self.
link |
04:10:01.840
Right.
link |
04:10:02.400
So it's really interesting to think, because you mentioned this about love,
link |
04:10:07.920
that something that's fundamental about love is this opaqueness that we're not able
link |
04:10:13.680
to really introspect what the heck it is or all the feelings, the complex feelings.
link |
04:10:19.280
And that's true about many of our motivations.
link |
04:10:21.360
And that's what it means to be human of the 20th and the 21st century variety.
link |
04:10:28.480
Why is that not a feature that we will choose to preserve in civilization then?
link |
04:10:35.840
This opaqueness, put another way, mystery, maintaining a sense of mystery
link |
04:10:40.480
about ourselves and about those around us.
link |
04:10:43.360
Maybe that's a really nice thing to have.
link |
04:10:45.360
Maybe.
link |
04:10:46.080
But, so, I mean, this is the fundamental issue in analyzing the future.
link |
04:10:50.880
What will set the future?
link |
04:10:52.800
One theory about what will set the future is, what do we want the future to be?
link |
04:10:56.880
What do we want the future to be?
link |
04:10:58.480
So under that theory, we should sit and talk about what we want the future to be,
link |
04:11:01.440
have some conferences, have some conventions, discussion things, vote on it maybe,
link |
04:11:05.920
and then hand it off to the implementation people to make the future the way we've
link |
04:11:09.840
decided it should be.
link |
04:11:12.400
That's not the actual process that's changed the world over history up to this point.
link |
04:11:16.720
It has not been the result of us deciding what we want and making it happen.
link |
04:11:21.360
In our individual lives, we can do that.
link |
04:11:23.360
We might decide what career we want or where we want to live, who we want to live with.
link |
04:11:26.800
In our individual lives, we often do slowly make our lives better according to our plan
link |
04:11:31.520
and our things, but that's not the whole world.
link |
04:11:34.400
The whole world so far has mostly been a competitive world where things happen if
link |
04:11:38.880
anybody anywhere chooses to adopt them and they have an advantage.
link |
04:11:42.240
And then it spreads and other people are forced to adopt it by competitive pressures.
link |
04:11:46.240
So that's the kind of analysis I can use to predict the future.
link |
04:11:49.280
And I do use that to predict the future.
link |
04:11:50.800
It doesn't tell us it'll be a future we like.
link |
04:11:52.560
It just tells us what it'll be.
link |
04:11:54.640
And it'll be one where we're trying to maximize the number of our descendants.
link |
04:11:57.840
And we know that abstractly and directly.
link |
04:12:00.400
And it's not opaque.
link |
04:12:01.600
With some probability that's nonzero, that will lead us to become grabby, expanding
link |
04:12:09.920
aggressively out into the cosmos until we meet other aliens.
link |
04:12:13.680
The timing isn't clear.
link |
04:12:14.720
We might become grabby and then this happens.
link |
04:12:17.040
The grabbiness and this transparency are both the result of competition, but it's less
link |
04:12:21.280
clear which happens first.
link |
04:12:24.240
Does this future excite you or scare you?
link |
04:12:26.640
How do you feel about this whole thing?
link |
04:12:28.080
Again, as I told you, compared to a dead cosmology, at least it's energizing to have
link |
04:12:33.120
a living story with real actors and characters and agendas, right?
link |
04:12:36.800
Yeah.
link |
04:12:37.280
And that's one hell of a fun universe to live in.
link |
04:12:40.720
Robin, you're one of the most fascinating, fun people to talk to.
link |
04:12:44.480
Brilliant, humble, systematic in your analysis.
link |
04:12:48.800
Hold on to my wallet here.
link |
04:12:49.840
What's he looking for?
link |
04:12:50.880
I already stole your wallet long ago.
link |
04:12:52.960
I really, really appreciate you spending your valuable time with me.
link |
04:12:55.440
I hope we get a chance to talk many more times in the future.
link |
04:12:59.600
Thank you so much for sitting down.
link |
04:13:01.360
Thank you.
link |
04:13:03.200
Thanks for listening to this conversation with Robin Hanson.
link |
04:13:05.920
To support this podcast, please check out our sponsors in the description.
link |
04:13:09.920
And now let me leave you with some words from Ray Bradbury.
link |
04:13:13.840
We are an impossibility in an impossible universe.
link |
04:13:17.840
Thank you for listening and hope to see you next time.