Nick Bostrom: Simulation and Superintelligence | Lex Fridman Podcast #83



link |
00:00:00.000
The following is a conversation with Nick Bostrom, a philosopher at University of Oxford
link |
00:00:05.440
and the director of the Future of Humanity Institute. He has worked on fascinating and
link |
00:00:10.560
important ideas in existential risk, simulation hypothesis, human enhancement ethics,
link |
00:00:16.800
and the risks of superintelligent AI systems, including in his book, Superintelligence.
link |
00:00:23.040
I can see talking to Nick multiple times in this podcast, many hours each time,
link |
00:00:27.520
because he has done some incredible work in artificial intelligence, in technology space,
link |
00:00:33.520
science, and really philosophy in general. But we'll have to start somewhere.
link |
00:00:38.640
This conversation was recorded before the outbreak of the coronavirus pandemic,
link |
00:00:43.360
that both Nick and I, I'm sure, will have a lot to say about next time we speak.
link |
00:00:48.560
And perhaps that is for the best, because the deepest lessons can be learned only in retrospect
link |
00:00:54.400
when the storm has passed. I do recommend you read many of his papers on the topic of existential
link |
00:01:00.000
risk, including the technical report titled Global Catastrophic Risks Survey that he coauthored with
link |
00:01:07.040
Anders Sandberg. For everyone feeling the medical, psychological, and financial burden of this crisis,
link |
00:01:13.520
I'm sending love your way. Stay strong. We're in this together. We'll beat this thing.
link |
00:01:18.800
This is the Artificial Intelligence Podcast. If you enjoy it, subscribe on YouTube,
link |
00:01:25.440
review it with five stars on Apple Podcast, support on Patreon, or simply connect with
link |
00:01:29.760
me on Twitter at Lex Fridman, spelled F R I D M A N. As usual, I'll do one or two minutes of ads now
link |
00:01:37.360
and never any ads in the middle that can break the flow of the conversation. I hope that works
link |
00:01:41.680
for you and doesn't hurt the listening experience. This show is presented by Cash App, the number
link |
00:01:48.160
one finance app in the App Store. When you get it, use code LexPodcast. Cash App lets you send money
link |
00:01:54.640
to friends by Bitcoin and invest in the stock market with as little as $1. Since Cash App does
link |
00:02:00.800
fractional share trading, let me mention that the order execution algorithm that works behind the
link |
00:02:05.920
scenes to create the abstraction of fractional orders is an algorithmic marvel. So big props to
link |
00:02:11.760
the Cash App engineers for solving a hard problem that in the end provides an easy interface that
link |
00:02:17.280
takes a step up to the next layer of abstraction over the stock market, making trading more accessible
link |
00:02:22.240
for new investors and diversification much easier. So again, if you get Cash App from the App Store,
link |
00:02:29.120
Google Play, and use the code LexPodcast, you get $10 and Cash App will also donate $10 to FIRST,
link |
00:02:36.880
an organization that is helping to advance robotics and STEM education for young people around the
link |
00:02:42.080
world. And now here's my conversation with Nick Bostrom. At the risk of asking the Beatles to
link |
00:02:50.560
play yesterday or the Rolling Stones to play satisfaction, let me ask you the basics. What
link |
00:02:56.480
is the simulation hypothesis? That we are living in a computer simulation. What is a computer
link |
00:03:03.440
simulation? How are we supposed to even think about that? Well, so the hypothesis is meant to be
link |
00:03:10.000
understood in a literal sense, not that we can kind of metaphorically view the universe as an
link |
00:03:17.600
information processing physical system, but that there is some advanced civilization who built a
link |
00:03:25.200
lot of computers and that what we experience is an effect of what's going on inside one of those
link |
00:03:33.040
computers so that the world around us, our own brains, everything we see and perceive and think
link |
00:03:39.760
and feel would exist because this computer is running certain programs. So do you think of this
link |
00:03:49.920
computer as something similar to the computers of today, these deterministic sort of Turing machine
link |
00:03:56.480
type things? Is that what we're supposed to imagine or we're supposed to think of something more
link |
00:04:02.480
like a quantum mechanical system, something much bigger, something much more complicated,
link |
00:04:09.040
something much more mysterious from our current perspective? The ones we have today would do
link |
00:04:14.000
fine. Bigger, certainly; you'd need more memory and more processing power. I don't think
link |
00:04:19.120
anything else would be required. Now, it might well be that they do have additional, maybe they
link |
00:04:24.640
have quantum computers and other things that would give them even more oomph. It seems kind of
link |
00:04:29.920
plausible, but I don't think it's a necessary assumption in order to get to the conclusion
link |
00:04:37.280
that a technologically mature civilization would be able to create these kinds of computer
link |
00:04:43.280
simulations with conscious beings inside them. So do you think the simulation hypothesis is an idea
link |
00:04:50.000
that's most useful in philosophy, computer science, physics, sort of where do you see it
link |
00:04:58.240
having valuable kind of starting point in terms of the thought experiment of it?
link |
00:05:05.040
Is it useful? I guess it's more informative and interesting and maybe important,
link |
00:05:13.200
but it's not designed to be useful for something else.
link |
00:05:16.400
Okay, interesting, sure. But is it philosophically interesting or is there some kind of implications
link |
00:05:22.880
of computer science and physics? I think not so much for computer science or physics per se.
link |
00:05:29.040
Certainly it would be of interest in philosophy, I think also to say cosmology or physics in as
link |
00:05:37.440
much as you're interested in the fundamental building blocks of the world and the rules that
link |
00:05:44.080
govern it. If we are in a simulation, there is then the possibility that say physics at the
link |
00:05:50.160
level of the computer running the simulation could be different from the physics governing
link |
00:05:57.600
phenomena in the simulation. So I think it might be interesting from the point of view of religion or
link |
00:06:04.800
just for trying to figure out what the heck is going on. So we mentioned the simulation hypothesis
link |
00:06:13.600
so far. There is also the simulation argument, which I tend to distinguish from it. So the simulation
link |
00:06:20.160
hypothesis is that we are living in a computer simulation; the simulation argument is this argument that
link |
00:06:24.320
tries to show that one of three propositions is true, one of which is the simulation hypothesis,
link |
00:06:30.640
but there are two alternatives in the original simulation argument, which we can get to.
link |
00:06:36.400
Yeah, let's go there. By the way, confusing terms because people will, I think, probably naturally
link |
00:06:42.480
think simulation argument equals simulation hypothesis, just terminology wise. But let's go
link |
00:06:47.760
there. So simulation hypothesis means that we are living in a simulation. The hypothesis that we're
link |
00:06:52.560
living in a simulation, simulation argument has these three complete possibilities that cover
link |
00:06:59.520
all possibilities. So what are they? Yeah, so it's like a disjunction. It says at least one of these
link |
00:07:03.760
three is true, although it doesn't on its own tell us which one. So the first one is that almost all
link |
00:07:14.400
civilizations at their current stage of technological development
link |
00:07:17.520
go extinct before they reach technological maturity. So there is some great filter
link |
00:07:27.440
that makes it so that basically none of the civilizations throughout, you know,
link |
00:07:35.440
maybe vast cosmos will ever get to realize the full potential of technological development.
link |
00:07:42.080
And this could be theoretically speaking, this could be because most civilizations kill themselves
link |
00:07:48.160
too eagerly or destroy themselves too eagerly, or it might be super difficult to build a simulation.
link |
00:07:54.960
So the span of time. Theoretically, it could be both. Now, I think it looks like we would
link |
00:08:01.360
technologically be able to get there in a time span that is short compared to, say, the lifetime of
link |
00:08:08.640
planets and other sort of astronomical processes. So your intuition is that building the simulation is not...
link |
00:08:16.720
Well, so this is an interesting concept of technological maturity. It's kind of an
link |
00:08:21.600
interesting concept to have for other purposes as well. We can see, even based on our current
link |
00:08:27.120
limited understanding, what some lower bound would be on the capabilities that you could
link |
00:08:33.200
realize by just developing technologies that we already see are possible. So for example,
link |
00:08:40.000
one of my research fellows here, Eric Drexler back in the 80s, studied molecular manufacturing.
link |
00:08:48.240
That is, you could analyze using theoretical tools and computer modeling the performance
link |
00:08:55.440
of various molecularly precise structures that we didn't then and still don't today have the
link |
00:09:01.200
ability to actually fabricate. But you could say that, well, if we could put these atoms together
link |
00:09:06.480
in this way, then the system would be stable and it would rotate at this speed and have these
link |
00:09:12.960
computational characteristics. And he also outlined some pathways that would enable us to get to
link |
00:09:20.320
this kind of molecular manufacturing in the fullness of time. And you could do other studies
link |
00:09:27.280
we've done. You could look at the speed at which say it would be possible to colonize the galaxy
link |
00:09:33.360
if you had mature technology. We have an upper limit, which is the speed of light. We have
link |
00:09:38.800
sort of a lower current limit, which is how fast current rockets go. We know we can go faster than
link |
00:09:43.760
that by just making them bigger and having more fuel and stuff. And you can then start to describe
link |
00:09:51.440
the technological affordances that would exist once a civilization has had enough time to develop.
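To make the "short compared to astronomical processes" point concrete, here is a minimal back-of-envelope sketch. It only assumes rough, commonly cited figures for the Milky Way's size and the Sun's remaining lifetime; the numbers are illustrative, not from the conversation.

```python
# Back-of-envelope arithmetic for "short compared to astronomical processes".
# Rough, commonly cited figures; purely illustrative.

GALAXY_DIAMETER_LY = 100_000        # Milky Way diameter in light years, roughly
SUN_REMAINING_LIFETIME_YR = 5e9     # order of magnitude of the Sun's remaining lifetime

def crossing_time_years(speed_fraction_of_c: float) -> float:
    """Years to traverse the galaxy at a constant fraction of light speed."""
    return GALAXY_DIAMETER_LY / speed_fraction_of_c

for v in (0.01, 0.1, 0.5):
    t = crossing_time_years(v)
    print(f"at {v:.0%} of c: {t:>12,.0f} years, "
          f"or {t / SUN_REMAINING_LIFETIME_YR:.1e} of the Sun's remaining lifetime")
```

Even at one percent of light speed, crossing the galaxy takes on the order of ten million years, a tiny fraction of planetary and stellar lifetimes, which is the sense in which mature technology gets there "quickly."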
link |
00:09:57.760
Even at least those technologies we already know are possible. Then maybe they would discover other
link |
00:10:01.760
new physical phenomena as well that we haven't realized that would enable them to do even more.
link |
00:10:06.960
But at least there is this kind of basic set of capabilities.
link |
00:10:11.920
Can you just linger on that? How do we jump from molecular manufacturing to deep space exploration
link |
00:10:18.720
to mature technology? What's the connection? These would be two examples of technological
link |
00:10:27.600
capability sets that we can have a high degree of confidence are physically possible in our
link |
00:10:34.960
universe, and that a civilization that was allowed to continue to develop its science and technology
link |
00:10:41.680
would eventually attain. We can kind of see the set of breakthroughs that are likely to
link |
00:10:48.560
happen. So you can see, what did you call it, the technological set?
link |
00:10:52.880
With computers, maybe it's easiest. One is, we could just imagine bigger computers
link |
00:10:59.680
using exactly the same parts that we have. So you can kind of scale things that way, right?
link |
00:11:04.160
But you could also make processors a bit faster if you had this molecular nanotechnology that
link |
00:11:10.000
Eric Drexler described. He characterized a kind of crude computer built with these parts
link |
00:11:16.640
that would perform at a million times the human brain while being significantly smaller, the
link |
00:11:22.640
size of a sugar cube. And he made no claim that that's the optimum computing structure.
link |
00:11:29.920
You could build a faster computer that would be more efficient, but at least you could do that
link |
00:11:34.000
if you had the ability to do things that were atomically precise. So you can then combine these
link |
00:11:39.200
two. You could have this kind of nanomolecular ability to build things atom by atom and then
link |
00:11:44.480
say, at the spatial scale that would be attainable through space colonizing technology.
link |
00:11:52.800
You could then start, for example, to characterize a lower bound on the amount of computing power
link |
00:11:58.080
that a technologically mature civilization would have if it could grab resources, you know,
link |
00:12:04.560
planets and so forth and then use this molecular nanotechnology to optimize them for computing.
link |
00:12:10.000
You'd get a very, very high lower bound on the amount of compute.
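As a rough illustration of this lower-bound reasoning, here is a minimal order-of-magnitude sketch. The constants are placeholder estimates in the spirit of Bostrom's 2003 simulation-argument paper, not figures stated in this conversation.

```python
# Order-of-magnitude sketch of the lower bound on compute. The constants are
# placeholder assumptions, not figures given in this conversation.

OPS_PER_SEC_PLANETARY_COMPUTER = 1e42   # assumed output of one planet-sized computer
SECONDS_PER_YEAR = 3.15e7
OPS_PER_ANCESTOR_SIMULATION = 1e36      # assumed cost of simulating all human history
                                        # at roughly neural-level detail

ops_per_year = OPS_PER_SEC_PLANETARY_COMPUTER * SECONDS_PER_YEAR
sims_per_year = ops_per_year / OPS_PER_ANCESTOR_SIMULATION
print(f"~{sims_per_year:.0e} ancestor simulations per year from one planetary computer")

# Even if these placeholders are off by many orders of magnitude, the conclusion that
# a mature civilization could run vast numbers of simulations with a tiny fraction of
# its resources survives, which is the force of the lower-bound argument.
```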
link |
00:12:16.160
So, sorry, I just need to define some terms. So technologically mature civilization is one
link |
00:12:21.280
that took that piece of technology to its lower bound? What is a technologically mature civilization?
link |
00:12:27.360
Well, okay. So that means it's a stronger concept than we really need for the simulation hypothesis.
link |
00:12:31.040
I just think it's interesting in its own right. So it would be the idea that there is
link |
00:12:35.280
some stage of technological development where you've basically maxed out that you developed all
link |
00:12:41.840
those general purpose widely useful technologies that could be developed or at least kind of come
link |
00:12:48.400
very close to the, you know, 99.9% there or something. So that's an independent question.
link |
00:12:55.040
You can think either that there is such a ceiling or you might think it just goes,
link |
00:12:59.280
the technology tree just goes on forever. Where does your sense fall?
link |
00:13:04.720
I would guess that there is a maximum that you would start to asymptote towards.
link |
00:13:09.840
So new things won't keep springing up. New ceilings.
link |
00:13:13.760
In terms of basic technological capabilities, I think there is like a finite set of those
link |
00:13:19.840
that can exist in this universe. Moreover, I mean, I wouldn't be that surprised if we actually
link |
00:13:27.920
reached close to that level fairly shortly after we have, say, machine superintelligence.
link |
00:13:33.120
So I don't think it would take millions of years for a human originating civilization
link |
00:13:39.120
to begin to do this. I think it's like more likely to happen on historical timescales.
link |
00:13:45.760
But that's an independent speculation from the simulation argument.
link |
00:13:50.640
I mean, for the purpose of the simulation argument, it doesn't really matter whether it
link |
00:13:54.640
goes indefinitely far up or whether there is a ceiling, as long as we know we can at least
link |
00:13:59.120
get to a certain level. And it also doesn't matter whether that's going to happen in 100 years
link |
00:14:04.560
or 5,000 years or 50 million years. Like the timescales really don't make any difference
link |
00:14:10.560
for the simulation argument. Can you linger on that a little bit? There's a big difference between 100 years
link |
00:14:15.360
and 10 million years. So does it really not matter? Because you just said
link |
00:14:22.000
it doesn't matter if we jump to scales beyond historical scales. So we described that. So
link |
00:14:31.200
for the simulation argument, doesn't it matter that if it takes 10 million years,
link |
00:14:40.720
it gives us a lot more opportunity to destroy civilization in the meantime?
link |
00:14:44.640
Yeah. Well, so it would shift around the probabilities between these three alternatives.
link |
00:14:48.880
Right. That is, if we are very, very far away from being able to create these simulations,
link |
00:14:54.560
if it's like say the billions of years into the future, then it's more likely that we will fail
link |
00:14:58.960
ever to get there. There's more time for us to kind of, you know, go extinct along the way.
link |
00:15:03.840
And so it's similarly for other civilizations. So it is important to think about how hard it is
link |
00:15:08.320
to build a simulation. In terms of figuring out which of the disjuncts. But for the simulation
link |
00:15:15.200
argument itself, which is agnostic as to which of these three alternatives is true.
link |
00:15:19.760
Yeah, okay.
link |
00:15:21.600
You don't have to, like the simulation argument would be true whether or not,
link |
00:15:26.000
we thought this could be done in 500 years or it would take 500 million years.
link |
00:15:29.840
No, for sure. The simulation argument stands. I mean, I'm sure there might be some people who
link |
00:15:33.520
oppose it, but it doesn't matter. I mean, it's very nice those three cases cover it. But the fun part
link |
00:15:40.320
is at least not saying what the probabilities are, but kind of thinking about kind of
link |
00:15:45.360
intuitive reasoning about like what's more likely, what are the kind of things that would make
link |
00:15:51.360
some of the arguments less or more likely. But let's actually, I don't think we went through them.
link |
00:15:56.400
So number one is we destroy ourselves before we ever create simulation.
link |
00:16:00.640
Right. So that's kind of sad, but we have to think not just what might destroy us.
link |
00:16:07.200
I mean, so there could be some whatever disasters or meteorites slamming the earth
link |
00:16:13.680
a few years from now that could destroy us, right? But you'd have to postulate
link |
00:16:19.520
in order for this first disjunct to be true that almost all civilizations throughout
link |
00:16:26.720
the cosmos also failed to reach technological maturity.
link |
00:16:31.360
And the underlying assumption there is that there is likely a very large number of other
link |
00:16:37.120
intelligent civilizations. Well, if there are, yeah, then they would virtually all have to
link |
00:16:43.360
succumb in the same way. I mean, then that leads off on another... I guess there are a lot of little
link |
00:16:48.560
digressions that are interesting. Definitely, let's go there. Let's go there. I'll keep dragging
link |
00:16:51.920
us back. Well, there are these, there is a set of basic questions that always come up
link |
00:16:56.720
in conversations with interesting people. Yeah. Like the Fermi Paradox. Like there's like,
link |
00:17:03.120
you could almost define whether a person is interesting, whether at some point the question
link |
00:17:07.680
of the Fermi Paradox comes up. Well, so, for what it's worth, it looks to me that the universe is very
link |
00:17:15.120
big. I mean, in fact, according to the most popular current cosmological theories, it is infinitely big.
link |
00:17:22.480
And so then it would follow pretty trivially that it would contain a lot of other civilizations.
link |
00:17:28.560
In fact, infinitely many. If you have some local stochasticity and infinitely many,
link |
00:17:35.040
it's like, you know, infinitely many lumps of matter, one next to another, there's kind
link |
00:17:38.960
of random stuff in each one, then you're going to get all possible outcomes with probability
link |
00:17:43.840
one, infinitely repeated.
link |
00:17:50.880
So then certainly there would be a lot of extraterrestrials out there.
link |
00:17:55.920
Even short of that, if the universe is very big, there might be a finite but large number.
link |
00:18:02.320
If we were literally the only one, yeah, then of course,
link |
00:18:06.880
if we went extinct, then all civilizations at our current stage would have gone extinct before
link |
00:18:13.520
becoming technologically mature. So then it kind of becomes trivially true that a very high fraction
link |
00:18:19.360
of those went extinct. But if we think there are many, I mean, it's interesting because there are
link |
00:18:24.880
certain things that possibly could kill us, like if you look at existential risks,
link |
00:18:33.840
and it might be different; like, the best answer to what would be most likely to kill us
link |
00:18:38.960
might be a different answer than the best answer to the question. If there is something that kills
link |
00:18:44.720
almost everyone, what would that be? Because that would have to be some risk factor that was kind of
link |
00:18:50.320
uniform over all possible civilizations. Yeah. So in this, for the sake of this argument,
link |
00:18:56.160
you have to think about not just us, but like every civilization dies out before they create
link |
00:19:02.240
the simulation or something very close to everybody. Okay. So what's number two in the
link |
00:19:09.760
Well, so number two is the convergence hypothesis, that is, that maybe a lot of these
link |
00:19:15.360
civilizations do make it through to technological maturity. But out of those who do get there,
link |
00:19:21.200
they all lose interest in creating these simulations. So they just they have the capability
link |
00:19:29.040
of doing it, but they choose not to. Yeah, not just a few of them decide not to, but
link |
00:19:34.640
you know, out of a million, maybe not even a single one of them would do it.
link |
00:19:41.440
And I think when you say lose interest, that sounds like unlikely because it's like they
link |
00:19:48.080
get bored or whatever, but it could be so many possibilities within that. I mean, losing interest
link |
00:19:55.920
could be it could be anything from it being exceptionally difficult to do to fundamentally
link |
00:20:05.360
changing the sort of the fabric of reality if you do it, to ethical concerns; all those kinds
link |
00:20:12.240
of things could be exceptionally strong pressures. Well, certainly, I mean, yeah, ethical concerns.
link |
00:20:18.320
I mean, not really too difficult to do. I mean, in a sense, that's the first assumption that you
link |
00:20:24.080
get to technological maturity where you would have the ability using only a tiny fraction of your
link |
00:20:29.360
resources to create many, many simulations. So it wouldn't be the case that they would need to
link |
00:20:36.960
spend half of their GDP forever in order to create one simulation. And they had this like
link |
00:20:41.840
difficult debate about whether they should, you know, invest half of their GDP for this.
link |
00:20:46.160
It would more be like, well, if any little fraction of the civilization feels like doing this
link |
00:20:50.800
at any point during maybe their millions of years of existence, then there would be millions
link |
00:20:57.920
of simulations. But certainly, there could be many conceivable reasons for why there
link |
00:21:06.160
would be this convergence, many possible reasons for not running ancestor simulations or other
link |
00:21:10.960
computer simulations, even if you could do so cheaply. By the way, what's an ancestor simulation?
link |
00:21:16.960
Well, that would be a type of computer simulation that would contain people like those we think
link |
00:21:24.800
have lived on our planet in the past and like ourselves in terms of the types of experiences
link |
00:21:30.000
they have. And where those simulated people are conscious. So like, not just simulated in the
link |
00:21:35.760
same sense that a non player character would be simulated in the current computer game where
link |
00:21:42.560
it kind of has like an avatar body and then a very simple mechanism that moves it forward or
link |
00:21:48.320
backwards, but something where the simulated being has a brain, let's say, that is simulated
link |
00:21:56.080
at the sufficient level of granularity that it would have the same subjective experiences as we
link |
00:22:02.800
have. So where does consciousness fit into this? Do you think simulation, like is there
link |
00:22:08.640
different ways to think about how this can be simulated? Just like you're talking about now.
link |
00:22:14.080
Do we have to simulate each brain within the larger simulation? Is it enough to simulate
link |
00:22:22.080
just the brain, just the minds and not the simulation, not the big universe itself? Like,
link |
00:22:27.600
is there different ways to think about this? Yeah, I guess there is a kind of premise
link |
00:22:32.560
in the simulation argument, rolled in from philosophy of mind, that is that it would be
link |
00:22:40.240
possible to create a conscious mind in a computer. And that what determines whether
link |
00:22:47.520
some system is conscious or not is not like whether it's built from organic biological
link |
00:22:54.160
neurons, but maybe something like what the structure of the computation is that it implements.
link |
00:22:58.480
Right. So we can discuss that if we want, but I would put forward, as far as my view goes,
link |
00:23:05.760
that it would be sufficient, say, if you had a computation that was identical to the computation
link |
00:23:15.040
in the human brain down to the level of neurons. So if you had a simulation with 100 billion neurons
link |
00:23:19.760
connected in the same way as in the human brain, and you then roll that forward with the same kind of
link |
00:23:25.520
synaptic weights and so forth, so you actually had the same behavior coming out of this
link |
00:23:31.280
as a human with that brain, then I think that would be conscious. Now, it's possible you could
link |
00:23:37.120
also generate consciousness without having that detailed a simulation. There, I'm getting more
link |
00:23:46.000
uncertain exactly how much you could simplify or abstract away.
link |
00:23:50.160
Can you linger on that? What do you mean? So I missed where you're placing consciousness in
link |
00:23:55.760
the second. Well, so if you are a computationalist, do you think that what creates consciousness is
link |
00:24:01.920
the implementation of a computation? So some property, emergent property of the computation
link |
00:24:07.200
itself is the idea. Yeah, you could say that. But then the question is, what's the class of
link |
00:24:13.440
computations such that when they are run, consciousness emerges? So if you just have
link |
00:24:19.120
like something that adds one plus one plus one plus one, like a simple computation, you think maybe
link |
00:24:24.560
that's not going to have any consciousness. If on the other hand, the computation is one, like our
link |
00:24:31.760
human brains are performing, where as part of the computation, there is like, you know, a global
link |
00:24:40.080
workspace, a sophisticated attention mechanism, there is like self representations of other
link |
00:24:46.400
cognitive processes, and a whole lot of other things, then that possibly would be conscious. And in
link |
00:24:53.040
fact, if it's exactly like ours, I think definitely it would. But exactly how much less than the full
link |
00:25:00.480
computation that the human brain is performing would be required is a little bit, I think,
link |
00:25:06.000
of an open question. And you asked another interesting question as well, which is, would it be
link |
00:25:14.400
sufficient to just have, say, the brain or would you need the environment in order to generate
link |
00:25:23.200
the same kind of experiences that we have? And there is a bunch of stuff we don't know. I mean,
link |
00:25:29.280
if you look at, say, current virtual reality environments, one thing that's clear is that
link |
00:25:36.160
we don't have to simulate all details of them all the time in order for, say, the human player to
link |
00:25:42.560
have the perception that there is a full reality in there. You can have, say, procedurally generated
link |
00:25:47.280
virtual reality that might only render a scene when it's actually within the view of the player character.
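A minimal sketch of that render-on-demand idea follows; the observe function and the region model are hypothetical, for illustration only, not any particular engine's API.

```python
# Minimal render-on-demand sketch: fine detail is generated lazily and cached the
# first time a region is observed; unobserved regions get a cheap coarse summary.

import functools
import random

@functools.lru_cache(maxsize=None)
def fine_detail(region_id: int) -> tuple:
    """Expensive fine-grained state for a region, built only on first observation."""
    rng = random.Random(region_id)          # seeded, so revisits stay consistent
    return tuple(rng.random() for _ in range(1_000))

def observe(region_id: int, in_view: bool):
    if in_view:
        return fine_detail(region_id)       # full detail for whatever is being looked at
    return f"coarse summary of region {region_id}"   # cheap stand-in otherwise

print(observe(7, in_view=False))            # no fine detail ever computed
print(len(observe(7, in_view=True)))        # 1000 values, computed once and cached
```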
link |
00:25:54.480
And so similarly, if this environment that we perceive is simulated, it might be that
link |
00:26:05.280
all of the parts that come into our view are rendered at any given time. And a lot of aspects
link |
00:26:10.880
that never come into view, say, the details of this microphone I'm talking into exactly what
link |
00:26:17.120
each atom is doing at any given point in time might not be part of the simulation, only a more
link |
00:26:23.680
coarse grained representation. So that to me is actually from an engineering perspective, why the
link |
00:26:29.600
simulation hypothesis is really interesting to think about, is how much, how difficult is it to
link |
00:26:37.040
fake, sort of, in a virtual reality context? I don't know if fake is the right word, but to construct
link |
00:26:42.720
a reality that is sufficiently real to us to be immersive in the way that the physical world is,
link |
00:26:51.120
I think that's actually probably an answerable question of psychology, of computer science,
link |
00:26:58.320
of how, where's the line where it becomes so immersive that you don't want to leave that
link |
00:27:06.160
world? Yeah, or that you don't realize while you're in it that it is a virtual world.
link |
00:27:12.560
Yeah, those are actually two questions. Yours is more sort of the good question about the realism.
link |
00:27:17.920
But mine, from my perspective, what's interesting is it doesn't have to be real, but how can we
link |
00:27:26.320
construct a world that we wouldn't want to leave? Yeah, I mean, I think that might be too low a bar.
link |
00:27:31.840
I mean, if you think, say, when people first had Pong or something like that, I'm sure there were
link |
00:27:37.600
people who wanted to keep playing it for a long time because it was fun and they wanted to be in
link |
00:27:42.320
this little world. I'm not sure we would say it's immersive. I mean, I guess in some sense it is,
link |
00:27:48.000
but like an absorbing activity doesn't even have to be. But they left that world though.
link |
00:27:52.400
So I think that bar is deceivingly high. So you can play Pong or Starcraft or whatever
link |
00:28:04.720
more sophisticated games for hours, for months, while World of Warcraft could be a big addiction,
link |
00:28:12.080
but eventually they escaped that. So I mean, when it's absorbing enough that you would spend your
link |
00:28:17.680
entire, you would choose to spend your entire life in there. And then thereby changing the concept
link |
00:28:23.040
of what reality is, because your reality becomes the game, not because you're fooled,
link |
00:28:31.040
but because you've made that choice. Yeah, I mean, people might have different preferences
link |
00:28:37.760
regarding that. Some might, even if you had any perfect virtual reality,
link |
00:28:42.960
might still prefer not to spend the rest of their lives there. I mean, in philosophy,
link |
00:28:50.720
there's this experience machine thought experiment. Have you come across this?
link |
00:28:55.680
So Robert Nozick had this thought experiment where you imagine some crazy,
link |
00:29:02.400
super duper neuroscientists of the future have created a machine that could give you any experience
link |
00:29:07.520
you want if you step in there. And for the rest of your life, you can kind of preprogram it
link |
00:29:12.960
in different ways. So your fondest dreams could come true. You could whatever you dream, you want
link |
00:29:22.800
to be a great artist, a great lover, like have a wonderful life, all of these things. If you step
link |
00:29:29.600
into the experience machine will be your experiences constantly happy. But would you kind of disconnect
link |
00:29:37.360
from the rest of reality and you would float there in a tank. And so Nozick thought that most
link |
00:29:44.080
people would choose not to enter the experience machine. I mean, many might want to go there for
link |
00:29:50.400
a holiday, but they wouldn't want to check out of existence permanently. And so he thought that
link |
00:29:55.200
was an argument against certain views of value, according to which what we value is a function of what
link |
00:30:02.640
we experience. Because in the experience machine, you could have any experience you want. And yet,
link |
00:30:08.240
many people would think there would not be much value. So therefore, what we value depends on
link |
00:30:14.800
other things than what we experience. So okay, can you take that argument further? What about
link |
00:30:22.080
the fact that maybe what we value is the up and down of life. So you could have up and downs in
link |
00:30:26.480
the experience machine, right? But what can't you have in the experience machine? Well, I mean,
link |
00:30:31.680
I mean, that then becomes an interesting question to explore. But for example, real connection
link |
00:30:38.000
with other people. If the experience machine is a solo machine, where it's only you,
link |
00:30:42.800
that's something you wouldn't have there. You would have this subjective experience that
link |
00:30:46.560
would be, like, fake people. But if you gave somebody flowers, there wouldn't be anybody there
link |
00:30:52.800
who actually got happy, it would just be a little simulation of somebody smiling. But the
link |
00:30:58.560
simulation would not be the kind of simulation I'm talking about in the simulation argument where
link |
00:31:02.400
the simulated creature is conscious, it would just be a kind of smiley face that would look
link |
00:31:06.960
perfectly real to you. So we're now drawing a distinction between appear to be perfectly real
link |
00:31:13.520
and actually being real. Yeah. So that could be one thing. I mean, like a big impact on history,
link |
00:31:20.320
maybe it's also something you won't have if you check into this experience machine. So some people
link |
00:31:26.000
might actually feel the life I want to have for me is one where I have a big positive impact on
link |
00:31:34.080
how history unfolds. So you could kind of explore these different possible explanations for why it
link |
00:31:42.000
is you wouldn't want to go into the experience machine, if that's what you feel.
link |
00:31:47.680
And one interesting observation regarding this Nozick thought experiment and the conclusions
link |
00:31:53.280
you wanted to draw from it is how much of it is a kind of status quo effect. So a lot of people might not
link |
00:31:59.680
want to give up their current reality to plug into this dream machine. But if they instead were told,
link |
00:32:09.840
well, what you've experienced up to this point was a dream. Now, do you want to disconnect from this
link |
00:32:18.400
and enter the real world when you have no idea maybe what the real world is? Or maybe you could
link |
00:32:22.960
say, well, you're actually a farmer in Peru growing peanuts and you could live for the rest of your
link |
00:32:30.160
life in this. Or would you want to continue your dream life as Lex Fridman, going around the
link |
00:32:39.360
world, making podcasts and doing research? So if the status quo was that they were actually in
link |
00:32:47.600
the experience machine, I think a lot of people might then prefer to live the life that they are
link |
00:32:53.040
familiar with rather than sort of bail out into. So essentially the change itself, the leap,
link |
00:32:59.040
whatever. Yeah, so it might not be so much the reality itself that we're after, but it's more
link |
00:33:03.840
that we are maybe involved in certain projects and relationships. And we have, you know, a self
link |
00:33:09.840
identity, and these things that our values are kind of connected with, carrying that forward.
link |
00:33:14.480
And then whether it's inside a tank or outside a tank in Peru or whether inside a computer,
link |
00:33:21.520
outside a computer, that's kind of less important to what we ultimately care about.
link |
00:33:27.520
Yeah, so just to linger on it, it is interesting. I find maybe people are different, but I find
link |
00:33:35.760
myself quite willing to take the leap to the farmer in Peru, especially as the virtual
link |
00:33:41.920
reality system become more realistic. I find that possibility and I think more people would take
link |
00:33:48.480
that leap. But so in this thought experiment, just to make sure we are understanding, so in this
link |
00:33:52.160
case, the farmer in Peru would not be a virtual reality. That would be the real. The real. The
link |
00:33:58.160
real, your life, like before this whole experience machine started. Well, I kind of assumed from
link |
00:34:04.640
that description, you're being very specific, but that kind of idea just like,
link |
00:34:09.280
washes away the concept of what's real. I mean, I'm still a little hesitant about your kind of
link |
00:34:16.000
distinction between real and illusion. Because when you can have an illusion that feels, I mean,
link |
00:34:25.040
that looks real, I mean, what, I don't know how you can definitively say something is real or not.
link |
00:34:31.520
Like what's a good way to prove that something is real in that context?
link |
00:34:35.360
Well, so I guess in this case, it's more a definition. In one case, you're floating in a
link |
00:34:40.240
tank with these wires by the super duper neuroscientists, plugging into your head,
link |
00:34:46.480
giving you the Lex Fridman experiences. In the other, you're actually tilling the soil in
link |
00:34:51.840
Peru, growing peanuts, and then those peanuts are being eaten by other people all around the
link |
00:34:56.640
world who buy the exports. And that's two different possible situations in the one and the
link |
00:35:02.560
same real world that you could choose to occupy.
link |
00:35:07.040
But just to be clear, when you're in a vat with wires and the neuroscientists,
link |
00:35:12.240
you can still go farming in Peru, right? No, well, you could, if you wanted to,
link |
00:35:19.600
you could have the experience of farming in Peru, but there wouldn't actually be any peanuts grown.
link |
00:35:24.320
Well, but what makes a peanut? So a peanut could be grown and you could feed things with that
link |
00:35:36.560
peanut. And why can't all of that be done in a simulation?
link |
00:35:41.520
I hope, first of all, that they actually have peanut farms in Peru. I guess we'll get a lot
link |
00:35:45.920
of comments from angry listeners: I was with you up until the point when you started talking about
link |
00:35:52.640
peanuts. You should know you can't grow those in that climate.
link |
00:35:57.840
No, I mean, I think, in the simulation, there's a sense, an important sense, in
link |
00:36:05.120
which it would all be real. Nevertheless, there is a distinction between inside a simulation and
link |
00:36:11.200
outside a simulation, or in the case of Nozick's thought experiment, whether you're in the vat
link |
00:36:17.120
or outside the vat. And some of those differences may or may not be important. I mean, that comes
link |
00:36:23.120
down to your values and preferences. So if the, if the experience machine only gives you the
link |
00:36:30.640
experience of growing peanuts, but you're the only one in the experience machine...
link |
00:36:35.440
No, but there's other, you can, within the experience machine, others can plug in.
link |
00:36:41.200
Well, there are versions of the experience machine. So in fact, you might want to have
link |
00:36:45.440
distinguish different thought experiments, different versions of it. So in, like in the
link |
00:36:49.520
original thought experiment, maybe it's only you, right? Just you. So, and you think, I wouldn't
link |
00:36:53.600
want to go in there. Well, that tells you something interesting about what you value and what you
link |
00:36:57.200
care about. Then you could say, well, what if you add the fact that there would be other people in
link |
00:37:02.080
there and you would interact with them? Well, it starts to make it more attractive, right?
link |
00:37:06.640
Then you could add in, well, what if you could also have important long term effects on human
link |
00:37:10.800
history in the world and you could actually do something useful, even though you were in there,
link |
00:37:15.120
that makes it maybe even more attractive. Like you could actually have a life that had a purpose
link |
00:37:21.200
and consequences. So as you sort of add more into it, it becomes more similar to the baseline
link |
00:37:30.240
reality that you were comparing it to. Yeah, but I just think inside the experience machine and
link |
00:37:36.480
without taking those steps you just mentioned, you still have an impact on long term history
link |
00:37:43.040
of the creatures that live inside that, of the quote unquote, fake creatures that live inside
link |
00:37:50.480
that experience machine. And that, like at a certain point, if there's a person waiting for
link |
00:37:59.280
you inside that experience machine, maybe your newly found wife, and she dies, she has fears,
link |
00:38:06.640
she has hopes and she exists in that machine when you unplug yourself and plug back in,
link |
00:38:13.520
she's still there, going on about her life. Well, in that case, yeah, she starts to have
link |
00:38:18.000
more of an independent existence. Independent existence. But it depends, I think, on how she's
link |
00:38:22.960
implemented in the experience machine. Take one limit case where all she is is a static picture
link |
00:38:31.120
on the wall, a photograph. So you think, well, I can look at her, but that's it. Then you think,
link |
00:38:38.480
well, it doesn't really matter much what happens to that. Any more than a normal photograph,
link |
00:38:42.800
if you tear it up, it means you can't see it anymore, but you haven't harmed the person
link |
00:38:48.560
whose picture you tore up. But if she's actually implemented, say, at a neural level
link |
00:38:56.320
of detail, so that she's a fully realized digital mind with the same behavioral repertoire as you
link |
00:39:04.000
have, then very possibly she would be a conscious person like you are. And then you would, what you
link |
00:39:10.320
do in this experience machine would have real consequences for how this other mind felt.
link |
00:39:17.440
So you have to specify which of these experience machines you're talking about.
link |
00:39:20.960
I think it's not entirely obvious that it would be possible to have an experience machine that gave
link |
00:39:28.400
you a normal set of human experiences, which include experiences of interacting with other people,
link |
00:39:35.200
without that also generating consciousnesses corresponding to those other people. That is,
link |
00:39:41.760
if you create another entity that you perceive and interact with, that to you looks entirely
link |
00:39:48.320
realistic. Not just when you say hello, they say hello back, but you have a rich interaction
link |
00:39:52.720
many days, deep conversations. Like it might be that the only possible way of implementing that
link |
00:39:59.760
would be one that also as a side effect, instantiated this other person in enough detail
link |
00:40:05.280
that you would have a second consciousness there. I think that's to some extent an open question.
link |
00:40:11.520
So you don't think it's possible to fake consciousness and fake intelligence?
link |
00:40:14.800
Well, it might be. I mean, I think you could certainly fake, if you have a very limited
link |
00:40:19.680
interaction with somebody, you could certainly fake that. If all you have to go on is somebody
link |
00:40:25.600
said hello to you, that's not enough for you to tell whether that was a real person there
link |
00:40:30.320
or a prerecorded message or a very superficial simulation that has no consciousness,
link |
00:40:37.760
because that's something easy to fake. We could already fake it. Now you can record a voice
link |
00:40:41.120
recording. But if you have a richer set of interactions where you're allowed to ask open
link |
00:40:48.320
ended questions and probe from different angles, that you couldn't give canned answers to all of
link |
00:40:54.320
the possible ways that you could probe it, then it starts to become more plausible that the only
link |
00:41:00.080
way to realize this thing in such a way that you would get the right answer from any which angle
link |
00:41:05.440
you probe it would be a way of instantiating it where you also instantiated a conscious mind.
link |
00:41:10.560
Yeah, I'm with you on the intelligence part, but there's something about me that says consciousness
link |
00:41:14.720
is easier to fake. I've recently gotten my hands on a lot of Roombas. Don't ask me why or how.
link |
00:41:22.800
And I've made them, this is just a nice robotic mobile platform for experiments,
link |
00:41:28.320
and I made them scream and/or moan in pain and so on, just to see how I respond to them.
link |
00:41:34.880
And it's just a sort of psychological experiment on myself. And I think they appear conscious to me
link |
00:41:40.960
pretty quickly. Like I, to me, at least my brain can be tricked quite easily. So if I introspect,
link |
00:41:49.760
it's harder for me to be tricked that something is intelligent. So I just have this feeling that
link |
00:41:55.120
inside this experience machine, just saying that you're conscious and having certain qualities
link |
00:42:02.400
of the interaction like being able to suffer, like being able to hurt, like being able to wonder
link |
00:42:08.880
about the essence of your own existence, not actually, I mean, creating the illusion
link |
00:42:15.920
that you're wondering about it is enough to create the feeling of consciousness and
link |
00:42:21.520
the illusion of consciousness. And because of that, create a really immersive experience
link |
00:42:25.920
to where you feel like that is the real world. So you think there's a big gap between
link |
00:42:29.680
appearing conscious and being conscious? Or is it that you think it's very easy to be
link |
00:42:35.120
conscious? I'm not actually sure what it means to be conscious. All I'm saying is the illusion of
link |
00:42:40.400
consciousness is enough for this to create a social interaction that's as good as if the
link |
00:42:49.120
thing was conscious, meaning I'm making it about myself. Right. Yeah. I mean, I guess there are a
link |
00:42:54.240
few difficulties. One is how good the interaction is, which might, I mean, if you don't really care
link |
00:42:57.840
about probing hard for whether the thing is conscious, maybe it would be a satisfactory
link |
00:43:04.960
interaction, whether or not you really thought it was conscious. Now, if you really care about it being
link |
00:43:14.960
conscious inside this experience machine, how easy would it be to fake it? And you say
link |
00:43:22.880
it sounds really easy. But then the question is, would that also mean it's very easy to
link |
00:43:28.400
instantiate consciousness? It's much more widely spread in the world than we have thought. It
link |
00:43:34.400
doesn't require a big human brain with 100 billion neurons. All you need is some system that exhibits
link |
00:43:39.440
basic intentionality and can respond and you already have consciousness. In that case, I guess
link |
00:43:44.160
you still have a close coupling. I guess the cleaner case would be where they can come apart,
link |
00:43:52.000
where you could create the appearance of there being a conscious mind without there actually being
link |
00:43:57.520
another conscious mind. I'm somewhat agnostic exactly where these lines go. I think one
link |
00:44:04.480
observation that makes it plausible that you could have very realistic appearances
link |
00:44:12.080
relatively simply, which also is relevant for the simulation argument. And in terms of thinking
link |
00:44:18.640
about how realistic a virtual reality model would have to be in order for the simulated
link |
00:44:25.040
creature not to notice that anything was awry: well, just think of our own humble brains during
link |
00:44:32.240
the wee hours of the night when we are dreaming. Many times, well, dreams are very immersive,
link |
00:44:38.640
but often you also don't realize that you're in a dream. And that's produced by simple primitive
link |
00:44:46.640
three pound lumps of neural matter effortlessly. So if a simple brain like this can create the
link |
00:44:53.520
virtual reality that seems pretty real to us, then how much easier would it be for a super
link |
00:45:01.280
intelligent civilization with planetary sized computers optimized over the eons to create
link |
00:45:06.800
a realistic environment for you to interact with? Yeah, by the way, behind that intuition
link |
00:45:13.600
is that our brain is not that impressive relative to the possibilities of what technology could
link |
00:45:20.000
bring. It's also possible that the brain is the epitome, is the ceiling. Like, just the ceiling.
link |
00:45:28.640
How is that possible? Meaning like this is the smartest possible thing that the universe could
link |
00:45:34.640
create. So that seems unlikely to me. Yeah, I mean, for some of these reasons we alluded to
link |
00:45:43.120
earlier in terms of designs we already have for computers that would be faster by many orders
link |
00:45:52.560
of magnitude than the human brain. Yeah, but it could be that the constraints, the cognitive
link |
00:45:58.480
constraints in themselves are what enable the intelligence. So the more powerful you make the
link |
00:46:04.640
computer, the less likely it is to become super intelligent. This is where I say dumb things
link |
00:46:10.560
to push back. Yeah, I'm not sure. I mean, so there are different dimensions of intelligence.
link |
00:46:17.920
A simple one is just speed. Like if you can solve the same challenge faster in some sense,
link |
00:46:23.680
you're smarter. So there I think we have very strong evidence for thinking that you could have
link |
00:46:30.480
a computer in this universe that would be much faster than the human brain and therefore have
link |
00:46:37.520
speed superintelligence, like be completely superior, maybe a million times faster.
link |
00:46:42.560
Then maybe there are other ways in which you could be smarter as well, maybe more qualitative
link |
00:46:47.680
ways, right? And there the concepts are a little bit less clear cut. So it's harder to make a very
link |
00:46:54.080
crisp, neat, firmly logical argument for why there could be qualitative superintelligence as
link |
00:47:01.760
opposed to just things that were faster. Although I still think it's very plausible.
link |
00:47:04.960
And for various reasons that are less than watertight arguments. But for example, if you look at
link |
00:47:10.880
animals and even within humans, there seems to be Einstein versus random person. It's not just
link |
00:47:19.200
that Einstein was a little bit faster. But how long would it take a normal person to invent
link |
00:47:24.640
general relativity? It's not 20% longer than it took Einstein or something like that. I don't
link |
00:47:30.640
know whether they would do it at all or it would take millions of years or some totally bizarre.
link |
00:47:36.800
But your intuition is that the compute size will get you there. Increasing the size of the computer
link |
00:47:43.280
and the speed of the computer might create some much more powerful levels of intelligence that
link |
00:47:49.680
would enable some of the things we've been talking about with the simulation, being able to simulate
link |
00:47:54.560
an ultra realistic environment, an ultra realistic perception of reality.
link |
00:48:01.200
Yeah. I mean, strictly speaking, it would not be necessary to have superintelligence in order to
link |
00:48:05.840
have, say, the technology to make these simulations, ancestor simulations or other kinds of simulations.
link |
00:48:14.000
As a matter of fact, I think if we are in a simulation, it would most likely be one built
link |
00:48:21.120
by a civilization that had superintelligence. It certainly would help. I mean, it could build
link |
00:48:28.080
more efficient, larger scale structures if you had superintelligence. I also think that if you had
link |
00:48:32.400
the technology to build these simulations, that's like a very advanced technology. It seems kind
link |
00:48:36.160
of easier to get the technology to superintelligence. So I'd expect by the time they could make these
link |
00:48:42.720
fully realistic simulations of human history with human brains in there, before they
link |
00:48:47.760
got to that stage, they would have figured out how to create machine superintelligence or maybe
link |
00:48:54.000
biological enhancements of their own brains if they were biological creatures to start with.
link |
00:48:58.960
So we talked about the three parts of the simulation argument. One, we destroy ourselves
link |
00:49:05.520
before we ever create the simulation. Two, we somehow, everybody somehow loses interest in
link |
00:49:11.440
creating simulation. Three, we're living in a simulation. So you've kind of, I don't know if
link |
00:49:19.040
your thinking has evolved on this point, but you kind of said that we know so little that these
link |
00:49:24.880
three cases might as well be equally probable. So probabilistically speaking, where do you stand
link |
00:49:31.120
on this? Yeah, I mean, I don't think equal necessarily would be the most supported probability
link |
00:49:40.240
assignment. So how would you, without assigning actual numbers, what's more or less likely in
link |
00:49:46.480
your view? Well, I mean, I've historically tended to punt on the question of like as between these
link |
00:49:54.320
three. So maybe another way to ask is: which kind of things would make each of these more or
link |
00:50:01.440
less likely? What kind of, yeah. I mean, certainly in general terms, if you think anything that say
link |
00:50:07.920
increases or reduces the probability of one of these would tend to slosh probability around
link |
00:50:16.320
onto the others. So if one becomes less probable, the others would have to, because it's going to
link |
00:50:20.400
add up to one. Yes. So if we consider the first hypothesis, the first alternative that there's
link |
00:50:27.040
this filter that makes it so that virtually no civilization reaches technological maturity.
link |
00:50:35.040
In particular, our own civilization, if that's true, then it's like very unlikely that we would
link |
00:50:42.160
reach technological maturity, because if almost no civilization at our stage does it, then
link |
00:50:47.280
it's unlikely that we do it. So I'm sorry, can you linger on that for a second? Well,
link |
00:50:51.360
so if it's the case that almost all civilizations
link |
00:50:57.440
at our current stage of technological development fail to reach maturity,
link |
00:51:01.120
that would give us very strong reason for thinking we will fail to reach technological maturity.
link |
00:51:07.520
Oh, and also sort of the flip side of that is the fact that we've reached it means that many
link |
00:51:12.240
other civilizations have reached this point. Yeah. So that means if we get closer and closer to
link |
00:51:15.760
actually reaching technological maturity, there's less and less distance left where we could
link |
00:51:22.480
go extinct before we are there. And therefore, the probability that we will reach it increases
link |
00:51:29.200
as we get closer. And that would make it less likely to be true that almost all civilizations
link |
00:51:34.880
at our current stage failed to get there. Like, we would have this one case, ourselves, that
link |
00:51:40.000
would be very close to getting there. That would be strong evidence that it's not so hard to
link |
00:51:44.480
get to technological maturity. So to the extent that we feel we are moving nearer to technological
link |
00:51:51.040
maturity, that would tend to reduce the probability of the first alternative and increase the probability
link |
00:51:57.760
of the other two. It doesn't need to be a monotonic change. Like if every once in a while, some new
link |
00:52:04.640
threat comes into view, some bad new thing you could do with some novel technology, for example,
link |
00:52:10.560
you know, that could change our probabilities in the other direction.
link |
00:52:14.880
But that technology, again, you have to think of it as a technology that has to be able to
link |
00:52:20.800
affect every civilization out there equally, in an even way.
link |
00:52:26.160
Yeah, pretty much. I mean, strictly speaking, it's not true. I mean, there could be two
link |
00:52:31.600
different existential risks, and every civilization succumbs to one or the other, but neither of them
link |
00:52:39.920
kills more than 50%. Yeah. But incidentally, in some of my work, I mean, on machine
link |
00:52:48.000
superintelligence, like some existential risks related to sort of superintelligent AI and how we
link |
00:52:54.080
must make sure, you know, to handle that wisely and carefully. It's not the right kind of existential
link |
00:53:03.040
catastrophe to make the first alternative true, though, like it might be bad for us.
link |
00:53:12.000
If the future lost a lot of value as a result of it being shaped by some process that optimized for
link |
00:53:18.400
some completely non human value. But even if we got killed by machine superintelligence, that
link |
00:53:26.320
machine superintelligence might still attain technological maturity.
link |
00:53:30.080
So I see. So you're not human exclusive. This could be any intelligent species
link |
00:53:35.360
that achieves it; it's all about the technological maturity. It's not that the humans have to
link |
00:53:42.080
attain it. Right.
link |
00:53:43.040
So, like, a superintelligence that replaced us, and that's just as well for the simulation
link |
00:53:47.040
argument. Yeah, I mean, it could interact with the second alternative. Like, if the thing that
link |
00:53:52.640
replaced us was either more or less likely than we would be to have an interest in creating
link |
00:53:58.320
ancestor simulations, you know, that could affect the probabilities. But yeah, to a first order,
link |
00:54:05.600
like if we all just die, then yeah, we won't produce any simulations because we are dead. But if we
link |
00:54:12.160
all die and get replaced by some other intelligent thing that then gets to technological maturity,
link |
00:54:17.120
the question remains, of course, whether that thing might then use some of its resources to do this
link |
00:54:22.720
stuff. So can you reason about this stuff, given how little we know about the universe?
link |
00:54:29.280
Is it reasonable to reason about these probabilities? So, like, how little...
link |
00:54:38.880
well, maybe you can disagree. But to me, it's not trivial to figure out how difficult it is to
link |
00:54:46.080
build a simulation. We kind of talked about it a little bit. We also don't know, like, as we
link |
00:54:52.960
try to start building it, like start creating virtual worlds and so on, how that changes the
link |
00:54:57.840
fabric of society. Like there's all these things along the way that can fundamentally change just
link |
00:55:03.440
so many aspects of our society about our existence that we don't know anything about. Like the kind
link |
00:55:09.600
of things we might discover when we understand the fundamental physics to a greater degree,
link |
00:55:19.120
like if we have a breakthrough and have a theory of everything, how that changes,
link |
00:55:23.600
how that changes deep space exploration and so on. So like, is it still possible to reason about
link |
00:55:30.640
probabilities given how little we know? Yes, I think though, there will be a large residual of
link |
00:55:37.920
uncertainty that we'll just have to acknowledge. And I think that's true for most of these big
link |
00:55:45.680
picture questions that we might wonder about. It's just we are small, short lived, small brained,
link |
00:55:54.400
cognitively very limited humans with little evidence and it's amazing we can figure out as
link |
00:56:01.680
much as we can really about the cosmos. But okay, so there's this cognitive trick that seems to happen
link |
00:56:09.680
when I look at the simulation argument, which for me, it seems like case one and two feel
link |
00:56:15.120
unlikely. I want to say feel unlikely as opposed to sort of like, it's not like I have too much
link |
00:56:22.000
scientific evidence to say that either one or two are not true. It just seems unlikely that every
link |
00:56:29.040
single civilization destroys itself. And it seems like feels unlikely that the civilizations lose
link |
00:56:36.480
interest. So naturally, without necessarily explicitly doing it, the simulation argument
link |
00:56:44.160
basically says it's very likely we're living in a simulation. Like to me, my mind naturally goes
link |
00:56:51.520
there. I think the mind goes there for a lot of people. Is that the incorrect place for it to go?
link |
00:56:57.600
Well, not necessarily. I think the second alternative which has to do with the motivations
link |
00:57:06.560
and interests of technologically mature civilizations. I think there is much we don't
link |
00:57:14.000
understand about that. Can you talk about that a little bit? What do you think? I mean, this
link |
00:57:19.280
question that pops up when you build an AGI system or a general intelligence:
link |
00:57:26.080
how does that change your motivations? Do you think it will fundamentally transform our motivations?
link |
00:57:31.360
Well, it doesn't seem that implausible that once you take this leap to technological maturity,
link |
00:57:39.680
I mean, I think it involves creating machine superintelligence, possibly, which would be sort of
link |
00:57:45.680
on the path for basically all civilizations, maybe before they are able to create large
link |
00:57:51.440
numbers of ancestor simulations. That possibly could be one of these things that quite radically
link |
00:57:58.400
changes the orientation of what a civilization is, in fact, optimizing for. There are other
link |
00:58:06.800
things as well. At the moment, we have not perfect control over our own being, our own mental states,
link |
00:58:20.000
our own experiences are not under our direct control. For example, if you want to experience
link |
00:58:29.600
a pleasure and happiness, you might have to do a whole host of things in the external world to
link |
00:58:37.280
try to get into the stage, into the mental state where you experience pleasure. Like when people
link |
00:58:43.040
get some pleasure from eating great food, well, they can't just turn that on. They have to kind
link |
00:58:47.520
of actually go to a nice restaurant and then they have to make money. So there's like all this kind
link |
00:58:52.240
of activity that maybe arises from the fact that we are trying to ultimately produce mental states,
link |
00:59:01.840
but the only way to do that is by a whole host of complicated activities in the external world.
link |
00:59:06.800
Now, at some level of technological development, I think we'll become autopotent in the sense of
link |
00:59:12.000
gaining direct ability to choose our own internal configuration and enough knowledge and insight
link |
00:59:19.440
to be able to actually do that in a meaningful way. So then it could turn out that there are a lot
link |
00:59:24.800
of instrumental goals that would drop out of the picture and be replaced by other instrumental
link |
00:59:31.040
goals because we could now serve some of these final goals in more direct ways. And who knows how
link |
00:59:37.680
all of that shakes out after civilizations reflect on that and converge on different
link |
00:59:45.920
attractors and so on and so forth. And there could be new instrumental considerations that come into
link |
00:59:54.960
view as well that we are just oblivious to that would maybe have a strong shaping effect on actions,
link |
01:00:03.440
like very strong reasons to do something or not to do something. And we just don't realize
link |
01:00:07.680
they are there because we are so dumb, bumbling through the universe. But if almost inevitably
link |
01:00:13.120
en route to attaining the ability to create many ancestor simulations, you do have this
link |
01:00:18.720
cognitive enhancement, or advice from superintelligence, or you yourself, then maybe there's like this
link |
01:00:24.160
additional set of considerations coming into view, and at that stage it's obvious that the thing
link |
01:00:28.560
that makes sense is to do X, whereas right now it seems you could do X, Y, or Z and different people
link |
01:00:33.520
will do different things and we are kind of random in that sense. Yeah, because at this time,
link |
01:00:41.040
with our limited technology, the impact of our decisions is minor. I mean, that's starting
link |
01:00:46.000
to change in some ways. Well, I'm not sure how it follows that the impact of our decisions is minor.
link |
01:00:54.240
Well, it's starting to change. I mean, I suppose 100 years ago it was minor. It's starting to...
link |
01:01:00.480
Well, it depends on how you view it. So what people did 100 years ago still have effects on
link |
01:01:07.440
the world today. Oh, I see, as a civilization in its togetherness. Yeah. So it might be that
link |
01:01:16.880
the greatest impact of individuals is not at technological maturity or very far down the road. It might
link |
01:01:23.040
be earlier on, when there are different tracks civilization could go down. I mean, maybe the
link |
01:01:28.480
population is smaller, things still haven't settled out. If you count indirect effects, then
link |
01:01:37.920
those could be bigger than the direct effects that people have later on. So part three of the
link |
01:01:44.160
argument says that, so that leads us to a place where eventually somebody creates a simulation.
link |
01:01:53.520
I think you had a conversation with Joe Rogan. I think there's some aspect here where you get
link |
01:01:57.680
stuck a little bit. How does that lead to we're likely living in a simulation? So this kind of
link |
01:02:08.080
probability argument, if somebody eventually creates a simulation, why does that mean that
link |
01:02:13.440
we're now in a simulation? What you get to if you accept alternative three first is there would be
link |
01:02:20.240
more simulated people with our kinds of experiences than non simulated ones. If you look at the world
link |
01:02:30.400
as a whole, by the end of time, as it were, and you just count it up, there would be more simulated
link |
01:02:37.680
ones than non simulated ones. Then there is an extra step to get from that. If you assume that,
link |
01:02:43.760
suppose for the sake of the argument that that's true, how do you get from that to the statement
link |
01:02:51.280
we are probably in a simulation? So here you're introducing an indexical statement, the statement
link |
01:03:01.840
that this person right now is in a simulation. There are all these other people that are in
link |
01:03:09.280
simulations and some that are not in the simulation. But what probability should you have that you
link |
01:03:15.680
yourself are one of the simulated ones in that setup? So I call it the bland principle of
link |
01:03:23.280
indifference, which is that in cases like this, when you have two sets of observers,
link |
01:03:30.720
one of which is much larger than the other, and you can't from any internal evidence you have,
link |
01:03:39.840
tell which set you belong to, you should assign a probability that's proportional to the size
link |
01:03:49.200
of these sets so that if there are 10 times more simulated people with your kinds of experiences,
link |
01:03:55.040
you would be 10 times more likely to be one of those. Is that as intuitive as it sounds?
link |
01:04:00.480
I mean, that seems kind of, if you don't have enough information, you should rationally just
link |
01:04:06.480
assign probability proportional to the size of the sets. It seems pretty plausible to me.
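To make that intuition concrete, here is a minimal sketch of the principle as just described, assuming only that you know the sizes of the two sets of observers and have no internal evidence distinguishing them; the counts are illustrative, not anything claimed in the conversation.

```python
# Bland principle of indifference, as described above: with no internal evidence
# telling the two groups apart, your credence in belonging to a group is just
# its share of the total. The observer counts below are made up for illustration.

def credence_simulated(num_simulated: int, num_non_simulated: int) -> float:
    """Credence that you are simulated, given only the two group sizes."""
    return num_simulated / (num_simulated + num_non_simulated)

# Ten times more simulated observers with your kind of experiences:
print(credence_simulated(num_simulated=10, num_non_simulated=1))  # ~0.909
```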
link |
01:04:15.600
Where are the holes in this? Is it at the very beginning, the assumption that everything stretches
link |
01:04:22.320
sort of, that you have infinite time, essentially? You don't need infinite time.
link |
01:04:26.720
You just need, well, however long it takes, I guess, for a universe
link |
01:04:32.960
to produce an intelligent civilization that then attains the technology to run some
link |
01:04:37.680
ancestor simulations. Got you. And once the first simulation is created, that stretch of time
link |
01:04:44.320
is just a little longer and then they'll all start creating simulations, kind of like the order
link |
01:04:49.040
matters. Well, I mean, it might be different. If you think of there being a lot of different
link |
01:04:54.320
planets and some subset of them have life and then some subset of those get intelligent life
link |
01:05:00.800
and some of those maybe eventually start creating simulations, they might get started at quite
link |
01:05:06.240
different times. Like maybe on some planet, it takes a billion years longer before you get
link |
01:05:11.840
like monkeys or before you get even bacteria than on another planet. So this might happen
link |
01:05:20.240
kind of at different cosmological epochs. Is there a connection here to the doomsday
link |
01:05:26.800
argument and that sampling there? Yeah, there is a connection in that they both
link |
01:05:33.360
involve an application of anthropic reasoning that is reasoning about these kind of indexical
link |
01:05:40.240
propositions. But the assumption you need in the case of the simulation argument
link |
01:05:45.520
is much weaker than the assumption you need to make the doomsday argument go through.
link |
01:05:53.520
What is the doomsday argument? And maybe you can speak to the anthropic reasoning in more
link |
01:05:58.160
general. Yeah, that's a big and interesting topic in its own right, anthropics. But the
link |
01:06:03.200
doomsday argument is this really first discovered by Brandon Carter, who was a theoretical physicist
link |
01:06:11.120
and then developed by philosopher John Leslie. I think it might have been discovered initially
link |
01:06:18.160
in the 70s or 80s. And Leslie wrote this book, I think in 96. And there are some other versions
link |
01:06:24.240
as well by Richard Gott, he's a physicist, but let's focus on the Carter Leslie version where
link |
01:06:32.320
it's an argument that we have systematically underestimated the probability that
link |
01:06:39.520
humanity will go extinct soon. Now, I should say most people probably
link |
01:06:46.960
think at the end of the day, there is something wrong with this doomsday argument that it doesn't
link |
01:06:50.960
really hold. It's like there's something wrong with it. But it's proved hard to say exactly what
link |
01:06:56.080
is wrong with it. And different people have different accounts. My own view is it seems
link |
01:07:02.560
inconclusive. But I can say what the argument is. Yeah, that would be good. Yeah, so maybe it's
link |
01:07:08.880
easy to explain via an analogy to sampling from urns. So imagine you
link |
01:07:21.280
have two urns in front of you, and they have balls in them that have numbers. The two urns look
link |
01:07:27.440
the same, but inside one, there are 10 balls, ball number one, two, three up to ball number 10.
link |
01:07:32.080
And then in the other urn, you have a million balls numbered one to a million. And now somebody
link |
01:07:41.600
puts one of these urns in front of you and asks you to guess what's the chance it's the 10 ball
link |
01:07:48.720
urn. And you say, well, 50, 50, I can't tell which urn it is. But then you're allowed to
link |
01:07:54.240
reach in and pick a ball at random from the urn. And suppose you find that it's ball number
link |
01:07:59.600
seven. So that's strong evidence for the 10 ball hypothesis. Like, it's a lot more likely that
link |
01:08:07.440
you would get such a low numbered ball, if there are only 10 balls in the urn, like it's in fact
link |
01:08:12.480
10%, then, right? Whereas if there are a million balls, it would be very unlikely you would get number
link |
01:08:18.000
seven. So you perform a Bayesian update. And if your prior was 50, 50, that it was the 10 ball
link |
01:08:26.800
urn, you become virtually certain after finding the random sample was seven that it only has 10
link |
01:08:31.920
balls in it. So in the case of the urns, this is uncontroversial, just elementary probability
link |
01:08:36.560
theory. The Doomsday argument says that you should reason in a similar way with respect to
link |
01:08:42.800
different hypotheses about how many balls there will be in the urn of humanity, as it were,
link |
01:08:49.280
for how many humans there will ever be by the time we go extinct. So to simplify, let's suppose we
link |
01:08:55.760
only consider two hypotheses, either maybe 200 billion humans in total, or 200 trillion humans
link |
01:09:03.520
in total. You could fill in more hypotheses, but it doesn't change the principle here. So it's
link |
01:09:09.360
easiest to see if we just consider these two. So you start with some prior based on ordinary,
link |
01:09:14.080
empirical ideas about threats to civilization and so forth. And maybe you say it's a 5% chance that
link |
01:09:20.880
we will go extinct by the time there will have been 200 billion only; you're kind of optimistic,
link |
01:09:26.400
let's say, thinking probably we'll make it through and colonize the universe. But then, according to this
link |
01:09:33.600
Doomsday argument, you should think of your own birth rank as a random sample. So your birth rank
link |
01:09:41.120
is your position in the sequence of all humans that have ever existed. And it turns out you're
link |
01:09:48.320
about human number 100 billion, you know, give or take. That's like roughly how many people
link |
01:09:53.920
have been born before you. That's fascinating, because I probably, we each have a number.
link |
01:09:59.520
We would each have a number in this. I mean, obviously, the exact number would depend on
link |
01:10:04.080
where you started counting, like which ancestors were human enough to count as human. But those
link |
01:10:09.840
are not really important. They're relatively few. So yeah, so you're roughly 100 billion. Now,
link |
01:10:16.320
if they're only going to be 200 billion in total, that's a perfectly unremarkable number. You're
link |
01:10:21.120
somewhere in the middle, right? Run of the mill human, completely unsurprising. Now, if they're
link |
01:10:27.680
going to be 200 trillion, you would be remarkably early. Like, what are the chances out of these
link |
01:10:34.640
200 trillion humans that you should be human number 100 billion? It seems it would have
link |
01:10:41.520
a much lower conditional probability. And so analogously to how in the urn case, you thought
link |
01:10:48.880
after finding this low number random sample, you updated in favor of the urn having few balls.
link |
01:10:54.480
Similarly, in this case, you should update in favor of the human species having a lower total
link |
01:11:00.720
number of members, that is, doomed soon. You said doomed soon. That's the hypothesis in this case
link |
01:11:08.960
that it will end 100 billion. I just like that term for the hypothesis. Yeah.
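As a rough worked version of both updates just described, the urn draw and the birth-rank case, here is a sketch using the numbers mentioned in the conversation; the uniform-sampling likelihoods are exactly the self-sampling assumption discussed next, and the specific prior and counts are only illustrative.

```python
# Bayesian update for a "small total" vs. "large total" hypothesis, assuming the
# observed draw (a ball number, or a birth rank) is a uniform random sample from
# the true total. This mirrors the urn analogy and the doomsday-style update above.

def posterior_small(prior_small: float, n_small: int, n_large: int, draw: int) -> float:
    """P(small hypothesis | the sampled index equals `draw`)."""
    like_small = 1.0 / n_small if draw <= n_small else 0.0
    like_large = 1.0 / n_large if draw <= n_large else 0.0
    evidence = prior_small * like_small + (1.0 - prior_small) * like_large
    return prior_small * like_small / evidence

# Urn case: 50/50 prior, 10 vs. 1,000,000 balls, ball number 7 drawn.
print(posterior_small(0.5, 10, 1_000_000, 7))  # ~0.99999

# Doomsday-style case: 5% prior on "200 billion humans total" vs. "200 trillion",
# with a birth rank of roughly 100 billion.
print(posterior_small(0.05, 200 * 10**9, 200 * 10**12, 100 * 10**9))  # ~0.98
```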
link |
01:11:14.080
So what it kind of crucially relies on, the doomsday argument, is the idea that you should reason
link |
01:11:21.520
as if you were a random sample from the set of all humans that will ever have existed.
link |
01:11:27.200
If you have that assumption, then I think the rest kind of follows. The question then is,
link |
01:11:31.440
why should you make that assumption? In fact, you know you're 100 billion, so where do you get this
link |
01:11:37.280
prior? And then there is like a literature on that with different ways of supporting that assumption.
link |
01:11:44.960
That's just one example of anthropic reasoning, right? That seems to be kind of convenient when you
link |
01:11:50.000
think about humanity, when you think about sort of even like existential threats and so on,
link |
01:11:56.880
as it seems quite natural that you should assume that you're just an average case.
link |
01:12:02.480
Yeah, that you're kind of a typical random sample. Now in the case of the doomsday argument,
link |
01:12:09.520
it seems to lead to what intuitively we think is the wrong conclusion. Or at least many people
link |
01:12:14.320
have this reaction that there's got to be something fishy about this argument. Because from very,
link |
01:12:19.680
very weak premises, it gets this very striking implication that we have almost no chance of
link |
01:12:27.040
reaching a size of 200 trillion humans in the future. And how could we possibly get to that conclusion just by reflecting
link |
01:12:33.920
on when we were born? It seems you would need sophisticated arguments about the impossibility
link |
01:12:38.640
of space colonization, blah, blah. So one might be tempted to reject this key assumption. I call
link |
01:12:43.840
it the self sampling assumption. The idea that you should reason as if you're a random sample from
link |
01:12:47.920
all observers, or from all observers in some reference class. However, it turns out that in other domains,
link |
01:12:56.480
it looks like we need something like this self sampling assumption to make sense of
link |
01:13:02.080
bona fide scientific inferences in contemporary cosmology, for example, we have these multiverse
link |
01:13:07.760
theories. And according to a lot of those, all possible human observations are made. I mean,
link |
01:13:15.040
if you have a sufficiently large universe, you will have a lot of people observing all kinds
link |
01:13:19.120
of different things. So if you have two competing theories, say about the value of some constant,
link |
01:13:29.040
it could be true, according to both of these theories, that there will be some observers
link |
01:13:34.400
observing the value that corresponds to the other theory, because there will be some observers that
link |
01:13:42.080
have hallucinations. So there's a local fluctuation or a statistically anomalous measurement,
link |
01:13:47.360
these things will happen. And if enough observers make enough different observations,
link |
01:13:52.160
there will be some that sort of by chance make these different ones. And so what we would want to say
link |
01:13:57.360
is, well, many more observers, a larger proportion of the observers will observe as it were the true
link |
01:14:06.160
value. And a few will observe the wrong value. If we think of ourselves as a random sample,
link |
01:14:12.560
we should expect with high probability to observe the true value, and that will then allow us
link |
01:14:18.160
to conclude that the evidence we actually have is evidence for the theories we think are supported.
link |
01:14:24.480
So it kind of is, then, a way of making sense of these inferences that clearly seem correct,
link |
01:14:31.840
that we can make various observations and infer what the temperature of the cosmic background is
link |
01:14:38.960
and the fine structure constant and all of this. But it seems that without rolling in some assumption
link |
01:14:46.640
similar to the self sampling assumption, this inference doesn't go through. And there are
link |
01:14:52.000
other examples. So there are these scientific contexts where it looks like this kind of
link |
01:14:55.680
anthropic reasoning is needed and makes perfect sense. And yet, in the case of the doomsday argument,
link |
01:15:00.880
it has this weird consequence and people might think there's something wrong with it there.
link |
01:15:04.320
So there's then this project that would consist in trying to figure out what are the legitimate ways
link |
01:15:14.480
of reasoning about these indexical facts when observer selection effects are in play. In other
link |
01:15:20.560
words, developing a theory of anthropics. And there are different views of looking at that.
link |
01:15:25.760
And it's a difficult methodological area. But to tie it back to the simulation argument,
link |
01:15:33.200
the key assumption there, this bland principle of indifference, is much weaker than the self
link |
01:15:40.560
sampling assumption. So if you think about it, in the case of the doomsday argument, it says you
link |
01:15:47.760
should reason as if you are a random sample from all humans that will ever have lived, even though in
link |
01:15:51.760
fact you know that you are about the 100 billionth human and you're alive in the year 2020,
link |
01:15:59.520
whereas in the case of the simulation argument, it says that, well, if you actually have no way
link |
01:16:04.400
of telling which one you are, then you should assign this kind of uniform probability.
link |
01:16:11.040
Yeah, your role as the observer in the simulation argument is different, it seems like.
link |
01:16:15.840
Who's the observer? I keep assuming it's the individual consciousness.
link |
01:16:20.960
Well, a lot of observers in the simulation, in the context of the simulation argument,
link |
01:16:25.360
the relevant observers would be A, the people in original histories and B, the people in simulations.
link |
01:16:33.200
So this would be the class of observers that we need. I mean, there are also maybe the simulators,
link |
01:16:37.280
but we can set those aside for this. So the question is, given that class of observers,
link |
01:16:43.920
a small set of original history observers and the large class of simulated observers,
link |
01:16:48.400
which one should you think is you? Where are you amongst this set of observers?
link |
01:16:53.440
I'm maybe having a little bit of trouble wrapping my head around the intricacies of
link |
01:16:59.600
what it means to be an observer in the different instantiations of the anthropic reasoning cases
link |
01:17:08.240
that we mentioned. It's like the observer. No, I mean, it may be an easier way of putting it,
link |
01:17:14.640
it's just like, are you simulated or are you not simulated, given this assumption that these
link |
01:17:19.680
two groups of people exist? Yeah, in the simulation case, it seems pretty straightforward.
link |
01:17:24.480
Yeah, so the key point is the methodological assumption you need to make to get the simulation
link |
01:17:31.200
argument to where it wants to go is much weaker and less problematic than the methodological
link |
01:17:38.400
assumption you need to make to get the doomsday argument to its conclusion. Maybe the doomsday
link |
01:17:43.280
argument is sound or unsound, but you need to make a much stronger and more controversial assumption
link |
01:17:49.920
to make it go through. In the case of the simulation argument, I guess one
link |
01:17:56.080
way to maybe pump intuition in support of this bland principle of indifference,
link |
01:18:00.720
is to consider a sequence of different cases where the fraction of people who are simulated
link |
01:18:07.600
as opposed to non-simulated, approaches one. In the limiting case where everybody is simulated,
link |
01:18:18.560
obviously you can deduce with certainty that you are simulated. If everybody
link |
01:18:26.640
with your experiences is simulated and you know you've got to be one of those,
link |
01:18:30.720
you don't need a probability at all, you just kind of logically
link |
01:18:33.520
conclude it, right? So then as we move from a case where say 90% of everybody is simulated,
link |
01:18:44.960
then 99.9%, it seems plausible that the probability assigned should sort of approach
link |
01:18:52.960
one, certainty, as the fraction approaches the case where everybody is in the simulation.
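A small numerical sketch of that limiting-case intuition, with arbitrary observer counts: as the non-simulated minority shrinks, the indifference-principle credence climbs smoothly toward certainty.

```python
# Credence of being simulated as the fraction of simulated observers grows.
# Total and non-simulated counts are arbitrary illustrative numbers.

total_observers = 1_000_000
for num_non_simulated in (500_000, 100_000, 1_000, 10, 1, 0):
    num_simulated = total_observers - num_non_simulated
    credence = num_simulated / total_observers
    print(f"{num_non_simulated:>7} non-simulated -> credence simulated = {credence:.6f}")
```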
link |
01:19:00.480
And so you wouldn't expect that to be discrete. Well, if there's one non simulated person,
link |
01:19:06.480
then it's 50-50, but if we remove that one, then it's 100%. There are other
link |
01:19:13.200
arguments as well one can use to support this bland principle of indifference, but
link |
01:19:18.000
that might be nice too. But in general, when you start from time equals zero and go into the future,
link |
01:19:24.640
the fraction of simulated, if it's possible to create simulated worlds,
link |
01:19:28.960
the fraction of simulated worlds will go to one. Well, I mean, is that an obvious kind of thing?
link |
01:19:35.120
Well, it won't probably go all the way to one. In reality, there would be some
link |
01:19:40.240
ratio, although maybe a technologically mature civilization could run a lot of
link |
01:19:46.000
simulations using a small portion of its resources. It probably wouldn't be able to
link |
01:19:52.240
run infinitely many. I mean, if we take say the observed, the physics in the observed universe,
link |
01:19:58.560
if we assume that that's also the physics at the level of the simulators, there would be limits
link |
01:20:04.880
to the amount of information processing that any one civilization could perform in its future
link |
01:20:11.520
trajectory. Right. Well, first of all, there's limited amount of matter you can get your hands
link |
01:20:18.560
on, because with a positive cosmological constant, the expansion of the universe is accelerating,
link |
01:20:24.320
there's like a finite sphere of stuff, even if you travel at the speed of light, that you
link |
01:20:28.000
could ever reach, so you have a finite amount of stuff. And then if you think there is like a lower limit
link |
01:20:34.320
to the amount of loss you get when you perform an erasure of a computation, or if you think,
link |
01:20:40.640
for example, just that matter gradually decays over cosmological time scales, maybe protons decay,
link |
01:20:46.960
other things, you radiate out gravitational waves, like there's all kinds of seemingly
link |
01:20:52.640
unavoidable losses that occur. So eventually, we'll have something like a heat death of the
link |
01:21:01.600
universe or a cold death or whatever. So it's finite. But of course, we don't know which; if there's
link |
01:21:07.920
many ancestor simulations, we don't know which level we are at. So there could be,
link |
01:21:15.600
couldn't there be like an arbitrary number of simulations that spawned ours? And those had
link |
01:21:20.960
more resources in terms of the physical universe to work with? Sorry, what do you mean, that there could
link |
01:21:27.680
be? Okay, so if simulations spawn other simulations, it seems like each new spawn has fewer resources
link |
01:21:41.840
to work with. But we don't know at which step along the way we are at,
link |
01:21:49.920
any one observer doesn't know whether we're in level 42, or 100, or one. Or does that not
link |
01:21:58.320
matter for the resources? I mean, it's true that there would be uncertainty there, as
link |
01:22:05.760
you could have stacked simulations. Yes. And there could then be uncertainty as to which level we
link |
01:22:12.560
are at. As you remarked also, all the computations performed in a simulation within a simulation
link |
01:22:24.640
also have to be expended at the level of the underlying simulation. So the computer in basement reality
link |
01:22:30.640
where all these simulations within simulations within simulations are taking place, like that
link |
01:22:34.240
computer ultimately, its CPU or whatever it is, that has to power this whole tower, right? So
link |
01:22:40.000
if there is a finite compute power in basement reality, that would impose a limit to how tall
link |
01:22:46.320
this tower can be. And if each level kind of imposes a large extra overhead, you might think
link |
01:22:53.520
maybe the tower would not be very tall, that most people would be low down in the tower.
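As a toy illustration of that point, assuming a fixed compute budget in basement reality and a constant overhead factor per level of nesting (both numbers invented here, not estimates from the conversation), the compute available shrinks geometrically with depth, so the tower stays short and most simulated observers sit near the bottom.

```python
# Toy model: compute available at each level of a tower of nested simulations,
# given a finite budget in basement reality and a fixed per-level overhead.
# Both constants are arbitrary illustrative choices.

BASEMENT_COMPUTE = 1e40    # total compute in basement reality (arbitrary units)
OVERHEAD_PER_LEVEL = 1e6   # parent-level compute needed per unit simulated below

level, compute = 0, BASEMENT_COMPUTE
while compute >= 1.0:
    print(f"level {level}: compute available ~ {compute:.1e}")
    compute /= OVERHEAD_PER_LEVEL  # each deeper level pays the overhead again
    level += 1
```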
link |
01:23:00.560
I love the term basement reality. Let me ask about one of the popularizers. You said there's been many
link |
01:23:07.520
through the years, but when you look at sort of the last few years of the simulation hypothesis,
link |
01:23:13.280
just like you said, it comes up every once in a while, some new community discovers it and so on.
link |
01:23:17.680
But I would say one of the biggest popularizers of this idea is Elon Musk. Do you have any kind
link |
01:23:23.280
of intuition about what Elon thinks about when he thinks about simulation? Why is this of such
link |
01:23:28.640
interest? Is it all the things we've talked about, or is there some special kind of intuition about
link |
01:23:33.920
simulation that he has? I mean, you might have a better, I think, I mean, why it's of interest,
link |
01:23:39.120
I think it seems pretty obvious why, to the extent that one thinks the argument is credible,
link |
01:23:44.960
why it would be of interest. If it's correct, it tells us something very important about the world,
link |
01:23:49.840
in one way or the other, whichever of the three alternatives of the simulation argument is true; that seems arguably
link |
01:23:54.960
one of the most fundamental discoveries, right? Now, interestingly, in the case of someone like
link |
01:24:00.080
Elon, so there's like the standard arguments for why you might want to take the simulation hypothesis
link |
01:24:04.640
seriously, the simulation argument, right? In the case that if you are actually Elon Musk, let
link |
01:24:09.360
us say, there's a kind of an additional reason in that what are the chances you would be Elon Musk?
link |
01:24:16.720
Like, it seems like maybe there would be more interest in simulating the lives of very unusual
link |
01:24:24.240
and remarkable people. So, if you consider not just simulations where all of human history
link |
01:24:31.600
or the whole of human civilization are simulated, but also other kinds of simulations which only
link |
01:24:37.200
include some subset of people. Like, in those simulations that only include a subset,
link |
01:24:43.600
it might be more likely that they would include subsets of people with unusually interesting
link |
01:24:48.160
or consequential lives. If you're Elon Musk, you got to wonder, right? It's more likely that
link |
01:24:52.400
if you are Donald Trump or if you are Bill Gates or you're like some particularly
link |
01:25:00.320
like distinctive character, you might think that that adds... I mean, if you just put yourself
link |
01:25:06.160
into those shoes, right? It's got to be like an extra reason to think so. That's kind of...
link |
01:25:11.200
So interesting. So, on a scale of, like, a farmer in Peru to Elon Musk, the more you get towards
link |
01:25:18.320
the Elon Musk, the higher the probability... You'd imagine there would be some extra boost from that.
link |
01:25:25.040
There's an extra boost. So, he also asked the question of what he would ask an AGI saying,
link |
01:25:30.800
the question being what's outside the simulation. Do you think about the answer to this question
link |
01:25:37.600
if we are living in a simulation? What is outside the simulation? So, the programmer of the simulation?
link |
01:25:44.320
Yeah. I mean, I think it connects to the question of what's inside the simulation in that
link |
01:25:50.320
if you had views about the creators of the simulation, it might help you
link |
01:25:56.240
make predictions about what kind of simulation it is, what might happen, what happens after
link |
01:26:02.720
the simulation if there is some after, but also like the kind of setup. So, these two questions
link |
01:26:07.680
would be quite closely intertwined. But do you think it would be very surprising to... Is the
link |
01:26:16.000
stuff inside the simulation, is it possible for it to be fundamentally different than the stuff
link |
01:26:20.240
outside? Yeah. Another way to put it, can the creatures inside the simulation be smart enough
link |
01:26:28.880
to even understand or have the cognitive capabilities or any kind of information processing
link |
01:26:33.840
capabilities enough to understand the mechanism that created them? They might understand some
link |
01:26:41.920
aspects of it. I mean, there are levels of explanation, like degrees to which you can understand. So,
link |
01:26:51.120
does your dog understand what it is to be human? Well, it's got some idea, like humans are these
link |
01:26:56.160
physical objects that move around and do things. And a normal human would have a deeper understanding
link |
01:27:03.520
of what it is to be human. And maybe some very experienced psychologists or great novelists
link |
01:27:11.040
might understand a little bit more about what it is to be human. And maybe superintelligence
link |
01:27:15.840
could see right through your soul. So, similarly, I do think that we are quite limited in our
link |
01:27:25.360
ability to understand all of the relevant aspects of the larger context that we exist in.
link |
01:27:30.960
But there might be hope for some. I think we understand some aspects of it. But
link |
01:27:36.880
how much good is that? If there's one key aspect that changes the significance of all
link |
01:27:43.360
the other aspects. So, we understand maybe seven out of ten key insights that you need.
link |
01:27:51.280
But the answer actually varies completely, depending on what insights number eight, nine, and
link |
01:27:57.840
ten are. It's like, whether you want to... Suppose that the big task were to guess whether
link |
01:28:06.640
a certain number was odd or even, like a ten digit number. And if it's even, the best thing
link |
01:28:14.160
for you to do in life is to go north. And if it's odd, the best thing is for you to go south.
link |
01:28:20.880
Now, we are in a situation where maybe through our science and philosophy, we figured out what
link |
01:28:25.120
the first seven digits are. So, we have a lot of information, right? Most of it we figured out.
link |
01:28:30.720
But we are clueless about what the last three digits are. So, we are still completely clueless
link |
01:28:36.400
about whether the number is odd or even, and therefore whether we should go north or go south.
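A tiny illustration of the analogy, with a made-up ten-digit number: knowing the first seven digits pins down most of the number but says nothing about its parity, which is all that matters for the north-or-south decision.

```python
# Knowing seven of ten digits leaves the odd/even question completely open,
# because parity depends only on the final digit. The digits are arbitrary.

import random

known_prefix = "1234567"                          # the seven digits we "know"
unknown_suffix = f"{random.randrange(1000):03d}"  # the three digits we don't
number = int(known_prefix + unknown_suffix)

print(number, "-> go north" if number % 2 == 0 else "-> go south")
```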
link |
01:28:41.040
I feel that's an analogy, but I feel we're somewhat in that predicament. We know a lot about the
link |
01:28:46.960
universe. We've come maybe more than half of the way there to kind of fully understanding it,
link |
01:28:52.400
but the parts we are missing are possibly ones that could completely change the overall
link |
01:28:58.480
upshot of the thing, including changing our overall view about what the scheme of
link |
01:29:04.320
priorities should be or which strategic direction would make sense to pursue.
link |
01:29:07.680
Yeah, I think your analogy of us being the dog, trying to understand human beings is an entertaining
link |
01:29:15.200
one and probably correct. As the understanding moves from the dog's viewpoint to the human
link |
01:29:22.800
psychologist's viewpoint, the steps along the way will involve completely transformative ideas
link |
01:29:28.640
of what it means to be human. So, the dog has a very shallow understanding. It's interesting to
link |
01:29:34.880
think that to analogize that a dog's understanding of a human being is the same as our current
link |
01:29:41.440
understanding of the fundamental laws of physics in the universe. Oh, man. Okay, we spent an hour
link |
01:29:49.600
or 40 minutes talking about the simulation. I like it. Let's talk about superintelligence,
link |
01:29:54.640
at least for a little bit. And let's start at the basics. What to you is intelligence?
link |
01:30:01.440
Yeah, I tend not to get too stuck with the definitional question. I mean, the common sense
link |
01:30:08.080
understanding, like the ability to solve complex problems, to learn from experience, to plan,
link |
01:30:13.280
to reason, some combination of things like that. Is consciousness mixed up into that or no? Is
link |
01:30:21.200
consciousness mixed up into that? Well, I don't think... I think something could be fairly intelligent,
link |
01:30:26.000
at least without being conscious, probably. So, then what is superintelligence? Yeah,
link |
01:30:33.520
that would be like something that was much more of that, had much more general cognitive capacity
link |
01:30:40.080
than we humans have. So, if we talk about general superintelligence, it would be a much faster learner,
link |
01:30:47.920
be able to reason much better, make plans that are more effective at achieving its goals,
link |
01:30:52.960
say in a wide range of complex, challenging environments. In terms of, as we turn our eye
link |
01:30:59.200
to the idea of existential threats from superintelligence, do you think superintelligence
link |
01:31:05.520
has to exist in the physical world or can it be digital only? We think of our general intelligence
link |
01:31:13.520
as us humans, as an intelligence that's associated with the body that's able to interact with the
link |
01:31:19.680
world, that's able to affect the world directly, physically. I mean, digital only is perfectly
link |
01:31:25.440
fine, I think. I mean, it's physical in the sense that obviously the computers and the memories are
link |
01:31:31.440
physical. But is it capable of affecting the world, sort of? It could be very strong, even if it has a
link |
01:31:36.800
limited set of actuators. If it can type text on the screen or something like that, that would be,
link |
01:31:44.000
I think, ample. So, in terms of the concerns of existential threat of AI, how can an AI system
link |
01:31:52.480
that's in the digital world have existential risk? What are the attack vectors for a digital system?
link |
01:32:01.600
Well, I mean, I guess maybe to take one step back. I should emphasize that I also think there's this
link |
01:32:08.080
huge positive potential from machine intelligence, including superintelligence. I want to stress that
link |
01:32:15.760
because some of my writing has focused on what can go wrong. When I wrote the book,
link |
01:32:22.000
Superintelligence, at that point, I felt that there was a kind of neglect of what would happen
link |
01:32:30.560
if AI succeeds. And in particular, a need to get a more granular understanding of where the pitfalls
link |
01:32:36.000
are so we can avoid them. I think that since the book came out in 2014, there has been a much
link |
01:32:44.480
wider recognition of that. And a number of research groups are now actually working on
link |
01:32:48.880
developing, say, AI alignment techniques and so on and so forth. So, yeah, I think now it's
link |
01:32:56.160
important to make sure we bring back onto the table the upside as well.
link |
01:33:02.400
And there's a little bit of a neglect now on the upside, which is, I mean, if you look at,
link |
01:33:08.000
talking to a friend, if you look at the amount of information that is available,
link |
01:33:11.520
or people talking, or people being excited about the positive possibilities of general
link |
01:33:16.160
intelligence, it's far outnumbered by the negative possibilities in terms of our
link |
01:33:24.000
public discourse. Possibly, yeah, it's hard to measure. Can you look at that for a
link |
01:33:30.480
little bit? What are some, to you, possible big positive impacts of general intelligence,
link |
01:33:37.360
superintelligence? Well, I mean, super, because I tend to
link |
01:33:40.320
also want to distinguish these two different contexts of thinking about AI and AI impacts,
link |
01:33:47.520
the kind of near term and long term, if you want, both of which I think are legitimate things to
link |
01:33:53.040
think about. And people should, you know, discuss both of them, but they are different and they
link |
01:33:59.360
often get mixed up. And then you get confusion. I think you get simultaneously,
link |
01:34:06.240
like maybe an overhyping of the near term and an underhyping of the long term. And so I think as
link |
01:34:10.960
long as we keep them apart, we can have like two good conversations, or we can mix them
link |
01:34:16.960
together and have one bad conversation. Can you clarify just the two things we're talking about,
link |
01:34:21.440
the near term and the long term? Yeah, and what are the distinctions? Well, it's a blurry
link |
01:34:26.640
distinction. But say the things I wrote about in this book, superintelligence, long term,
link |
01:34:32.720
things people are worrying about today with, I don't know, algorithmic discrimination or even
link |
01:34:40.560
things like self-driving cars and drones and stuff, more near term. And then, of course,
link |
01:34:48.400
you could imagine some medium term where they kind of overlap and one evolves into the other.
link |
01:34:54.880
But anyway, I think both, yeah, the issues look kind of somewhat different depending on
link |
01:35:00.080
which of these contexts. So I think it would be nice if we can talk about the long term
link |
01:35:06.560
and think about a positive impact or a better world because of the existence of the long
link |
01:35:16.080
term superintelligence. Do you have views of such a world? Yeah, I mean, I guess it's a little
link |
01:35:21.200
hard to articulate because it seems obvious that the world has a lot of problems as it currently
link |
01:35:27.040
stands. And it's hard to think of any one of those which it wouldn't be useful to have a
link |
01:35:34.880
like a friendly aligned superintelligence working on. So from health to the economic system to be
link |
01:35:45.120
able to sort of improve the investment and trade and foreign policy decisions, all that kind of
link |
01:35:50.800
stuff. All that kind of stuff and a lot more. I mean, what's the killer app? Well, I don't think
link |
01:35:58.800
there is one. I think AI, especially artificial general intelligence is really the ultimate
link |
01:36:05.520
general purpose technology. So it's not that there is this one problem, this one area where it will
link |
01:36:10.720
have a big impact. But if and when it succeeds, it will really apply across the board in all fields
link |
01:36:18.320
where human creativity and intelligence and problem solving is useful, which is pretty much
link |
01:36:23.280
all fields, right? The thing that it would do is give us a lot more control over nature.
link |
01:36:30.640
It wouldn't automatically solve the problems that arise from conflict between humans,
link |
01:36:36.560
fundamentally political problems. Some subset of those might go away if you just had more
link |
01:36:40.160
resources and cooler tech, but some subset would require coordination that is not automatically
link |
01:36:49.600
achieved just by having more technical capability. But anything that's not of that sort, I think
link |
01:36:55.040
you just get like an enormous boost with this kind of cognitive technology once it goes all
link |
01:37:02.320
the way. Now, again, that doesn't mean I'm like thinking, oh, people don't recognize what's possible
link |
01:37:10.400
with current technology. And like sometimes things get overhyped. But I mean, those are
link |
01:37:14.960
perfectly consistent views to hold: the ultimate potential being enormous, and then it's a very
link |
01:37:20.960
different question of how far are we from that? Or what can we do with near term technology?
link |
01:37:25.200
Yeah. So what's your intuition about the idea of intelligence explosion? So there's this,
link |
01:37:29.680
you know, when you start to think about that leap from the near term to the long term,
link |
01:37:36.080
the natural inclination, like for me, sort of building machine learning systems today,
link |
01:37:40.960
it seems like it's a lot of work to get the general intelligence. But there's some intuition
link |
01:37:45.840
of exponential growth of exponential improvement of intelligence explosion. Can you maybe
link |
01:37:51.200
try to elucidate, to try to talk about what's your intuition about the possibility of
link |
01:38:01.120
an intelligence explosion, that it won't be this gradual slow process, that there might be a phase shift?
link |
01:38:08.720
Yeah, I think it's, we don't know how explosive it will be. I think for what it's worth,
link |
01:38:15.200
it seems fairly likely to me that at some point there will be some intelligence
link |
01:38:20.560
explosion, like some period of time where progress in AI becomes extremely rapid.
link |
01:38:26.800
Roughly in the area where you might say it's kind of humanish equivalent in
link |
01:38:34.320
core cognitive faculties, though the concept of human equivalence starts to break down when
link |
01:38:41.040
you look too closely at it. And just how explosive does something have to be for it to
link |
01:38:47.280
be called an intelligence explosion? Like does it have to be like overnight literally,
link |
01:38:50.800
or a few years? But overall, I guess if you plotted the opinions of different people in
link |
01:38:59.440
the world, I guess I would put somewhat more probability towards the intelligence explosion
link |
01:39:04.640
scenario than probably the average AI researcher, I guess.
link |
01:39:09.360
So, and then the other part of the intelligence explosion, or just forget explosion, just progress,
link |
01:39:15.840
is once you achieve that gray area of human level intelligence, is it obvious to you that
link |
01:39:22.000
we should be able to proceed beyond it to get to super intelligence?
link |
01:39:26.960
Yeah, that seems, I mean, as much as any of these things can be obvious, given we've never had one,
link |
01:39:34.800
people have different views, smart people have different views, it's like some
link |
01:39:38.080
degree of uncertainty that always remains for any big futuristic philosophical grand question,
link |
01:39:45.920
just because we realize humans are fallible, especially about these things.
link |
01:39:49.360
But it does seem as far as I'm judging things based on my own impressions that it seems very
link |
01:39:55.680
unlikely that there would be a ceiling at or near human cognitive capacity.
link |
01:40:03.680
But that's such a, I don't know, that's such a special moment. It's both terrifying and
link |
01:40:10.240
exciting to create a system that's beyond our intelligence. So maybe you can step back and
link |
01:40:17.680
say, like, how does that possibility make you feel that we can create something,
link |
01:40:24.400
it feels like there's a line beyond which it steps, it'll be able to outsmart you.
link |
01:40:30.960
And therefore, it feels like a step where we lose control.
link |
01:40:35.360
Well, I don't think the latter follows, that is, you could imagine, and in fact, this is what
link |
01:40:43.440
a number of people are working towards, making sure that we could ultimately
link |
01:40:48.080
project higher levels of problem solving ability while still making sure that they are aligned,
link |
01:40:54.640
like they are in the service of human values. I mean, so losing control, I think,
link |
01:41:02.480
is not a given that that would happen. You asked how it makes me feel. I mean, to some extent,
link |
01:41:08.640
I've lived with this for so long, since as long as I can remember being an adult or even a teenager,
link |
01:41:16.320
it seemed to me obvious that at some point, AI will succeed. And so I actually misspoke,
link |
01:41:22.160
I didn't mean control. I meant, because the control problem is an interesting thing in itself.
link |
01:41:27.840
And I think the hope is, at least we should be able to maintain control over systems that are
link |
01:41:33.840
smarter than us. But we do lose our specialness. We sort of will lose our place as the smartest,
link |
01:41:45.680
coolest thing on earth. And there's an ego involved with that, that humans are very good at
link |
01:41:54.240
dealing with. I mean, I value my intelligence as a human being. It seems like a big transformative
link |
01:42:01.760
step to realize there's something out there that's more intelligent. I mean, you don't see that
link |
01:42:08.240
as such a fundamentally... Well, yeah, I think, yes, a lot. I think it would be small. I mean,
link |
01:42:13.280
I think there are already a lot of things out there that are... I mean, certainly if you think
link |
01:42:18.000
the universe is big, there's going to be other civilizations that already have super intelligences
link |
01:42:22.720
or that just naturally have brains the size of beach balls and are completely leaving us in the
link |
01:42:29.520
dust. And we haven't come face to face with this. We haven't come face to face. But I mean, that's
link |
01:42:35.200
an open question. What would happen in a kind of post human world? Like how much day to day would
link |
01:42:45.760
these super intelligences be involved in the lives of ordinary... I mean, you could imagine
link |
01:42:51.840
some scenario where it would be more like a background thing that would help protect against
link |
01:42:55.680
some things. But you wouldn't... They wouldn't be this intrusive kind of making you feel bad by
link |
01:43:02.240
like making clever jokes at your expense. But there's like all sorts of things that maybe in
link |
01:43:06.480
the human context would feel awkward about that. You wouldn't want to be the dumbest kid in your
link |
01:43:12.000
class that everybody picks on. Like a lot of those things maybe you need to abstract away from
link |
01:43:17.840
if you're thinking about this context where we have infrastructure that is in some sense
link |
01:43:22.800
beyond any or all humans. I mean, it's a little bit like, say, the scientific community
link |
01:43:29.280
as a whole. If you think of that as a mind, it's a little bit of metaphor. But I mean,
link |
01:43:33.680
obviously it's got to be like way more capacious than any individual. So in some sense, there is this
link |
01:43:41.600
mind like thing already out there. That's just vastly more intelligent than any individual is.
link |
01:43:49.440
And we think, okay, that's... You just accept that as a fact. That's the basic fabric of our
link |
01:43:56.720
existence. There's a super intelligent thing. Yeah, you get used to a lot of... I mean,
link |
01:44:00.640
there's already Google and Twitter and Facebook, these recommender systems that are the basic
link |
01:44:07.600
fabric of our... I could see them becoming... I mean, do you think of the collective intelligence
link |
01:44:14.800
of these systems as already perhaps reaching superintelligence level? Well, I mean, so here
link |
01:44:20.880
it comes to the concept of intelligence and the scale and what human level means.
link |
01:44:29.200
The kind of vagueness and indeterminacy of those concepts starts to
link |
01:44:36.560
dominate how you would answer that question. So, say the Google search engine has a very high
link |
01:44:42.240
capacity of a certain kind, like retrieving... Remembering and retrieving information,
link |
01:44:47.600
particularly like text or images that are... You have a kind of string, a word string key.
link |
01:44:58.800
Obviously, superhuman at that. But a vast set of other things it can't even do at all,
link |
01:45:06.000
not just not do well. So you have these current AI systems that are superhuman in some limited
link |
01:45:13.520
domain and then radically subhuman in all other domains. Same with a chess...
link |
01:45:21.440
A simple computer that can multiply really large numbers, right? So it's going to have this one
link |
01:45:25.360
spike of superintelligence and then a kind of zero level of capability across all other cognitive
link |
01:45:31.520
fields. Yeah, I don't necessarily think the generalness... I mean, I'm not so attached with it,
link |
01:45:36.640
but it's a gray area and it's a feeling, but to me, AlphaZero is somehow much more intelligent,
link |
01:45:47.440
much, much more intelligent than Deep Blue. And to say which domain... Well, you could say, well,
link |
01:45:53.440
these are both just board games. They're both just able to play board games. Who cares if they're
link |
01:45:57.440
going to do better or not? But there's something about the learning and the self play that makes
link |
01:46:03.200
it cross over into that land of intelligence that doesn't necessarily need to be general.
link |
01:46:09.520
And in the same way, Google is much closer to Deep Blue currently, in terms of its search engine, than it
link |
01:46:15.600
is to sort of AlphaZero. And the moment it becomes... And the moment these recommender systems
link |
01:46:21.120
really become more like AlphaZero, being able to learn a lot without
link |
01:46:27.920
being heavily constrained by human interaction, that seems like a special moment in time.
link |
01:46:34.320
I mean, certainly learning ability seems to be an important facet of general intelligence,
link |
01:46:42.800
that you can take some new domain that you haven't seen before and you weren't specifically
link |
01:46:48.160
preprogrammed for and then figure out what's going on there and eventually become really good at it.
link |
01:46:53.280
So that's something AlphaZero has much more of than Deep Blue had. And in fact, I mean, systems
link |
01:47:01.760
like AlphaZero can learn not just Go but other games, and could in fact probably beat Deep Blue in chess and so
link |
01:47:08.800
forth. So you do see this generality, and it matches the intuition. We feel it's more intelligent.
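To make the "learning by self-play" idea discussed here concrete, what follows is a minimal sketch in Python. It is not AlphaZero itself (which also combines deep networks with Monte Carlo tree search), but the same bare principle: a program that is told only the rules of a game and improves a value table purely by playing against itself. The game (single-pile Nim), the constants, and the update rule are illustrative assumptions, not anything from the conversation.

```python
# Minimal self-play sketch (illustrative, not AlphaZero): the program knows
# only the rules of single-pile Nim (take 1-3 stones, taking the last stone
# wins) and learns which positions are good purely from its own games.
import random
from collections import defaultdict

PILE, MOVES, EPS, LR, GAMES = 15, (1, 2, 3), 0.1, 0.2, 20_000

# value[p] ~ estimated chance that the player *to move* wins with p stones left
value = defaultdict(lambda: 0.5)
value[0] = 0.0  # no stones left: the player to move has already lost

def pick_move(pile):
    """Epsilon-greedy: usually pick the move that leaves the opponent worst off."""
    legal = [m for m in MOVES if m <= pile]
    if random.random() < EPS:
        return random.choice(legal)          # occasional exploration
    return max(legal, key=lambda m: 1.0 - value[pile - m])

for _ in range(GAMES):
    pile, history, player = PILE, [], 0
    while pile > 0:
        history.append((pile, player))       # record who faced which position
        pile -= pick_move(pile)
        player ^= 1
    winner = player ^ 1                      # whoever took the last stone
    for seen_pile, p in history:             # nudge estimates toward the outcome
        outcome = 1.0 if p == winner else 0.0
        value[seen_pile] += LR * (outcome - value[seen_pile])

# Positions with pile % 4 == 0 should end up with clearly lower values,
# matching the known theory of this game, without it ever being programmed in.
print({p: round(value[p], 2) for p in range(1, PILE + 1)})
```

The point of the sketch is the one made above: unlike Deep Blue, whose chess knowledge was largely hand-engineered, a self-play learner can be dropped into a game it was not specifically preprogrammed for and figure out what is going on by itself.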
link |
01:47:15.120
And it also has more of this general purpose learning ability. And if we get systems that
link |
01:47:20.800
have even more general purpose learning ability, it might also trigger an even stronger intuition
link |
01:47:24.640
that they're actually starting to get smart. So if you were to pick a future, what do you think a
link |
01:47:30.480
utopia looks like with AGI systems? Is it the Neuralink brain-computer interface world where
link |
01:47:39.520
we're kind of really closely interlinked with AI systems? Is it possibly a world where AGI systems replace
link |
01:47:46.640
us completely while maintaining the values and the consciousness? Is it something like a
link |
01:47:54.320
completely invisible fabric, like you mentioned, a society where it just aids with a lot of the stuff that
link |
01:48:00.000
we do, like curing diseases and so on? What is utopia if you get to pick? Yeah, I mean, it is a good
link |
01:48:05.360
question and a deep and difficult one. I'm quite interested in it. I don't have all the answers
link |
01:48:12.960
yet, and might never have them. But I think there are some different observations one can make. One is, if
link |
01:48:20.160
this scenario actually did come to pass, it would open up this vast space of possible modes of
link |
01:48:28.640
being. On the one hand, material and resource constraints would just be relaxed dramatically.
link |
01:48:36.080
So there would be a lot to go around, a big pie, let's say. Also, it would enable us to do things,
link |
01:48:47.360
including to ourselves. It would just open up this much larger design space
link |
01:48:53.200
and option space than we have ever had access to in human history. So I think two things follow
link |
01:49:00.560
from that. One is that we probably would need to make a fairly fundamental rethink of what
link |
01:49:07.600
ultimately we value. Think things through more from first principles. The context would be so
link |
01:49:12.640
different from the familiar that we couldn't just take what we've always been doing and then add,
link |
01:49:17.920
oh, well, we have this cleaning robot that cleans the dishes in the sink and a few other small
link |
01:49:23.760
things. I think we would have to go back to first principles. So even from the individual
link |
01:49:28.560
level, go back to the first principles of what is the meaning of life, what is happiness,
link |
01:49:32.800
what is fulfillment. And then also connected to this large space of resources is that it
link |
01:49:41.920
would be possible and I think something we should aim for is to do well by the lights
link |
01:49:51.360
of more than one value system. That is, we wouldn't have to choose only one value criterion and say
link |
01:50:05.760
we're going to do something that scores really high on the metric of, say, hedonism. And then it's
link |
01:50:13.920
like a zero by other criteria, like kind of wireheaded brains in a vat. And it's like a lot
link |
01:50:21.760
of pleasure, that's good, but then no beauty, no achievement, nothing like that. I think to some
link |
01:50:28.800
significant, though not unlimited, extent, it would be possible to do
link |
01:50:34.640
very well by many criteria. Like maybe you could get like 98% of the best according to several
link |
01:50:42.400
criteria at the same time, given this great expansion of the option space. And so we'd have
link |
01:50:50.960
competing value systems, competing criteria, sort of forever, just like our Democrat versus
link |
01:50:59.440
Republican divide. There always seem to be multiple parties, which is useful for our progress in society,
link |
01:51:05.520
even though it might seem dysfunctional in the moment. But having multiple value systems
link |
01:51:11.040
seems to be beneficial for, I guess, a balance of power. So that's not exactly what I have in mind,
link |
01:51:18.960
although, well, maybe in an indirect way it is. But if you had the chance to do
link |
01:51:27.440
something that scored well on several different metrics, our first instinct should be to do that
link |
01:51:34.480
rather than immediately leap to the question: which of these value systems are we going to screw
link |
01:51:40.000
over? Let's first try to do very well by all of them. Then it might be that you can't get 100%
link |
01:51:46.560
of all, and you would have to then have the hard conversation about which one will only get 97%.
link |
01:51:51.600
There you go. There's my cynicism that all of existence is always a tradeoff. But you say,
link |
01:51:57.120
maybe it's not such a bad tradeoff. Let's first try that. Well, this would be a distinctive context
link |
01:52:02.160
in which at least some of the constraints would be removed. There would probably still be tradeoffs in
link |
01:52:09.920
the end. It's just that we should first make sure we at least take advantage of this abundance.
link |
01:52:17.120
In terms of thinking about this, one should think in this kind of frame of mind of generosity
link |
01:52:25.760
and inclusiveness to different value systems and see how far one can get there first.
link |
01:52:34.560
And I think one could do something that would be very good according to many different criteria.
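To make this concrete, here is a minimal sketch in Python, with entirely made-up criteria and a made-up option space, of the idea that a greatly expanded option space often contains single options that score well on several value criteria at once, so that the first move need not be deciding which value system to sacrifice. The three criteria, the random "design space", and the maximin selection rule are my illustrative assumptions, not anything Bostrom specifies.

```python
# Illustrative sketch: as the option space grows, a single option that does
# well on *all* of several made-up value criteria becomes easier to find.
import random

random.seed(0)

def make_options(n_options, n_dims=8):
    """Each option is a random point in an 8-dimensional 'design space'."""
    return [[random.random() for _ in range(n_dims)] for _ in range(n_options)]

# Three toy value criteria, each caring about different dimensions of an option.
criteria = {
    "pleasure":    lambda x: sum(x[0:3]) / 3,
    "achievement": lambda x: sum(x[3:6]) / 3,
    "beauty":      lambda x: sum(x[6:8]) / 2,
}

def scores(option):
    return {name: round(f(option), 2) for name, f in criteria.items()}

def best_by_single(options, name):
    """Optimize one criterion only; the others land wherever they happen to."""
    return max(options, key=lambda o: criteria[name](o))

def best_by_worst_case(options):
    """Maximin: pick the option whose *lowest* criterion score is highest."""
    return max(options, key=lambda o: min(scores(o).values()))

for n in (10, 100, 10_000):
    opts = make_options(n)
    print(f"{n:>6} options | pleasure-only pick: {scores(best_by_single(opts, 'pleasure'))}"
          f" | balanced pick: {scores(best_by_worst_case(opts))}")
```

Run it and the balanced pick's lowest criterion score rises as the option space grows, which is the "bigger pie" intuition in miniature; only after that kind of search fails would the hard conversation about which value gets only 97% need to start.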
link |
01:52:41.600
We talked about AGI fundamentally transforming the value system of our existence, the meaning of
link |
01:52:50.800
life. But today, what do you think is the meaning of life? Let me ask you the silliest or perhaps the
link |
01:52:57.600
biggest question: what's the meaning of life? What's the meaning of existence? What gives
link |
01:53:03.200
your life fulfillment, purpose, happiness, meaning? Yeah, I think these are, I guess, a bunch of
link |
01:53:12.720
different related questions in there that one can ask. Happiness, meaning,
link |
01:53:18.000
yeah. You could imagine somebody getting a lot of happiness from something that
link |
01:53:22.640
they didn't think was meaningful. Mindlessly watching reruns of some television series,
link |
01:53:30.560
eating junk food. For some people, maybe, that gives pleasure, but they wouldn't think
link |
01:53:34.560
it had a lot of meaning. Whereas, conversely, something that might be quite loaded with meaning
link |
01:53:39.200
might not be very fun always. Some difficult achievement that really helps a lot of people
link |
01:53:44.240
maybe requires self-sacrifice and hard work. And so these things can, I think, come apart,
link |
01:53:53.360
which is something to bear in mind also when you're thinking about these utopia questions. If
link |
01:54:03.520
you want to actually start to do some constructive thinking about that, you might have to isolate
link |
01:54:09.120
and distinguish these different kinds of things that might be valuable in different ways.
link |
01:54:16.160
Make sure you can sort of clearly perceive each one of them, and then you can think about how
link |
01:54:20.160
you can combine them. And just as you said, hopefully come up with a way to maximize all of
link |
01:54:26.720
them together. Yeah, or at least get, I mean, maximize or get a very high score on a wide
link |
01:54:33.040
range of them, even if not literally all. You can always come up with values that are
link |
01:54:37.120
exactly opposed to one another, right? But I think for many values, they're only opposed
link |
01:54:44.320
if you place them within a certain dimensionality of your space. Like, there are shapes
link |
01:54:49.680
that you can't untangle in a given dimensionality. But if you start adding dimensions, then it
link |
01:54:56.880
might in many cases just be that they are easy to pull apart. So we'll see how
link |
01:55:03.040
much space there is for that. But I think that there could be a lot in this context of radical
link |
01:55:08.240
abundance, if ever we get to that. I don't think there's a better way to end it, Nick. You've
link |
01:55:15.520
influenced a huge number of people to work on what could very well be the most important
link |
01:55:21.440
problems of our time. So it's a huge honor. Thank you so much for talking to me. Well,
link |
01:55:24.400
thank you for coming by, Lex. That was fun. Thank you. Thanks for listening to this conversation
link |
01:55:29.440
with Nick Bostrom. And thank you to our presenting sponsor, Cash App. Please consider supporting
link |
01:55:34.720
the podcast by downloading Cash App and using code LexPodcast. If you enjoy this podcast,
link |
01:55:41.120
subscribe on YouTube, review it with five stars on Apple Podcast, support it on Patreon, or simply
link |
01:55:46.240
connect with me on Twitter at Lex Fridman. And now let me leave you with some words from Nick
link |
01:55:53.200
Bostrom. Our approach to existential risks cannot be one of trial and error. There's no opportunity
link |
01:56:01.280
to learn from errors. The reactive approach, see what happens, limit damages and learn from experience
link |
01:56:08.400
is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate
link |
01:56:15.040
new types of threats and a willingness to take decisive preventative action and to bear the
link |
01:56:20.320
costs, moral and economic, of such actions. Thank you for listening, and hope to see you next time.