
Joscha Bach: Artificial Consciousness and the Nature of Reality | Lex Fridman Podcast #101



link |
00:00:00.000
The following is a conversation with Joscha Bach, VP of Research at the AI Foundation
link |
00:00:05.520
with a history of research positions at MIT and Harvard.
link |
00:00:09.440
Joscha is one of the most unique and brilliant people in the artificial intelligence community,
link |
00:00:15.680
exploring the workings of the human mind, intelligence, consciousness, life on earth,
link |
00:00:21.360
and the possibly simulated fabric of our universe.
link |
00:00:25.920
I can see myself talking to Joscha many times in the future.
link |
00:00:28.640
A quick summary of the ads. Two sponsors, ExpressVPN and Cash App. Please consider supporting
link |
00:00:36.400
the podcast by signing up at expressvpn.com slash lexpod and downloading Cash App and using code
link |
00:00:43.760
lexpodcast. This is the artificial intelligence podcast. If you enjoy it, subscribe on YouTube,
link |
00:00:51.120
review it with five stars on Apple Podcasts, support it on Patreon, or simply connect with me
link |
00:00:56.000
on Twitter at Lex Fridman. Since this comes up more often than I ever would have imagined,
link |
00:01:03.440
I challenge you to try to figure out how to spell my last name without using the letter E.
link |
00:01:09.600
And it'll probably be the correct way. As usual, I'll do a few minutes of ads now
link |
00:01:14.720
and never any ads in the middle that can break the flow of the conversation.
link |
00:01:18.960
This show is sponsored by ExpressVPN. Get it at expressvpn.com slash lexpod.
link |
00:01:25.120
To support this podcast and to get an extra three months free on a one year package.
link |
00:01:30.720
I've been using ExpressVPN for many years. I love it. I think ExpressVPN is the best VPN out there.
link |
00:01:38.480
They told me to say it, but I think it actually happens to be true. It doesn't log your data,
link |
00:01:44.480
it's crazy fast, and it's easy to use literally just one big power on button.
link |
00:01:49.360
Again, for obvious reasons, it's really important that they don't log your data.
link |
00:01:54.960
It works on Linux and everywhere else too. Shout out to my favorite flavor of Linux,
link |
00:02:00.000
Ubuntu MATE 20.04. Once again, get it at expressvpn.com slash lexpod,
link |
00:02:07.200
to support this podcast and to get an extra three months free on a one year package.
link |
00:02:14.240
This show is presented by Cash App, the number one finance app in the App Store.
link |
00:02:18.800
When you get it, use code lexpodcast. Cash App lets you send money to friends, buy Bitcoin,
link |
00:02:25.360
and invest in the stock market with as little as $1. Since Cash App does fractional share trading,
link |
00:02:31.200
let me mention that the order execution algorithm that works behind the scenes to create the abstraction
link |
00:02:36.960
of the fractional orders is an algorithmic marvel. So big props to the Cash App engineers
link |
00:02:42.480
for taking a step up to the next layer of abstraction over the stock market,
link |
00:02:46.240
making trading more accessible for new investors and diversification much easier.
link |
00:02:51.760
So again, if you get Cash App from the App Store, Google Play and use the code lexpodcast,
link |
00:02:57.760
you get $10 and Cash App will also donate $10 to FIRST, an organization that is helping advance
link |
00:03:04.240
robotics and STEM education for young people around the world. And now here's my conversation
link |
00:03:11.440
with Joscha Bach. As you've said, you grew up in a forest in East Germany, as we were talking
link |
00:03:18.560
about off mic, to parents who are artists. And now, I think, at least to me, you've become one of the
link |
00:03:25.200
most unique thinkers in the AI world. So can we try to reverse engineer your mind a little bit?
link |
00:03:31.520
What were the key philosophers, scientists, ideas, maybe even movies, or just realizations
link |
00:03:38.320
that had an impact on you when you were growing up, that kind of led to the trajectory or were the key
link |
00:03:44.560
sort of crossroads in the trajectory of your intellectual development?
link |
00:03:49.600
My father came from a long tradition of architects, a distant branch of the Bach family.
link |
00:03:57.200
And so basically, he was technically a nerd. And nerds need to interface in society with
link |
00:04:04.400
nonstandard ways. Sometimes I define a nerd as somebody who thinks that the purpose of
link |
00:04:09.920
communication is to submit your ideas to peer review. And normal people understand that the
link |
00:04:16.560
primary purpose of communication is to negotiate alignment. And these purposes tend to conflict,
link |
00:04:22.640
which means that nerds have to learn how to interact with society at large.
link |
00:04:27.120
Who is the reviewer in the nerd's view of communication?
link |
00:04:32.080
Everybody who you consider to be a peer. So whichever suitable individual is around,
link |
00:04:37.680
well, you would try to make him or her the gift of information.
link |
00:04:43.200
Okay. So, by the way, my research may have misinformed me. So was he an
link |
00:04:50.080
architect or artist? So he did study architecture. But basically, my grandfather made the wrong
link |
00:04:57.200
decision. He married an aristocrat and was drawn into the war. And he came back after 15 years.
link |
00:05:05.520
So basically, my father was not parented by a nerd, but by somebody who tried to tell him what to do
link |
00:05:13.680
and expected him to do what he was told. And he was unable to. He was unable to do things if
link |
00:05:20.240
he's not intrinsically motivated. So in some sense, my grandmother broke her son. And her son
link |
00:05:25.440
responded, when he became an architect, by becoming an artist. So he built Hundertwasser architecture.
link |
00:05:32.080
He built houses without right angles. He built lots of things that didn't fit in the more brutalist
link |
00:05:37.280
traditions of Eastern Germany. And so he bought an old watermill, moved out to the countryside,
link |
00:05:43.280
and did only what he wanted to do, which was art. East Germany was perfect for bohème,
link |
00:05:48.400
because you had complete material safety. Food was heavily subsidized, healthcare was free.
link |
00:05:53.760
You didn't have to worry about rent or pensions or anything. So this is the socialist, communist side
link |
00:05:58.240
of the country. Yes. And the other thing is it was almost impossible not to be in political
link |
00:06:02.240
disagreement with your government, which is very productive for artists. So everything that you
link |
00:06:06.160
do is intrinsically meaningful, because it will always touch on the deeper currents of society,
link |
00:06:11.680
of culture, being in conflict with it, in tension with it. And you will always have to
link |
00:06:15.840
define yourself with respect to this. So what impact did your father, this outside of the box
link |
00:06:22.720
thinker, against the government, against the world artist, have on you? He was actually not a thinker. He was
link |
00:06:28.480
somebody who only got self aware to the degree that he needed to make himself functional.
link |
00:06:33.440
This was also in the late 1960s, and he was in some sense a hippie. So he became
link |
00:06:41.120
a one person cult. He lived out there in his kingdom. He built big sculpture gardens and
link |
00:06:46.160
started many avenues of art and so on, and convinced a woman to live with him. She was
link |
00:06:54.080
also an architect, and she adored him and decided to share her life with him. And I basically grew
link |
00:06:59.680
up in a big cave full of books. I was almost feral. And I was bored out there. It was very, very
link |
00:07:06.400
beautiful, very quiet, and quite lonely. So I started to read. And by the time I came to school,
link |
00:07:12.480
I had read everything up to fourth grade and then some. And there was not a real way for me to relate
link |
00:07:17.760
to the outside world. And I couldn't quite put my finger on why. And today I know it was because I
link |
00:07:23.360
was a nerd, obviously. And I was the only nerd around. So there were no other kids like me.
link |
00:07:28.960
And there was nobody interested in physics or computing or mathematics and so on.
link |
00:07:34.560
And this village school that I went to was basically a nice school. Kids were nice to me. I was not
link |
00:07:40.240
beaten up, but I also didn't make many friends or build deep relationships. Those only happened
link |
00:07:45.120
starting from ninth grade when I went into a school for mathematics and physics.
link |
00:07:49.280
Do you remember any key books from this moment? I basically read everything. So I went to the
link |
00:07:54.320
library and I worked my way through the children's and young adult sections. And then I read a lot
link |
00:07:59.920
of science fiction. For instance, Stanisław Lem, basically the great author of cybernetics,
link |
00:08:05.440
influenced me back then. I didn't see him as a big influence because everything that he wrote
link |
00:08:09.600
seemed to be so natural to me. And it's only later that I contrasted it with what other people wrote.
link |
00:08:15.920
Another thing that was very influential on me were the classical philosophers
link |
00:08:19.680
and also the literature of romanticism. So German poetry and art, Droste-Hülshoff and
link |
00:08:26.880
Heine up to Hesse and so on. I love Hesse. So at which point do the classical philosophers end?
link |
00:08:34.880
At this point, or in the 21st century, what's the latest classical philosopher? Does this stretch
link |
00:08:40.400
through even as far as Nietzsche or is this, are we talking about Plato and Aristotle?
link |
00:08:45.840
I think that Nietzsche is the classical equivalent of a shitposter.
link |
00:08:52.720
He's very smart and easy to read. But he's not so much trolling others. He's trolling himself
link |
00:08:57.920
because he was at odds with the world. Largely, his romantic relationships didn't work out.
link |
00:09:02.480
He got angry and he basically became a nihilist. Isn't that a beautiful way to be as an intellectual
link |
00:09:09.760
is to constantly be trolling yourself, to be in that conflict, in that tension?
link |
00:09:14.960
I think it's a lack of self awareness. At some point, you have to understand the
link |
00:09:18.800
comedy of your own situation. If you take yourself seriously and you are not functional,
link |
00:09:23.840
it ends in tragedy, as it did for Nietzsche. So you think he took himself too seriously in that
link |
00:09:28.880
tension? And you find the same thing in Hesse and so on. The Steppenwolf syndrome is classic
link |
00:09:34.080
adolescence, where you basically feel misunderstood by the world and you don't understand that all
link |
00:09:38.720
the misunderstandings are the result of your own lack of self awareness. Because you think that you
link |
00:09:44.000
are a prototypical human and the others around you should behave the same way as you expect them
link |
00:09:48.880
based on your innate instincts and it doesn't work out. And you become a transcendentalist
link |
00:09:53.520
to deal with that. So it's very, very understandable, and I have great sympathy for this, to the degree
link |
00:09:59.040
that I can have sympathy for my own intellectual history. But you have to grow out of it.
link |
00:10:04.640
So as an intellectual, a life well lived, a journey well traveled is one where you don't
link |
00:10:09.600
take yourself seriously. No, I think that you are neither serious nor not serious about yourself,
link |
00:10:16.000
because you need to become unimportant as a subject. That is, if you are a philosopher,
link |
00:10:22.240
belief is not a verb. You don't do this for the audience, you don't do it for yourself,
link |
00:10:27.360
you have to submit to the things that are possibly true. And you have to follow wherever
link |
00:10:32.240
your inquiry leads. But it's not about you, it has nothing to do with you.
link |
00:10:36.080
So do you think, then, people like Ayn Rand believed in sort of an idea of objective
link |
00:10:42.080
truth? So what's your sense? Philosophically, if you remove yourself from
link |
00:10:47.920
the picture, you think it's possible to actually discover ideas that are true? Or are we just
link |
00:10:52.560
in a mesh of relative concepts that are neither true nor false? It's just a giant mess.
link |
00:10:57.840
You cannot define objective truth without understanding the nature of truth in the first
link |
00:11:02.720
place. So what does the brain mean by saying that it discovers something as truth? So for instance,
link |
00:11:08.480
a model can be predictive or not predictive. Then there can be a sense in which a mathematical
link |
00:11:14.560
statement can be true because it's defined as true under certain conditions. So it's basically
link |
00:11:19.600
a particular state that a variable can have in a simple game. And then you can have a correspondence
link |
00:11:27.120
between systems and talk about truth, which is again a type of model correspondence. And there
link |
00:11:31.440
also seems to be a particular kind of ground truth. So for instance, you're confronted with the
link |
00:11:36.080
enormity of something existing at all, right? That's stunning when you realize something exists
link |
00:11:42.160
rather than nothing. And this seems to be true, right? There's an absolute truth in the fact
link |
00:11:47.840
that something seems to be happening. Yeah, that to me is a showstopper. I could just think about
link |
00:11:52.880
that idea and be amazed by that idea for the rest of my life and not go any farther, because I don't
link |
00:11:58.240
even know the answer to that. Why does anything exist at all? Well, the easiest answer is existence
link |
00:12:02.800
is the default, right? So this is the lowest number of bits that you would need to encode this.
link |
00:12:06.800
Whose answer? The simplest answer to this is that existence is the default.
link |
00:12:11.280
What about non-existence? I mean, it seems non-existence might not be a meaningful notion in
link |
00:12:16.400
this sense. So in some sense, if everything that can exist exists for something to exist,
link |
00:12:21.280
it probably needs to be implementable. The only things that can be implemented are finite
link |
00:12:26.000
automata. So maybe the whole of existence is the superposition of all finite automata. And
link |
00:12:30.560
we are in some region of the fractal that has the properties that it can contain us.
link |
00:12:34.000
What does it mean to be a superposition of finite automata? A superposition of, like, all
link |
00:12:41.600
possible rules? Imagine that every automaton is basically an operator that acts on some
link |
00:12:46.800
substrate. And as a result, you get emergent patterns. What's the substrate?
link |
00:12:52.640
I have no way to know. So it's basically some substrate. It's something that can store information.
link |
00:12:58.400
Something that can store information over time. Something that can hold state.
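This picture of automata acting on a state-holding substrate can be made concrete with a toy sketch (my illustration, not Bach's formalism): an elementary cellular automaton, where the substrate is a row of bits and the automaton is a fixed local rule applied as an operator. Even a rule as simple as Rule 110 produces complex emergent patterns and is known to be Turing-complete.

```python
def step(cells, rule=110):
    """Apply one update of an elementary cellular automaton.

    Each cell's next state is looked up from the 3-bit neighborhood
    (left, self, right), using the bits of `rule` as the lookup table.
    """
    n = len(cells)
    out = []
    for i in range(n):
        # Encode the wrapped-around neighborhood as a number 0..7.
        idx = (cells[(i - 1) % n] << 2) | (cells[i] << 1) | cells[(i + 1) % n]
        out.append((rule >> idx) & 1)
    return out

# A single live cell as the initial state of the substrate.
cells = [0] * 31
cells[15] = 1
for _ in range(10):
    print("".join(".#"[c] for c in cells))
    cells = step(cells)
```

Running it prints the familiar triangular Rule 110 patterns growing out of one cell: structure emerging from a trivial operator acting on a trivial substrate.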
link |
00:13:01.680
Still, it doesn't make sense to me why that exists at all. I could just sit there with a
link |
00:13:06.800
beer or a vodka and just enjoy the fact, pondering the why.
link |
00:13:11.520
It may not have a why. That might be the wrong direction to ask into this. So there could be no
link |
00:13:16.960
relation in the why direction without asking for a purpose or for a cause. It doesn't mean that
link |
00:13:22.800
everything has to have a purpose or a cause, right? So we mentioned some philosophers
link |
00:13:28.480
earlier. Just taking a brief step back into that. Okay, so we asked ourselves, when did classical
link |
00:13:33.760
philosophy end? I think for Germany, it largely ended with the first revolution. That's basically
link |
00:13:39.040
when we... Which one was that? This was when we ended the monarchy and started a democracy. And at
link |
00:13:45.120
this point, we basically came up with a new form of government that didn't have a good sense of
link |
00:13:50.640
this new organism that society wanted to be and in a way it decapitated the universities.
link |
00:13:56.080
So the universities went on through modernism like a headless chicken. At the same time,
link |
00:14:00.640
democracy failed in Germany and we got fascism as a result. And it burned down things in a similar
link |
00:14:06.240
way as Stalinism burned down intellectual traditions in Russia. And Germany, both
link |
00:14:11.040
Germanies have not recovered from this. East Germany had this vulgar dialectical materialism,
link |
00:14:16.320
and Western Germany didn't get much more edgy than Habermas. So in some sense,
link |
00:14:21.040
both countries lost their intellectual traditions and killing off and driving out the Jews didn't
link |
00:14:25.120
help. Yeah, so that was the end. That was the end of really rigorous, what you would say is,
link |
00:14:32.560
classical philosophy. There's also this thing that, in some sense, the low-hanging fruits in
link |
00:14:38.800
philosophy were mostly reaped. And the last big thing that we discovered was the constructivist
link |
00:14:46.560
turn in mathematics. So to understand that the parts of mathematics that work are computation.
link |
00:14:52.160
That was a very significant discovery in the first half of the 20th century. And it hasn't
link |
00:14:57.760
fully permeated philosophy and even physics yet. Physicists checked out the code libraries
link |
00:15:03.120
for mathematics before constructivism became universal. What's constructivism? What are you
link |
00:15:09.280
referring to Gödel's incompleteness theorem there, those kinds of ideas?
link |
00:15:11.840
So basically, Gödel himself, I think, didn't get it yet. Hilbert could see it. Hilbert saw that,
link |
00:15:17.360
for instance, Cantor's set-theoretical experiments in mathematics led into contradictions.
link |
00:15:21.920
And he noticed that with the current semantics, we cannot build a computer in mathematics that
link |
00:15:27.920
runs mathematics without crashing. And Gödel could prove this. And so what Gödel could show is using
link |
00:15:34.080
classical mathematical semantics, you run into contradictions. And because Gödel strongly
link |
00:15:38.800
believed in these semantics and more than in what he could observe and so on, he was shocked.
link |
00:15:44.240
It basically shook his world to the core, because in some sense, he felt that the world has to be
link |
00:15:48.560
implemented in classical mathematics. And for Turing, it wasn't quite so bad. I think that
link |
00:15:54.080
Turing could see that the solution is to understand that mathematics was computation all
link |
00:15:59.200
along. Which means that, for instance, pi in classical mathematics is a value. It's also a
link |
00:16:05.920
function, but it's the same thing. And in computation, a function is only a value when
link |
00:16:11.040
you can compute it. And if you cannot compute the last digit of pi, you only have a function.
link |
00:16:15.440
You can plug this function into your local sun. You let it run until the sun burns out.
link |
00:16:19.520
This is it. This is the last digit of pi you will know. But it also means there can be no process
link |
00:16:24.240
in the physical universe or in any physically realized computer that depends on having known
link |
00:16:29.120
the last digit of pi. Which means there are parts of physics that are defined in such a way that
link |
00:16:34.880
they cannot strictly be true, because assuming that this could be true leads into contradictions.
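The pi point above can be made concrete with a short sketch (my illustration, using Gibbons' unbounded spigot algorithm, not anything Bach specifies): pi as a generator, a process that keeps yielding digits for as long as you let it run, never a finished value.

```python
from itertools import islice

def pi_digits():
    """Stream the decimal digits of pi one at a time
    (Gibbons' unbounded spigot algorithm)."""
    q, r, t, k, n, l = 1, 0, 1, 1, 3, 3
    while True:
        if 4 * q + r - t < n * t:
            # The next digit is settled; emit it and rescale the state.
            yield n
            q, r, n = 10 * q, 10 * (r - n * t), 10 * (3 * q + r) // t - 10 * n
        else:
            # Otherwise, fold in one more term of the series.
            q, r, t, k, n, l = (q * k, (2 * q + r) * l, t * l, k + 1,
                                (q * (7 * k + 2) + r * l) // (t * l), l + 2)

print("".join(str(d) for d in islice(pi_digits(), 10)))  # 3141592653
```

The generator never terminates, which is exactly the constructivist point: you can ask for as many digits as your substrate (and sun) allows, but no physical process can depend on having the "last" one.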
link |
00:16:39.120
So I think putting computation at the center of the world view is actually the right way to think
link |
00:16:44.960
about it. Yes. And Wittgenstein could see it. And Wittgenstein basically preempted the logicist
link |
00:16:50.560
program of AI that Minsky started later, like 30 years later. Turing was actually a pupil of
link |
00:16:56.320
Wittgenstein. Really? So I didn't know there's any connection between Turing and Wittgenstein.
link |
00:17:00.480
And Wittgenstein even canceled some classes when Turing was not present, because he thought it was
link |
00:17:03.840
not worth spending the time on with the others. If you read the Tractatus, it's a very beautiful
link |
00:17:09.680
book, basically one thought on 75 pages. It's very atypical for philosophy, because it doesn't have
link |
00:17:16.400
arguments in it, and it doesn't have references in it. It's just one thought that is not intending
link |
00:17:21.600
to convince anybody. He says it's mostly for people that have the same insight as him,
link |
00:17:26.320
just spelled out. And this insight is that there is a way in which mathematics and philosophy ought to
link |
00:17:32.400
meet. Mathematics tries to understand the domain of all languages by starting with those that
link |
00:17:37.600
are so formalizable that you can prove all the properties of the statements that you make.
link |
00:17:42.560
But the price that you pay is that your language is very, very simple. So it's very hard to say
link |
00:17:47.040
something meaningful in mathematics. And it looks complicated to people, but it's far less
link |
00:17:51.760
complicated than what our brain is casually doing all the time while it makes sense of reality.
link |
00:17:56.720
And philosophy is coming from the top. So it's mostly starting from natural languages with vaguely
link |
00:18:02.400
defined concepts. And the hope is that mathematics and philosophy can meet at some point. And
link |
00:18:07.680
Wittgenstein was trying to make them meet. And he already understood that, for instance,
link |
00:18:11.120
you could express everything with the NAND calculus, that you could reduce the entire logic
link |
00:18:15.600
to NAND gates, as we do in our modern computers. So in some sense, he already understood Turing
link |
00:18:20.320
universality before Turing spelled it out. I think when he wrote the Tractatus, he didn't
link |
00:18:24.880
understand yet that this idea was so important and significant. And I suspect that, when Turing
link |
00:18:29.520
wrote it out, nobody cared that much. Turing was not that famous. When he lived, it was mostly his
link |
00:18:35.200
work in decrypting the German codes that made him famous, or gave him some notoriety. But
link |
00:18:41.680
the status that he has in computer science right now is, yeah, something that I think
link |
00:18:46.080
he only acquired later. That's kind of interesting. Do you think of computation and computer science?
link |
00:18:51.120
And you kind of represent that to me. Maybe that's the modern day: you, in a sense, are the
link |
00:18:56.400
new philosopher, sort of the computer scientist who dares to ask the bigger questions that
link |
00:19:03.840
philosophy originally started with. The new philosopher? Certainly not me, I think. I'm
link |
00:19:09.280
mostly still this child that grows up in a very beautiful valley and looks at the world from
link |
00:19:14.240
the outside and tries to understand what's going on. And my teachers tell me things and they largely
link |
00:19:18.640
don't make sense. So I have to make my own models. I have to discover the foundations of what the
link |
00:19:23.440
others are saying. I have to try to fix them to be charitable. I try to understand what they must
link |
00:19:27.840
have thought originally, or what their teachers or their teachers' teachers must have thought, until
link |
00:19:32.640
everything got lost in translation and how to make sense of the reality that we are in.
link |
00:19:36.800
And whenever I have an original idea, I'm usually late to the party by say 400 years. And the only
link |
00:19:42.080
thing that's good is that the parties get smaller and smaller the older I get and the more I explore.
link |
00:19:47.600
The parties get smaller and more exclusive.
link |
00:19:49.840
And more exclusive. So it seems like one of the key qualities of your upbringing was that you
link |
00:19:56.160
were not tethered, whether it's because of your parents or in general, maybe you're something
link |
00:20:02.320
within your mind, some genetic material, they were not tethered to the ideas of the general
link |
00:20:08.640
populace, which is actually a unique property where kind of the education system and whatever,
link |
00:20:15.680
not education system, just existing in this world forces certain sets of ideas onto you.
link |
00:20:21.120
Can you disentangle that? Why are you not so tethered? Even in your work today,
link |
00:20:28.480
you seem to not care about, perhaps, a best paper at NeurIPS, right? Being tethered to particular
link |
00:20:38.400
things that, today, in this year, people seem to value as things you put on your CV and
link |
00:20:44.960
resume. You're a little bit more outside of that world, outside of the world of ideas that people
link |
00:20:50.160
are especially focused on, the benchmarks of today. Can you disentangle that? Because
link |
00:20:56.240
I think that's inspiring. And if there were more people like that, we might be able to solve some
link |
00:21:00.720
of the bigger problems that sort of AI dreams to solve. And there's a big danger in this,
link |
00:21:07.360
because in a way you are expected to marry into an intellectual tradition and root yourself
link |
00:21:13.520
into a particular school. If everybody comes up with their own paradigms, the whole thing is not
link |
00:21:18.560
cumulative as an enterprise, right? So in some sense, you need a healthy balance,
link |
00:21:22.960
you need paradigmatic thinkers, and you need people that work within given paradigms. Basically,
link |
00:21:27.760
scientists today define themselves largely by methods. And it's almost a disease that we think
link |
00:21:33.040
of a scientist as somebody who was convinced by their guidance counselor that they should join
link |
00:21:39.280
a particular discipline, and then they find a good mentor to learn the right methods. And then
link |
00:21:43.040
they are lucky enough and privileged enough to join the right team. And then their name will
link |
00:21:48.160
show up on influential papers. But we also see that there are diminishing returns with this approach.
link |
00:21:54.160
And when our field computer science and AI started, most of the people that joined this field
link |
00:22:00.240
had interesting opinions. And today's thinkers in AI either don't have interesting opinions at all,
link |
00:22:06.240
or these opinions are inconsequential for what they're actually doing. Because what they're
link |
00:22:09.840
doing is they apply the state of the art methods with a small epsilon. And this is often a good
link |
00:22:16.880
idea if you think that this is the best way to make progress. And for me, it's first of all,
link |
00:22:22.400
very boring. If somebody else can do it, why should I do it? If the current methods of machine
link |
00:22:28.160
learning lead to strong AI, why should I be doing it? Well, I could just wait until they're done and wait
link |
00:22:33.760
for them on the beach, or read interesting books, or write some and have fun. But if you don't
link |
00:22:41.120
think that we are currently doing the right thing, if we are missing some perspectives,
link |
00:22:46.320
then it's required to think outside of the box. It's also required to understand the boxes.
link |
00:22:53.280
But it's necessary to understand what worked and what didn't work and for what reasons.
link |
00:22:59.200
So you have to be willing to ask new questions and design new methods whenever you want to answer
link |
00:23:04.240
them. And you have to be willing to dismiss the existing methods if you think that they're not
link |
00:23:09.280
going to yield the right answers. It's very bad career advice to do that. So maybe to briefly
link |
00:23:16.960
stay for one more time in the early days, when would you say for you was the dream
link |
00:23:23.440
before we dive into the discussions that we just almost started? When was the dream to understand
link |
00:23:29.680
or maybe to create human level intelligence born for you? I think that you can see AI largely today
link |
00:23:37.840
as advanced information processing. If you would change the acronym of AI into that, most people
link |
00:23:45.680
in the field would be happy. It would not change anything about what they're doing. We're automating
link |
00:23:49.840
statistics. And many of the statistical models are more advanced than what statisticians had in
link |
00:23:56.720
the past. And it's pretty good work. It's very productive. And the other aspect of AI is a philosophical
link |
00:24:03.040
project. And this philosophical project is very risky. And very few people work on it. And it's
link |
00:24:08.640
not clear if it succeeds. So first of all, you keep throwing a lot of really interesting ideas.
link |
00:24:15.360
And I have to pick which ones we go with. But first of all, you use the term information
link |
00:24:21.920
processing, just information processing, as if it's the mere muck of existence,
link |
00:24:30.960
as if it's the epitome that the entirety of the universe might be information processing,
link |
00:24:37.200
that consciousness, that intelligence might be information processing. So maybe you can
link |
00:24:40.400
comment on whether advanced information processing is a limiting frame of ideas. And then the other
link |
00:24:49.520
one is, what do you mean by the philosophical project? So I suspect that general intelligence is
link |
00:24:55.440
the result of trying to solve general problems. So intelligence, I think, is the ability to model.
link |
00:25:02.160
It's not necessarily goal directed rationality or something. Many intelligent people are bad at this.
link |
00:25:07.280
But it's the ability to be presented with a number of patterns and see a structure in those
link |
00:25:13.440
patterns and be able to predict the next set of patterns to make sense of things. And some problems
link |
00:25:20.560
are very general. Usually intelligence serves control. So you make these models for a particular
link |
00:25:25.120
purpose of interacting as an agent with the world and getting certain results. But the intelligence
link |
00:25:30.880
itself is in this sense instrumental to something. But by itself, it's just the ability to make models.
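The idea of intelligence as the ability to see structure in patterns and predict the next ones can be caricatured in a few lines (a toy sketch of my own, not Bach's model): count which symbol tends to follow each context, then predict the most frequent continuation.

```python
from collections import Counter, defaultdict

def train(sequence, order=2):
    """Count, for each context of `order` symbols, which symbol follows it."""
    model = defaultdict(Counter)
    for i in range(len(sequence) - order):
        model[tuple(sequence[i:i + order])][sequence[i + order]] += 1
    return model

def predict(model, context):
    """Return the most frequent continuation of `context`, or None if unseen."""
    counts = model.get(tuple(context))
    return counts.most_common(1)[0][0] if counts else None

model = train("abcabcabcab")
print(predict(model, "bc"))  # a
print(predict(model, "xy"))  # None (no structure seen for this context)
```

This is modeling in its most minimal, mechanical form; the point in the conversation is that general intelligence arises when the patterns to be modeled include the modeler itself.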
link |
00:25:35.520
And some of the problems are so general that the system that makes them needs to understand
link |
00:25:40.000
what itself is and how it relates to the environment. So as a child, for instance,
link |
00:25:44.800
you notice you do certain things despite you perceiving yourself as wanting different things.
link |
00:25:50.320
So you become aware of your own psychology. You become aware of the fact that you have
link |
00:25:55.600
complex structure in yourself and you need to model yourself, to reverse engineer yourself,
link |
00:25:59.760
to be able to predict how you will react to certain situations and how you deal with yourself
link |
00:26:04.560
in relationship to your environment. And this process, this project, of reverse engineering
link |
00:26:09.760
yourself, your relationship to reality, and the nature of a universe that can contain you,
link |
00:26:14.160
if you go all the way, this is basically the project of AI or you could say the project of AI
link |
00:26:19.280
is a very important component in it. The true Turing test in a way is you ask a system,
link |
00:26:24.640
what is intelligence? If that system is able to explain what it is, how it works,
link |
00:26:30.640
then you should assign it the property of being intelligent in this general sense. So the test
link |
00:26:36.640
that Turing was administering in a way, I don't think that he couldn't see it, but he didn't
link |
00:26:41.200
express it yet in the original 1950 paper, is that he was trying to find out whether he was
link |
00:26:47.760
generally intelligent. Because in order to take this test, the rub is, of course,
link |
00:26:51.280
you need to be able to understand what the system is saying. And we don't yet know if we
link |
00:26:55.120
can build an AI. We don't yet know if we are generally intelligent. Basically, you win the
link |
00:26:59.840
Turing test by building an AI. Yes. So in a sense, hidden within the Turing test is a kind of recursive
link |
00:27:06.720
test. Yes, it's a test on us. The Turing test is basically a test of the conjecture whether people
link |
00:27:12.480
are intelligent enough to understand themselves. Okay, but you also mentioned a little bit of a
link |
00:27:18.080
self awareness. And then the project of AI, do you think this kind of emergent self awareness
link |
00:27:23.200
is one of the fundamental aspects of intelligence? So as opposed to goal oriented, as you said,
link |
00:27:30.640
kind of puzzle solving, is coming to grips with the idea that you're an agent in the world.
link |
00:27:39.440
I find that many highly intelligent people are not very self aware, right? So self awareness
link |
00:27:44.560
and intelligence are not the same thing. And you can also be self aware if you have good
link |
00:27:48.880
priors, without being especially intelligent. So you don't need to be very good
link |
00:27:54.000
at solving puzzles if the system that you are already implements the solution.
link |
00:27:58.800
But I do find it interesting, you kind of mentioned children, right? Is that the fundamental project
link |
00:28:06.320
of AI is to create the learning system that's able to exist in the world. So you kind of drew a
link |
00:28:14.000
difference between self awareness and intelligence. And yet you said that self awareness seems
link |
00:28:21.600
to be important for children. So I call this ability to make sense of the world and your own
link |
00:28:27.440
place in it, to make you able to understand what you're doing in this world, sentience.
link |
00:28:32.240
And I would distinguish sentience from intelligence because sentience is
link |
00:28:37.040
possessing certain classes of models. And intelligence is a way to get to these models
link |
00:28:41.680
if you don't already have them. I see. So can you maybe pause a bit and try to
link |
00:28:51.680
answer the question that we just said we may not be able to answer? And it might be a recursive
link |
00:28:57.120
meta question of what is intelligence? I think that intelligence is the ability to make models.
link |
00:29:03.840
So for models, I think a useful example, very popular now: neural networks
link |
00:29:09.920
form representations of large scale data sets. They form models of those data sets.
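A minimal concrete version of "forming a model of a data set": one parameter fit by gradient descent. The data and learning rate below are invented for illustration; real neural networks do the same thing with millions of parameters.

```python
# Fit the one-weight model y = w*x to a tiny data set by gradient descent
# on squared error. The learned w is the "model" of the data's structure.
data = [(1.0, 3.1), (2.0, 5.9), (3.0, 9.2)]  # roughly y = 3x, with noise

w = 0.0
for _ in range(200):
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= 0.05 * grad

print(round(w, 2))  # 3.04: the model has captured the underlying structure
```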
link |
00:29:20.080
When you say models and look at today's neural networks, what is the difference in how you're
link |
00:29:24.960
thinking about what is intelligent in saying that intelligence is the process of making models?
link |
00:29:31.520
There are two aspects to this question. One is the representation: is the representation adequate
link |
00:29:37.040
for the domain that we want to represent. And the other one is: is the type of model that you
link |
00:29:42.880
arrive at adequate? So basically, are you modeling the correct domain? And I think in both of these
link |
00:29:50.240
cases, modern AI is lacking still. And I think that I'm not saying anything new here. I'm not
link |
00:29:55.040
criticizing the field. Most of the people that design our paradigms are aware of that. And so one
link |
00:30:02.400
aspect that we are missing is unified learning. When we learn, we at some point discover that
link |
00:30:07.600
everything that we sense is part of the same object, which means we learn it all into one model.
link |
00:30:13.200
And we call this model the universe. So the experience of the world that we are embedded in
link |
00:30:17.040
is not a secret direct wire to physical reality. Physical reality is a weird quantum graph that
link |
00:30:22.320
we can never experience or get access to. But it has these properties that it can create certain
link |
00:30:27.840
patterns at our systemic interface to the world. And we make sense of these patterns, and the relationship
link |
00:30:32.720
between the patterns that we discover is what we call the physical universe. So at some point in
link |
00:30:37.520
our development as a nervous system, we discover that everything that we relate to in the world
link |
00:30:45.680
can be mapped to a region in the same three dimensional space by and large. We now know
link |
00:30:50.800
in physics that this is not quite true. The world is not actually three dimensional,
link |
00:30:54.640
but the world that we are entangled with at the level which we are entangled with is largely
link |
00:30:58.880
a flat three dimensional space. And so this is the model that our brain is intuitively making.
link |
00:31:04.960
And this is, I think, what gave rise to this intuition of res extensa, of this material world,
link |
00:31:10.400
this material domain. It's one of the mental domains, but it's just the class of all models
link |
00:31:14.160
that relate to this environment, this three dimensional physics engine in which we are
link |
00:31:18.640
embedded. I love that. Right, let's slowly pause. So the
link |
00:31:28.240
quantum graph, I think you called it, which is the real world, which you can never get access to,
link |
00:31:34.400
there's a bunch of questions I want to sort of disentangle that, but maybe one useful one,
link |
00:31:40.560
one of your recent talks I looked at, can you just describe the basics? Can you talk about
link |
00:31:44.640
what is dualism, what is idealism, what is materialism, what is functionalism, and what connects
link |
00:31:50.000
with you most in terms of, because you just mentioned, there's a reality we don't have access to,
link |
00:31:53.920
okay? What does that even mean? And why don't we get access to it? Are we part of that reality?
link |
00:32:00.800
Why can't we access it? So the particular trajectory that mostly exists in the West
link |
00:32:06.160
is the result of our indoctrination by a cult for 2,000 years. A cult? Which one?
link |
00:32:11.360
Yes, the Catholic cult mostly. And for better or worse, it has created or defined many of
link |
00:32:18.400
the modes of interaction that we have that has created this society, but it has also in some
link |
00:32:23.120
sense scarred our rationality. And the intuition that exists, if you would translate the mythology
link |
00:32:32.400
of the Catholic Church into the modern world is that the world in which you and me interact
link |
00:32:37.680
is something like a multiplayer roleplaying adventure. And the money and the objects that
link |
00:32:42.880
we have in this world, this is all not real. Eastern philosophers would say it's Maya. It's
link |
00:32:49.280
just stuff that appears to be meaningful, and this embedding in this meaning, if you believe
link |
00:32:55.280
in it, is Samsara. It's basically the identification with the needs of the mundane,
link |
00:33:00.880
secular everyday existence. And the Catholics also introduced the notion of higher meaning,
link |
00:33:07.600
the sacred. And this existed before, but eventually the natural shape of God is the
link |
00:33:13.280
platonic form of the civilization that you're part of. It's basically the superorganism that is
link |
00:33:17.280
formed by the individuals as an intentional agent. And basically, the Catholics used a relatively
link |
00:33:24.000
crude mythology to implement software on the minds of people and get the software synchronized to
link |
00:33:29.280
make them walk in lockstep to basically get this God online and to make it efficient and effective.
link |
00:33:36.560
And I think God technically is just a self that spans multiple brains, as opposed to your self and
link |
00:33:42.480
my self, which mostly exist on just one brain, right? And so in some sense, you can construct a
link |
00:33:47.760
self functionally as a function that is implemented by brains that exists across brains. And this is
link |
00:33:53.680
a God with a small g. That's one of the things Yuval Harari kind of talks about,
link |
00:33:59.920
this is one of the nice features of our brains, it seems, that we can all download the same
link |
00:34:04.560
piece of software like God in this case and kind of share it. Yeah. So basically, you give everybody
link |
00:34:09.280
a spec and the mathematical constraints that are intrinsic to information processing make sure
link |
00:34:16.560
that given the same spec, you come up with a compatible structure. Okay. So that's, there's
link |
00:34:21.600
the space of ideas that we all share. And we think that's kind of the mind. But that's separate from
link |
00:34:27.200
the idea from Christianity, from religion, that there's a separate thing beyond the mind.
link |
00:34:35.200
There is a real world. And this real world is the world in which God exists. God is the
link |
00:34:40.160
coder of the multiplayer adventure, so to speak. And we are all players in this game.
link |
00:34:45.920
And that's dualism, you would say. But the dualism aspect is because the mental realm
link |
00:34:51.200
exists in a different implementation than a physical realm. And the mental realm is real.
link |
00:34:57.200
And a lot of people have this intuition that there is this real realm in which you and me
link |
00:35:01.440
talk and speak right now, then comes a layer of physics and abstract rules and so on. And then
link |
00:35:07.760
comes another real realm where our souls are. And our true form is in a thing that gives us
link |
00:35:12.720
phenomenal experience. And this is, of course, a very confused notion that you would get.
link |
00:35:17.120
And it's basically the result of connecting materialism and idealism in the wrong way.
link |
00:35:24.640
So, okay. I apologize, but I think it's really helpful if we just try to define terms. What
link |
00:35:31.840
is dualism? What is idealism? What is materialism for people who don't know?
link |
00:35:34.880
So, the idea of dualism in our cultural tradition is that there are two substances,
link |
00:35:39.520
a mental substance and a physical substance. And they interact by different rules. And the
link |
00:35:45.600
physical world is basically causally closed and is built on a low level causal structure. So,
link |
00:35:51.600
there's basically a bottom level that is causally closed, that's entirely mechanical
link |
00:35:56.160
and mechanical in the widest sense. So, it's computational. There's basically a physical
link |
00:36:00.240
world in which information flows around and physics describes the laws of how information
link |
00:36:05.200
flows around in this world. Would you compare it to like a computer where you have a hardware
link |
00:36:09.680
and software? The computer is a generalization of information flowing around basically,
link |
00:36:14.080
but you will discover that there is a universal principle: you can define this universal machine
link |
00:36:20.240
that is able to perform all the computations. So, all these machines have the same power.
link |
00:36:25.200
This means that you can always define a translation between them as long as they have
link |
00:36:29.120
unlimited memory to be able to perform each other's computations.
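This equivalence can be made concrete by having one machine simulate another; below, Python (one universal machine) runs a small Turing machine. The machine itself, a binary incrementer, is a standard textbook example chosen here for illustration, not something from the conversation.

```python
# Simulate a Turing machine: rules map (state, symbol) -> (write, move, next).
def run_turing(tape, rules, state="scan", blank="_"):
    cells = dict(enumerate(tape))
    pos = len(tape) - 1                    # start at the rightmost symbol
    while state != "halt":
        write, move, state = rules[(state, cells.get(pos, blank))]
        cells[pos] = write
        pos += {"L": -1, "R": 1}[move]
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1)).strip(blank)

# Binary increment: flip trailing 1s to 0 until a 0 (or blank) becomes 1.
increment = {
    ("scan", "1"): ("0", "L", "scan"),
    ("scan", "0"): ("1", "L", "halt"),
    ("scan", "_"): ("1", "L", "halt"),
}

print(run_turing("1011", increment))  # 1100 (11 + 1 = 12 in binary)
```

The point of universality is that the simulator above could itself be the program on another machine's tape, and so on down, as long as each machine has enough memory.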
link |
00:36:34.320
So, would you then say that materialism is this whole world is just the hardware and idealism
link |
00:36:39.760
is this whole world is just the software? Not quite. I think that most idealists don't have
link |
00:36:44.480
a notion of software yet because software also comes down to information processing.
link |
00:36:49.520
So, what you notice is the only thing that is real to you and me is this experiential world in
link |
00:36:54.560
which things matter, in which things have taste, in which things have color, phenomenal content,
link |
00:36:59.040
and so on. And you realize that. You are bringing up consciousness, okay.
link |
00:37:01.840
And this is distinct from the physical world, in which things have values only in an abstract
link |
00:37:07.840
sense. And you only look at cold patterns moving around. So, how does anything feel like something?
link |
00:37:15.520
And this connection between the two things is very puzzling to a lot of people,
link |
00:37:19.200
of course, too many philosophers. So, idealism starts out with the notion that mind is primary.
link |
00:37:23.360
Materialism thinks that matter is primary. And so, for the idealist, the material patterns that we
link |
00:37:30.480
see playing out are part of the dream that the mind is dreaming. And we exist in mind, on a
link |
00:37:37.280
higher plane of existence, if you want. And for the materialist, there is only this material
link |
00:37:43.920
thing and that generates some models and we are the result of these models. And in some sense,
link |
00:37:50.160
I don't think that, if you understand it properly, materialism and
link |
00:37:55.440
idealism are a dichotomy; they are two different aspects of the same thing.
link |
00:37:59.760
So, the weird thing is we don't exist in the physical world. We do exist inside of a story
link |
00:38:04.000
that the brain tells itself. Okay. Let me, my information processing, take that in.
link |
00:38:15.040
We don't exist in the physical world. We exist in the narrative.
link |
00:38:18.080
Basically, a brain cannot feel anything. A neuron cannot feel anything. They're physical
link |
00:38:22.160
things. Physical systems are unable to experience anything. But it would be very useful for the
link |
00:38:26.720
brain or for the organism to know what it would be like to be a person and to feel something.
link |
00:38:31.600
So, the brain creates a simulacrum of such a person that it uses to model the interactions of
link |
00:38:36.880
the person. It's the best model of what that brain, this organism thinks it is in relationship to
link |
00:38:41.840
its environment. So, it creates that model. It's a story, a multimedia novel that the brain is
link |
00:38:46.160
continuously writing and updating. But you also kind of said that we kind of exist
link |
00:38:51.520
in that story. What is real in any of this? So, again, these terms are, you kind of said there's
link |
00:39:05.040
a quantum graph. I mean, what is this whole thing running on then? Is this story, and is it completely,
link |
00:39:12.400
fundamentally impossible to get access to it? Because isn't the story supposed to,
link |
00:39:17.280
isn't the brain in something existing in some kind of context?
link |
00:39:24.240
So, what we can identify as computer scientists, we can engineer systems and test our theories
link |
00:39:30.480
this way, that might have the necessary and sufficient properties to produce the phenomena that we are
link |
00:39:36.560
observing, which is: there is a self in a virtual world that is generated in somebody's
link |
00:39:42.000
neocortex, that is contained in the skull of this primate here. And when I point at this, this
link |
00:39:47.760
indexicality is of course wrong. But I do create something that is likely to give rise to patterns
link |
00:39:54.640
on your retina that allow you to interpret what I'm saying. But we both know that the world that
link |
00:40:00.000
you and me are seeing is not the real physical world. What we are seeing is a virtual reality
link |
00:40:04.800
generated in your brain to explain the patterns on your retina.
link |
00:40:08.000
How close is it to the real world? That's kind of the question. When you have people like
link |
00:40:15.600
Donald Hoffman, who say that it's really far away. The thing we're seeing, you and I now,
link |
00:40:21.200
that interface we have is very far away from anything. We don't even have anything close
link |
00:40:26.640
like to the sense of what the real world is. Or is it a very surface piece of architecture?
link |
00:40:32.000
Imagine you look at the Mandelbrot fractal, right? This famous thing that Mandelbrot
link |
00:40:37.040
discovered. You see an overall shape in there, right? But if you truly understand it,
link |
00:40:43.040
you know it's two lines of code. It's basically a series that is being tested for complex
link |
00:40:50.080
numbers in the complex number plane for every point. And for those where the series is diverging,
link |
00:40:56.320
you paint this black. And where it's converging, you don't. And you get the intermediate colors
link |
00:41:04.160
by taking how far it diverges. This gives you this shape of this fractal. But imagine you live
link |
00:41:13.040
inside of this fractal and you don't have access to where you are in the fractal. Or you have not
link |
00:41:18.160
discovered the generator function even. So what you see, all I can see right now, is this spiral.
link |
00:41:23.520
And this spiral moves a little bit to the right. Is this an accurate model of reality? Yes,
link |
00:41:27.280
it is. It is an adequate description. You know that there is actually no spiral in the Mandelbrot
link |
00:41:33.360
fractal. It only appears like this to an observer that is interpreting things as a two dimensional
link |
00:41:38.640
space and then defines certain regularities in there at a certain scale that it currently
link |
00:41:43.440
observes. Because if you zoom in, the spiral might disappear and turn out to be something
link |
00:41:46.880
different at a different resolution, right? So at this level, you have the spiral and then you
link |
00:41:50.880
discover the spiral moves to the right and at some point it disappears. So you have a singularity.
link |
00:41:55.280
At this point, your model is no longer valid. You cannot predict what happens beyond the singularity.
link |
00:42:00.320
But you can observe again and you will see it hit another spiral and at this point it
link |
00:42:04.640
disappears. So maybe now you have a second order law. And if you make 30 layers of these laws,
link |
00:42:09.280
then you have a description of the world that is similar to the one that we come up with when we
link |
00:42:13.280
describe the reality around us. It's reasonably predictive. It does not cut to the core of it.
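The "two lines of code" behind the Mandelbrot set are the escape-time iteration z → z² + c described above. Here is a sketch that renders it as ASCII art; the resolution, viewport, and palette are arbitrary choices of this example:

```python
# For each point c in the complex plane, iterate z -> z*z + c.
# Diverging points get an "intermediate color" by speed of divergence;
# points whose series never escapes are painted black (here: '@').

def escape_time(c, max_iter=40):
    z = 0j
    for n in range(max_iter):
        z = z * z + c
        if abs(z) > 2:          # guaranteed to diverge once |z| > 2
            return n
    return max_iter             # treated as inside the set

chars = " .:-=+*#%"
for row in range(21):
    y = 1.2 - row * 0.12
    line = ""
    for col in range(64):
        c = complex(-2.1 + col * 0.05, y)
        n = escape_time(c)
        line += "@" if n == 40 else chars[n % len(chars)]
    print(line)
```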
link |
00:42:18.400
It doesn't explain how it's being generated, how it actually works. But it's relatively good to
link |
00:42:23.520
explain the universe that we are entangled with. But you don't think the tools of computer science
link |
00:42:27.040
or the tools of physics could step outside, see the whole drawing and get at the basic
link |
00:42:33.120
mechanism of how the spiral is generated. Imagine you would find yourself embedded
link |
00:42:39.280
into a Mandelbrot fractal and you try to figure out how it works. And you somehow have a Turing
link |
00:42:43.280
machine with enough memory to think. And as a result, you come to this idea, it must be some
link |
00:42:49.920
kind of automaton. And maybe you just enumerate all the possible automata until you get to the
link |
00:42:54.320
one that produces your reality. So you can identify necessary and sufficient conditions.
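In miniature, that enumeration can be played out with the 256 elementary cellular automata: generate an "observed reality" with one rule, then search all rules for those consistent with the observation. The choice of rule 110 and the starting row are arbitrary inventions of this toy:

```python
# One update step of an elementary cellular automaton (wrap-around edges).
# The rule number's bit at index (left*4 + center*2 + right) gives each
# cell's next value -- Wolfram's standard numbering convention.
def step(cells, rule):
    n = len(cells)
    return tuple((rule >> (cells[(i - 1) % n] * 4 + cells[i] * 2
                           + cells[(i + 1) % n])) & 1 for i in range(n))

start = (0, 0, 0, 1, 0, 0, 0)
observed = step(step(start, 110), 110)   # "reality", secretly made by rule 110

# Enumerate all possible automata; keep those that produce the observation.
candidates = [r for r in range(256) if step(step(start, r), r) == observed]
print(110 in candidates, len(candidates))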
link |
00:42:59.280
For instance, we discover that mathematics itself is the domain of all languages. And then we see
link |
00:43:04.800
that most of the domains of mathematics that we have discovered are in some sense describing
link |
00:43:09.520
the same fractals. This is what category theory is obsessed about, that you can map these different
link |
00:43:13.600
domains to each other. So they're not that many fractals. And some of these have interesting
link |
00:43:18.720
structure and symmetry breaks. And so you can discover what region of this global fractal
link |
00:43:25.600
you might be embedded in from first principles. But the only way you can get there is from
link |
00:43:29.680
first principles. So basically, your understanding of the universe has to start with automata and
link |
00:43:34.320
then number theory and then spaces and so on. Yeah, I think like Stephen Wolfram still dreams
link |
00:43:39.280
that he'll be able to arrive at the fundamental rules of the cellular automata or the generalization
link |
00:43:45.760
of which is behind our universe. You've said on this topic, you said in a recent conversation
link |
00:43:54.160
that quote, some people think that a simulation can't be conscious and only a physical system can.
link |
00:44:00.560
But they got it completely backward. A physical system cannot be conscious. Only a simulation
link |
00:44:05.840
can be conscious. Consciousness is a simulated property, the simulated self. Just like you said,
link |
00:44:11.920
the mind is kind of the we call it story narrative. There's a simulation or so our mind is essentially
link |
00:44:18.480
a simulation. Usually, I try to use the terminology so that the mind is basically a principles
link |
00:44:25.520
that produce the simulation. It's the software that is implemented by your brain. And the mind
link |
00:44:30.480
is creating both the universe that we are in and the self, the idea of a person that is on the other
link |
00:44:37.200
side of attention and is embedded in this world. Why is that important, that idea of a self?
link |
00:44:43.120
Why is that an important feature in the simulation? It's basically a result of the purpose that the
link |
00:44:49.760
mind has. It's a tool for modeling. We are not actually monkeys. We are side effects of the
link |
00:44:54.560
regulation needs of monkeys. And what the monkey has to regulate is the relationship of an organism
link |
00:45:02.320
to an outside world that is in large part also consisting of other organisms. And as a result,
link |
00:45:09.280
it basically has regulation targets that it tries to get to. These regulation targets start with
link |
00:45:13.920
priors. They're basically like unconditional reflexes that we are more or less born with.
link |
00:45:18.160
And then we can reverse engineer them to make them more consistent. And then we get more detailed
link |
00:45:22.640
models about how the world works and how to interact with it. And so these priors that you
link |
00:45:27.600
commit to are largely target values that our needs should approach, set points. And this deviation
link |
00:45:33.600
to the set point creates some urge, some tension. And we find ourselves living inside of feedback
link |
00:45:39.840
loops, right? Consciousness emerges over dimensions of disagreements with the universe. Things where
link |
00:45:44.960
you care, things are not the way they should be, but you need to regulate. And so in some sense,
link |
00:45:50.320
the sense itself is the result of all the identifications that you're having. And identification
link |
00:45:54.960
is a regulation target that you're committing to. It's a dimension that you care about. What you
link |
00:45:59.600
think is important. And this is also what locks you in. If you let go of these commitments of
link |
00:46:05.280
these identifications, you get free. There's nothing that you have to do anymore. And if you
link |
00:46:10.480
let go of all of them, you're completely free and you can enter Nirvana because you're done.
link |
00:46:15.280
And actually, this is a good time to pause and say thank you to a sort of friend of mine,
link |
00:46:20.480
Gustav Sorastrum, who introduced me to your work. I want to give him a shout out. He's a
link |
00:46:25.600
brilliant guy. And I think the AI community is actually quite amazing. And Gustav is a good
link |
00:46:30.000
representative of that. You are as well. So I'm glad, first of all, I'm glad the internet exists,
link |
00:46:34.800
the YouTube exists, where I can watch your talks and then get to your book and study your writing
link |
00:46:41.200
and think about, you know, that's amazing. Okay, but you've kind of described instead of this
link |
00:46:47.600
emergent phenomenon of consciousness from the simulation. So what about the hard problem of
link |
00:46:53.200
consciousness? Can you just linger on it? Like, why does it still feel like I understand you're
link |
00:47:03.120
kind of the self is an important part of the simulation. But why does the simulation feel
link |
00:47:09.040
like something? So if you look at a book by say, George R. R. Martin, where the characters have
link |
00:47:14.960
plausible psychology, and they stand on a hill, because they want to conquer the city below the
link |
00:47:19.920
hill, and they're done in it, and they look at the color of the sky, and they are apprehensive,
link |
00:47:24.080
and feel empowered and all these things. Why do they have these emotions? It's because it's
link |
00:47:27.840
written into the story, right? And it's written to the story because it's an adequate model of
link |
00:47:31.920
the person that predicts what they're going to do next. And the same thing has happened too far.
link |
00:47:37.360
So it's basically a story that our brain is writing. It's not written in words. It's written
link |
00:47:41.680
in perceptual content, basically multimedia content. And it's a model of what the person
link |
00:47:48.160
would feel if it existed. So it's a virtual person. And you and me happen to be this virtual person.
link |
00:47:54.800
So this virtual person gets access to the language center and talks about the sky being blue.
link |
00:48:00.400
And this is us. But hold on a second. Do I exist in your simulation?
link |
00:48:05.520
You do exist in an almost similar way as me. So there are internal states that are less accessible
link |
00:48:14.640
for me that you have, and so on. And my model might not be completely adequate.
link |
00:48:20.640
There are also things that I might perceive about you that you don't perceive. But in some sense,
link |
00:48:25.120
both you and me are some puppets, two puppets that enact this play in my mind. And I identify with
link |
00:48:31.520
one of them because I can control one of the puppet directly. And with the other one, I can
link |
00:48:36.960
create things in between. So for instance, we can go on an interaction that even leads to a
link |
00:48:41.760
coupling to a feedback loop. So we can sync things together in a certain way or feel things together.
link |
00:48:47.120
But this coupling itself is not a physical phenomenon. It's entirely a software phenomenon.
link |
00:48:51.680
It's the result of two different implementations interacting with each other.
link |
00:48:54.960
So that's interesting. So are you suggesting, like the way you think about it, is the entirety
link |
00:49:02.080
of existence, the simulation, and where kind of each mind is a little sub simulation?
link |
00:49:08.640
That like, why don't you, why doesn't your mind have access to my mind's full state?
link |
00:49:18.560
Like, for the same reason that my mind doesn't have access to its own full state.
link |
00:49:22.880
There is no trick involved. So basically, when I know something about myself, it's because I
link |
00:49:30.800
made a model. So what part of your brain is tasked with modeling what other parts of your brain are
link |
00:49:35.760
doing? Yes. But there seems to be an incredible consistency about this world in the physical
link |
00:49:41.600
sense that there's repeatable experiments and so on. How does that fit into our silly
link |
00:49:47.600
the center of the ape's simulation of the world? So why is everything so repeatable?
link |
00:49:53.120
And not everything. There's a lot of fundamental physics experiments that are repeatable
link |
00:49:59.600
for a long time, all over the place, and so on. The laws of physics. How does that fit in?
link |
00:50:05.040
It seems that the parts of the world that are not deterministic are not long lived.
link |
00:50:10.400
So if you build a system, any kind of automaton, so if you build simulations of something,
link |
00:50:17.120
you'll notice that the phenomena that endure are those that give rise to stable dynamics.
link |
00:50:23.520
So basically, if you see anything that is complex in the world, it's the result of
link |
00:50:27.200
usually of some control of some feedback that keeps it stable around certain attractors.
link |
00:50:31.920
And the things that are not stable that don't give rise to certain harmonic patterns and so on,
link |
00:50:36.640
they tend to get weeded out over time. So if we are in a region of the universe that
link |
00:50:42.720
sustains complexity, which is required to implement minds like ours, this is going to be a region of
link |
00:50:49.120
the universe that is very tightly controlled and controllable. So it's going to have lots of
link |
00:50:54.240
interesting symmetries and also symmetry breaks that allow to the creation of structure.
link |
00:51:00.480
But they exist where? So there's such an interesting idea that our mind is simulation
link |
00:51:04.960
that's constructing the narrative. My question is just to try to understand how that fits with the
link |
00:51:14.240
entirety of the universe. You're saying that there's a region of this universe that allows
link |
00:51:18.400
enough complexity to create creatures like us. But what's the connection between the brain,
link |
00:51:25.200
the mind, and the broader universe? Which comes first, which is more fundamental?
link |
00:51:30.400
Is the mind the starting point, the universe is emergent? Is the universe the starting point,
link |
00:51:35.520
the minds are emergent? I think quite clearly the latter. That's at least a much easier explanation
link |
00:51:41.600
because it allows us to make causal models. And I don't see any way to construct an inverse
link |
00:51:46.720
causality. So what happens when you die to your mind's simulation? My implementation ceases. So
link |
00:51:53.760
basically the thing that implements myself will no longer be present, which means if I am not
link |
00:51:58.560
implemented on the minds of other people, the thing that I identify with. The weird thing is I
link |
00:52:04.160
don't actually have an identity beyond the identity that I construct. If I was the Dalai Lama,
link |
00:52:10.400
he identifies as a form of government. So basically the Dalai Lama gets reborn, not because he's
link |
00:52:16.000
confused, but because he is not identifying as a human being. He runs on a human being. He's
link |
00:52:23.360
basically a governmental software that is instantiated in every new generation and you. So his
link |
00:52:28.960
advice is to pick someone who does this in the next generation. So if you identify with this,
link |
00:52:34.000
you are no longer a human and you don't die in the sense that what dies is only the body of the
link |
00:52:38.960
human that you run on. To kill the Dalai Lama, you would have to kill his tradition. And if we
link |
00:52:45.120
look at ourselves, we realize that we are to a small part like this, most of us. So for instance,
link |
00:52:49.520
if you have children, you realize something lives on in them. Or if you spark an idea in the world,
link |
00:52:54.880
something lives on. Or if you identify with a society around you, because you are in part that,
link |
00:53:00.000
you're not just this human being. Yeah. So in a sense, you are kind of like a Dalai Lama.
link |
00:53:04.880
In a sense that you, Joshua Bach, is just a collection of ideas. So you have this operating
link |
00:53:11.440
system on which a bunch of ideas live and interact. And then once you die, some of them
link |
00:53:16.640
jump off the ship. I would put it the other way: identity is a software state. It's a construction.
link |
00:53:22.640
It's not physically real. Identity is not a physical concept. It's basically a representation
link |
00:53:28.720
of different objects on the same world line. But identity lives and dies, right? Are you attached to it?
link |
00:53:36.960
What's the fundamental thing? Is it the ideas that come together to form identity? Or is each
link |
00:53:43.440
individual identity actually a fundamental thing? It's a representation that you can get
link |
00:53:47.440
agency over if you care. So basically, you can choose what you identify with if you want to.
link |
00:53:52.320
No, but it just seems, if the mind is not real, that birth and death are not a crucial part
link |
00:54:03.920
of it. Well, maybe I'm silly. Maybe I'm attached to this whole biological organism, but it seems
link |
00:54:15.360
that being a physical object in this world is an important aspect of birth
link |
00:54:22.480
and death. It feels like it has to be physical to die. It feels like simulations don't have to die.
link |
00:54:28.640
The physics that we experience is not the real physics. There is no color and sound in the real
link |
00:54:33.360
world. Color and sound are types of representations that you get if you want to model reality with
link |
00:54:39.280
oscillators. So colors and sound in some sense have octaves. And it's because they are represented
link |
00:54:44.480
probably with oscillators. So that's why colors form a circle of hues. And colors have harmonics,
link |
00:54:50.960
sounds have harmonics as a result of synchronizing oscillators in the brain. So the world that we
link |
00:54:56.720
subjectively interact with is fundamentally the result of the representation mechanisms in our
link |
00:55:02.160
brain. They are mathematically to some degree universal. There are certain regularities that
link |
00:55:06.640
you can discover in the patterns and not others. But the patterns that we get, this is not the real
link |
00:55:11.440
world. The world that we interact with is always made of too many parts to count. So when you look
link |
00:55:16.960
at this table and so on, it's consisting of so many molecules and atoms that you cannot count
link |
00:55:22.160
them. So you only look at the aggregate dynamics, at limit dynamics. If you had almost infinitely
link |
00:55:27.840
many particles, what would be the dynamics of the table? And this is roughly what you get.
link |
00:55:32.880
So the geometry that we are interacting with is the result of discovering those operators that
link |
00:55:38.320
work in the limit that you get by building an infinite series that converges. For those parts
link |
00:55:43.120
where it converges, it's geometry. For those parts where it doesn't converge, it's chaos.
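The earlier remark that colors form a circle of hues can be made concrete with Python's standard colorsys module. This is only a toy illustration of the circular structure of hue, not a model of the brain oscillators being discussed:

```python
import colorsys

# Hue is an angle: walking a full turn (1.0) around the hue circle
# brings you back to the same color. Saturation and value held at 1.0.
def hue_to_rgb(h):
    # hue taken modulo 1.0, making the hue axis a circle rather than a line
    return colorsys.hsv_to_rgb(h % 1.0, 1.0, 1.0)

start = hue_to_rgb(0.0)    # red
wrapped = hue_to_rgb(1.0)  # one full turn around the circle: the same red
```

The wraparound is the whole point: unlike brightness, hue has no endpoint, which is what "circle of hues" refers to.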
link |
00:55:47.280
Right. And then so all of that is filtered through the consciousness that's emergent in our
link |
00:55:53.920
narrative. So the consciousness gives it color, gives it feeling, gives it flavor.
link |
00:55:59.040
So I think the feeling, flavor and so on is given by the relationship that a feature has
link |
00:56:05.120
to all the other features. It's basically a giant relational graph that is our subjective universe.
link |
00:56:10.800
The color is given by those aspects of the representation or this experiential
link |
00:56:15.920
color where you care about, where you have identifications, where something means something,
link |
00:56:20.640
where you are inside of a feedback loop, and the dimensions of caring are basically
link |
00:56:25.120
dimensions of this motivational system that we emerge over.
link |
00:56:28.560
The meaning, the relations, the graph, can you elaborate on that a little bit?
link |
00:56:33.840
Like where does the, maybe we can even step back and ask the question of what is consciousness
link |
00:56:39.840
to be more systematic. How do you think about consciousness?
link |
00:56:46.160
I think that consciousness is largely a model of the contents of your attention.
link |
00:56:49.760
It's a mechanism that has evolved for a certain type of learning.
link |
00:56:54.000
At the moment, our machine learning systems largely work by building chains of weighted
link |
00:57:00.240
sums of real numbers with some nonlinearity. And you learn by piping an error signal through
link |
00:57:08.160
these different chained layers and adjusting the weights in these weighted sums.
link |
00:57:14.320
And you can approximate most polynomials with this if you have enough training data.
link |
00:57:19.680
But the price is you need to change a lot of these weights. Basically, the error is piped
link |
00:57:25.360
backwards into the system until it accumulates at certain junctures in the network.
link |
00:57:29.440
And everything else evens out statistically. And only at these junctures, this is where
link |
00:57:33.760
you had the actual error in the network, you make the change there.
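The learning procedure described here, an error signal piped backwards through chained weighted sums until it accumulates at the junctures where a change is needed, can be sketched in plain Python. This is a minimal illustration of backpropagation on a toy network, not anything the speakers wrote:

```python
import math
import random

random.seed(0)

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

# Toy network: chains of weighted sums with a nonlinearity.
# 2 inputs -> 2 hidden units -> 1 output, trained on XOR.
data = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 0)]
w1 = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]  # hidden weights
b1 = [0.0, 0.0]
w2 = [random.uniform(-1, 1) for _ in range(2)]                      # output weights
b2 = 0.0
lr = 0.5

def forward(x):
    h = [sigmoid(w1[j][0] * x[0] + w1[j][1] * x[1] + b1[j]) for j in range(2)]
    y = sigmoid(w2[0] * h[0] + w2[1] * h[1] + b2)
    return h, y

def total_loss():
    return sum((forward(x)[1] - t) ** 2 for x, t in data)

loss_before = total_loss()
for _ in range(4000):
    for x, t in data:
        h, y = forward(x)
        dy = (y - t) * y * (1 - y)                               # error at the output
        dh = [dy * w2[j] * h[j] * (1 - h[j]) for j in range(2)]  # error piped backwards
        for j in range(2):                                       # adjust the weighted sums
            w2[j] -= lr * dy * h[j]
            b1[j] -= lr * dh[j]
            w1[j][0] -= lr * dh[j] * x[0]
            w1[j][1] -= lr * dh[j] * x[1]
        b2 -= lr * dy
loss_after = total_loss()
```

Even this four-example toy needs thousands of passes before the weights settle.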
link |
00:57:36.400
This is a very slow process. And our brains don't have enough time for that because we
link |
00:57:40.880
don't get old enough to play Go the way that our machines learn to play Go.
link |
00:57:44.720
So instead, what we do is an attention based learning. We pinpoint the probable region
link |
00:57:49.200
in the network where we can make an improvement. And then we store this binding state together
link |
00:57:56.320
with the expected outcome in a protocol. And this gives us the ability to make indexed memories
link |
00:58:00.640
for the purpose of learning to revisit these commitments later.
link |
00:58:04.480
This requires a memory of the contents of our attention. Another aspect is when I construct
link |
00:58:10.400
my reality and make mistakes. So I see things that turn out to be reflections or shadows
link |
00:58:15.600
and so on, which means I have to be able to point out which features of my perception
link |
00:58:20.080
gave rise to the present construction of reality. So the system needs to pay attention
link |
00:58:25.920
to the features that are currently in its focus. And it also needs to pay attention to
link |
00:58:31.920
whether it pays attention itself, in part because the attentional system gets trained
link |
00:58:35.280
with the same mechanism, so it's reflexive, but also in part because your attention lapses
link |
00:58:39.680
if you don't pay attention to the attention itself. So: is the thing that I'm currently
link |
00:58:44.160
seeing just a dream that my brain has spun off into some kind of daydream, or am I still
link |
00:58:49.760
paying attention to my percept? So you have to periodically go back and see whether you're
link |
00:58:54.160
still paying attention. And if you have this loop and you make it tight enough between the
link |
00:58:58.160
system becoming aware of the contents of its attention and the fact that it's paying
link |
00:59:02.400
attention itself and makes that the object of its attention, I think this is the loop over which
link |
00:59:06.960
we wake up. So there's this attentional mechanism that's somehow self referential that's fundamental
link |
00:59:14.080
to what consciousness is. So just to ask you a question, I don't know how much you're familiar
link |
00:59:20.400
with the recent breakthroughs in natural language processing. They use attentional mechanisms,
link |
00:59:24.720
they use something called transformers to learn patterns in sentences by allowing the network
link |
00:59:35.040
to focus its attention on particular parts of the sentence at each individual step. So they parametrize
link |
00:59:41.440
and make learnable the dynamics of a sentence by having a little window into the sentence.
link |
00:59:49.120
Do you think that's like a little step towards that eventually will take us to the
link |
00:59:56.640
attentional mechanisms from which consciousness can emerge?
link |
01:00:00.240
Not quite. I think it models only one aspect of attention. In the early days of automated
link |
01:00:06.560
language translation, there was an example that I found particularly funny, where somebody
link |
01:00:11.200
tried to translate a text from English into German, and it was: a bat broke the window.
link |
01:00:16.080
And the translation in German was
link |
01:00:20.320
eine Fledermaus zerbrach das Fenster mit einem Baseballschläger. So translated back:
link |
01:00:27.040
a bat, the flying mammal, broke the window with a baseball bat. And it seemed to be the most
link |
01:00:34.400
plausible to this program because it somehow maximized the probability of translating
link |
01:00:39.760
the concept bat into German in the same sentence. And this is a mistake that the
link |
01:00:44.400
transformer model is not doing because it's tracking identity. And the attentional mechanism
link |
01:00:49.200
in the transformer model is basically putting its finger on individual concepts and making sure
link |
01:00:53.920
that these concepts pop up later in the text. And it basically tracks the individuals through
link |
01:01:00.320
the text. And this is why the system can learn things that other systems couldn't before it,
link |
01:01:05.520
which makes it, for instance, possible to write a text where it talks about the scientist,
link |
01:01:09.520
then the scientist has a name and has a pronoun, and it gets a consistent story about that thing.
link |
01:01:15.200
What it does not do is fully integrate this. So the meaning falls apart at some point.
link |
01:01:19.520
It loses track of this context. It does not yet understand that everything that it says has to
link |
01:01:24.400
refer to the same universe. And this is where this thing falls apart. But the attention in a
link |
01:01:30.240
transformer model does not go beyond tracking identity. And tracking identity is an important
link |
01:01:34.800
part of attention. But it's a different, very specific attentional mechanism. And it's not
link |
01:01:39.920
the one that gives rise to the type of consciousness that we have.
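The "finger on individual concepts" picture can be sketched as scaled dot-product attention, the core operation of the transformer under discussion. The numbers below are hand-picked toy values, not a trained model, and the query/key/value names are the standard transformer terms:

```python
import math

def softmax(xs):
    m = max(xs)
    exps = [math.exp(x - m) for x in xs]
    s = sum(exps)
    return [e / s for e in exps]

def attention(queries, keys, values):
    # Scaled dot-product attention: each query scores every key, and the
    # softmax weights say where that position "puts its finger".
    d = len(queries[0])
    outputs, all_weights = [], []
    for q in queries:
        scores = [sum(qi * ki for qi, ki in zip(q, k)) / math.sqrt(d) for k in keys]
        w = softmax(scores)
        all_weights.append(w)
        outputs.append([sum(wj * v[i] for wj, v in zip(w, values))
                        for i in range(len(values[0]))])
    return outputs, all_weights

# Three toy "tokens". Token 2's query points at token 0's key, the way a
# pronoun's query might point back at the entity it refers to.
keys    = [[1.0, 0.0], [0.0, 1.0], [0.5, 0.5]]
queries = [[0.0, 4.0], [4.0, 0.0], [4.0, 0.0]]
values  = [[1.0, 0.0], [0.0, 1.0], [0.0, 0.0]]

outputs, weights = attention(queries, keys, values)
```

Here weights[2] concentrates on token 0, which is the identity-tracking behavior described above: the attention keeps pointing back at the same individual as the text unfolds.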
link |
01:01:42.480
Okay, just to linger on it, what do you mean by identity in the context of language?
link |
01:01:47.200
So when you talk about language, we have different words that can refer to the same concept.
link |
01:01:52.480
Got it. And in the sense that...
link |
01:01:53.520
It's a space of concepts.
link |
01:01:54.960
Yes. And it can also be in a nominal sense or in an indexical sense, that you say
link |
01:02:01.360
where this word does not only refer to this class of objects, but it refers to a definite
link |
01:02:06.880
object, to some kind of agent that weaves their way through the story and is referred to in
link |
01:02:13.360
different ways in the language. So the language is basically a projection from a conceptual
link |
01:02:19.440
representation from a scene that is evolving into a discrete string of symbols. And what
link |
01:02:25.520
the transformer is able to do, it learns aspects of this projection mechanism that other models
link |
01:02:31.280
couldn't learn. So have you ever seen an artificial intelligence or any kind of construction idea
link |
01:02:37.040
that, unlike neural networks, or perhaps within neural networks, is able to form
link |
01:02:43.120
something where the space of concepts continues to be integrated? So what you're describing,
link |
01:02:49.760
building a knowledge base, building this consistent larger and larger sets of ideas that would then
link |
01:02:56.640
allow for a deeper understanding? Wittgenstein thought that we can build everything from language,
link |
01:03:02.720
from basically a logical grammatical construct. And I think to some degree,
link |
01:03:07.440
this was also what Minsky believed. So that's why he focused so much on common sense reasoning and
link |
01:03:12.560
so on. And a project that was inspired by him was Cyc. That was basically...
link |
01:03:18.880
That's still going on.
link |
01:03:19.600
Yes. Of course, ideas don't die. Only people die.
link |
01:03:23.680
Okay. That's true. And Cyc is a productive project. It's just probably not one that is
link |
01:03:30.640
going to converge to general intelligence. The thing that Wittgenstein couldn't solve,
link |
01:03:35.520
and he looked at this in his book at the end of his life, Philosophical Investigations,
link |
01:03:40.160
was the notion of images. So images play an important role in the Tractatus. The Tractatus
link |
01:03:45.040
is an attempt to basically turn philosophy into a logical programming language, to design a
link |
01:03:49.040
logical language in which you can do actual philosophy that's rich enough for doing this.
link |
01:03:53.760
And the difficulty was to deal with perceptual content. And eventually, I think he decided
link |
01:04:00.000
that he was not able to solve it. And I think this preempted the failure of the logicist program
link |
01:04:05.840
in AI. And the solution, as we see it today, is we need more general function approximation.
link |
01:04:10.800
There are functions, geometric functions, that we learn to approximate that cannot be
link |
01:04:15.360
efficiently expressed and computed in a grammatical language. We can, of course, build automata that
link |
01:04:20.240
go via number theory and so on, to learn an algebra, and then compute an approximation
link |
01:04:25.200
of this geometry. But to equate language and geometry is not an efficient way to think about it.
link |
01:04:32.560
So you kind of just said that neural networks are sort of,
link |
01:04:37.280
the approach that neural networks take is actually more general than what can be expressed
link |
01:04:43.200
through language. Yes. So what can be efficiently expressed through language at the data rates
link |
01:04:50.080
at which we process grammatical language? Okay, so you don't think language, so you
link |
01:04:55.600
agree with Wittgenstein that language is not fundamental? I agree with Wittgenstein.
link |
01:05:00.560
I just agree with the late Wittgenstein. And I also admire the beauty of the early Wittgenstein.
link |
01:05:07.600
I think that the Tractatus itself is probably the most beautiful philosophical text that was
link |
01:05:11.840
written in the 20th century. But language is not fundamental to cognition and intelligence
link |
01:05:17.600
and consciousness. So I think that language is a particular way, or the natural language that
link |
01:05:22.800
we're using is a particular level of abstraction that we use to communicate with each other.
link |
01:05:27.360
But the languages in which we express geometry are not grammatical languages in the same sense.
link |
01:05:33.760
So they work slightly different. They're more general expressions of functions.
link |
01:05:37.440
And I think the general nature of a model is you have a bunch of parameters.
link |
01:05:43.440
These have a range. These are the variances of the world. And you have relationships between
link |
01:05:48.240
them, which are constraints, which say if certain parameters have these values,
link |
01:05:52.400
then other parameters have to have the following values. And this is a very early insight in
link |
01:05:58.880
computer science. And I think one of the earliest formulations is the Boltzmann machine.
link |
01:06:03.520
And the problem with the Boltzmann machine is that while it has a measure of whether it's good,
link |
01:06:07.520
this is basically the energy on the system, the amount of tension that you have left in the
link |
01:06:11.120
constraints where the constraints don't quite match. It's very difficult to, despite having this
link |
01:06:16.640
global measure, to train it. Because as soon as you add more than trivially few elements,
link |
01:06:22.720
parameters into the system, it's very difficult to get it settled in the right architecture.
link |
01:06:26.880
And so the solution that Hinton and Sejnowski found was to use a restricted Boltzmann machine,
link |
01:06:34.880
which removes the lateral links inside the layers of the Boltzmann machine, and basically only
link |
01:06:39.520
has an input and an output layer. But this limits the expressivity of the Boltzmann machine.
link |
01:06:44.560
So now he builds a network of these small, primitive Boltzmann machines. And in some sense,
link |
01:06:48.960
you can see almost continuous development from this to the deep learning models that we're using
link |
01:06:53.440
today, even though we don't use Boltzmann machines at this point. But the idea of the Boltzmann
link |
01:06:58.400
machine is you take this model, you clamp some of the values to perception, and this forces
link |
01:07:02.640
the entire machine to go into a state that is compatible with the states that you currently
link |
01:07:06.240
perceive. And this state is your model of the world. So I think it's a very general way of
link |
01:07:12.560
thinking about models. But we have to use a different approach to make it work. And this is,
link |
01:07:18.480
we have to find different mechanisms that train the Boltzmann machine. So the mechanism that
link |
01:07:23.040
trains the Boltzmann machine and the mechanism that makes the Boltzmann machine settle into its
link |
01:07:27.360
state are distinct from the constrained architecture of the Boltzmann machine itself.
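The clamping idea can be sketched directly. This is a toy restricted Boltzmann machine with hand-picked weights, illustrative only; the energy function is the standard RBM one, not anything specific to the conversation:

```python
import math

# Toy restricted Boltzmann machine: 3 visible units, 2 hidden units,
# no links inside a layer. Weights picked by hand for illustration.
W   = [[1.0, -0.5],
       [0.2,  1.0],
       [-0.5, 0.5]]
b_v = [0.0, 0.0, 0.0]   # visible biases
b_h = [0.0, 0.0]        # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def energy(v, h):
    # Lower energy = less tension left in the constraints.
    e = -sum(b_v[i] * v[i] for i in range(len(v)))
    e -= sum(b_h[j] * h[j] for j in range(len(h)))
    e -= sum(v[i] * W[i][j] * h[j] for i in range(len(v)) for j in range(len(h)))
    return e

def hidden_given_visible(v):
    # Clamp the visible units to a percept; each hidden unit's firing
    # probability then depends only on the clamped visibles.
    return [sigmoid(b_h[j] + sum(v[i] * W[i][j] for i in range(len(v))))
            for j in range(len(b_h))]

percept = [1, 0, 1]                     # the "perception" the visibles are clamped to
p_hidden = hidden_given_visible(percept)
settled = [1 if p > 0.5 else 0 for p in p_hidden]
```

The settled hidden state is the machine's internal model compatible with the clamped percept, and it sits at lower energy than leaving the active hidden unit off.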
link |
01:07:33.840
The kind of mechanism that we want to develop, you're saying?
link |
01:07:36.400
Yes. So that's the direction in which I think our research is going to go. It's going to,
link |
01:07:41.920
for instance, what you notice in perception is our perceptual models of the world are not
link |
01:07:46.800
probabilistic, but possibilistic, which means you should be able to perceive things that are
link |
01:07:51.680
improbable but possible. Perceptual state is valid, not if it's probable, but if it's possible,
link |
01:07:58.560
if it's coherent. So if you see a tiger coming after you should be able to see this, even if
link |
01:08:03.440
it's unlikely. And the probability is necessary for convergence of the model. So given the state
link |
01:08:10.560
of possibilities that is very, very large and a set of perceptual features, how should you
link |
01:08:16.880
change the states of the model to get it to converge with your perception? But the space of
link |
01:08:25.280
ideas that are coherent with the context that you're sensing is perhaps not as large. That's
link |
01:08:32.480
perhaps pretty small. The degree of coherence that you need to achieve depends, of course,
link |
01:08:38.240
how deep your models go. For instance, politics is very simple when you know very little about
link |
01:08:44.000
game theory and human nature. So the younger you are, the more obvious it is how politics
link |
01:08:48.320
should work, right? Because you can get a coherent aesthetics from relatively few inputs. And the
link |
01:08:54.800
more layers of reality you model, the harder it gets to satisfy all the
link |
01:09:00.400
constraints. So the current neural networks are fundamentally supervised learning systems, where
link |
01:09:07.040
a feed forward neural network uses back propagation to learn. What's your intuition about what kind
link |
01:09:12.240
of mechanisms might we move towards to improve the learning procedure? I think one big aspect
link |
01:09:19.920
is going to be meta learning. And architecture search starts in this direction. In some sense,
link |
01:09:24.880
the first wave of AI, classical AI, work by identifying a problem and a possible solution
link |
01:09:29.600
and implementing the solution, right? A program that plays chess. And right now, we are in the second
link |
01:09:34.640
wave of AI. So instead of writing the algorithm that implements the solution, we write an algorithm
link |
01:09:39.600
that automatically searches for an algorithm that implements the solution. So the learning system
link |
01:09:45.360
in some sense is an algorithm that itself discovers the algorithm that solves the problem,
link |
01:09:50.160
like Go. Go is too hard to implement the solution by hand. But we can implement an
link |
01:09:54.880
algorithm that finds the solution. So now let's move to the third stage, right? The third stage
link |
01:09:59.680
would be meta learning. Find an algorithm that discovers a learning algorithm for the given
link |
01:10:04.400
domain. Our brain is probably not a learning system, but a meta learning system. This is
link |
01:10:09.440
one way of looking at what we are doing. There is another way, if you look at the way our brain
link |
01:10:14.080
is, for instance, implemented, there is no central control that tells all the neurons how to
link |
01:10:18.320
wire up. Instead, every neuron is an individual reinforcement learning agent. Every neuron is
link |
01:10:23.920
a single celled organism that is quite complicated and in some sense quite motivated to get fed.
link |
01:10:28.960
And it gets fed if it fires on average at the right time. And the right time depends on the
link |
01:10:36.160
context that the neuron exists in, which is the electrical and chemical environment that it has.
link |
01:10:42.080
So it basically has to learn a function over its environment that tells it when to fire to get fed.
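This picture of a neuron as a tiny agent learning when to fire can be caricatured in a few lines. This is an extreme simplification invented for illustration; "right_time" stands in for the neuron's electrical and chemical context, and real neurons (and real models of them) are far richer:

```python
import random

random.seed(0)

# A cartoon neuron: it fires when its input exceeds a threshold, is "fed"
# (rewarded) only when firing coincides with the right time, and adjusts
# its own threshold to get fed more often.
threshold = 0.9
lr = 0.02

def right_time(signal):
    return signal > 0.5   # the environment rewards firing on strong signals

rewards = []
for step in range(2000):
    signal = random.random()            # the neuron's current input context
    if signal > threshold:              # the neuron fires: a hypothesis
        fed = right_time(signal)
        rewards.append(1 if fed else 0)
        if not fed:
            threshold += lr             # punished: fire less eagerly
    else:
        threshold -= lr * 0.1           # unfed silence: try firing more often

recent = rewards[-100:]
accuracy = sum(recent) / len(recent)
```

The threshold settles where rewarded firing and exploratory firing balance, so most late firings coincide with the right time; that is the "getting fed" dynamic described above, in cartoon form.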
link |
01:10:48.320
Or if you see it as a reinforcement learning agent, every neuron is in some sense
link |
01:10:52.240
making a hypothesis when it sends a signal and tries to pipe a signal through the universe
link |
01:10:57.040
and tries to get positive feedback for it. And the entire thing is set up in such a way that it's
link |
01:11:02.000
robustly self organizing into a brain, which means you start out with different neuron types
link |
01:11:07.200
that have different priors on which hypothesis to test and how to get its reward. And you put
link |
01:11:12.720
them into different concentrations in a certain spatial alignment. And then you train it in a
link |
01:11:18.480
particular order. And as a result, you get a well organized brain. Yeah, so okay, so the brain is a
link |
01:11:23.840
meta learning system with a bunch of reinforcement learning agents. And what, I think you said,
link |
01:11:33.760
but just to clarify, there's no centralized government that tells you,
link |
01:11:41.200
here's a loss function, here's a loss function, like what,
link |
01:11:46.880
who says what's the objective?
link |
01:11:48.720
There are also governments which impose loss functions on different parts of the brain. So we have
link |
01:11:53.360
differential attention, some areas in your brain get especially rewarded when you look at faces.
link |
01:11:58.080
If you don't have that, you will get prosopagnosia, which basically means the inability to tell people
link |
01:12:03.120
apart by their faces. And the reason that happens is because it had an evolutionary
link |
01:12:08.960
advantage. So, like, evolution comes into play here? It's basically an extraordinary
link |
01:12:13.120
attention that we have for faces. I don't think that people with prosopagnosia have a
link |
01:12:17.360
per se defective brain; the brain just has an average attention for faces. So people with
link |
01:12:22.240
prosopagnosia don't look at faces more than they look at cups. So the level at which they resolve
link |
01:12:27.040
the geometry of faces is not higher than the one for cups. And people that don't have
link |
01:12:32.240
prosopagnosia look obsessively at faces, right? For me, it's impossible to move through a crowd
link |
01:12:38.080
without scanning the faces. And as a result, we make insanely detailed models of faces that allow
link |
01:12:43.440
us to discern mental states of people. So obviously we don't know 99% of the details
link |
01:12:49.840
of this meta learning system. That's our mind. Okay. But still we took a leap from something
link |
01:12:56.000
much dumber to that through the evolutionary process. Can you, first of all,
link |
01:13:02.480
maybe say how hard, how big of a leap is that from our brain, from our ape ancestors to
link |
01:13:10.400
multicellular organisms? And is there something we can think about, as we start to think about how to
link |
01:13:20.480
engineer intelligence, is there something we can learn from evolution? In some sense, life exists
link |
01:13:27.360
because of the market opportunity of controlled chemical reactions. We compete with dumb chemical
link |
01:13:32.960
reactions. And we win in some areas against this dumb combustion, because we can harness those
link |
01:13:38.960
entropy gradients where you need to add a little bit of energy in a specific way to harvest more
link |
01:13:43.120
energy. So we outcompete combustion? Yes, in many regions we do. We have to try very hard, because
link |
01:13:48.720
when we are in direct competition, we lose, right? Yeah. So because the combustion is going to
link |
01:13:54.320
close the entropy gradients much faster than we can run. Yeah, that's quite compelling.
link |
01:14:00.080
Yeah. So basically we do this because every cell has a Turing machine built into it.
link |
01:14:04.080
It's like literally a read write head on a tape. And so everything that's more complicated than a
link |
01:14:11.440
molecule that is just a vortex around attractors, needs a Turing machine for its regulation.
link |
01:14:19.200
And then you bind cells together and you get a next-level organization, an organism, where the cells
link |
01:14:24.400
together implement some kind of software. And for me, a very interesting discovery in the last
link |
01:14:30.960
year was the word spirit. Because I realized what spirit actually means: it's an operating
link |
01:14:35.760
system for an autonomous robot. And when the word was invented, people needed this word.
link |
01:14:40.640
But they didn't have robots that they built themselves. Yeah, the only autonomous robots
link |
01:14:44.720
that were known were people, animals, plants, ecosystems, cities and so on. And they all had
link |
01:14:49.440
spirits. And it makes sense to say that the plant has an operating system, right? If you pinch the
link |
01:14:54.240
plant in one area, then this is going to have repercussions throughout the plant. Everything
link |
01:14:58.960
in the plant is in some sense connected into some global aesthetics like in other organisms.
link |
01:15:03.600
An organism is not a collection of cells, it's a function that tells cells how to behave.
link |
01:15:09.200
And this function is not implemented as some kind of supernatural thing,
link |
01:15:14.240
like some morphogenetic field. It is an emergent result of the interactions of the
link |
01:15:19.040
each cell with each other cell, right? I see. So what you're saying is the organism is a function
link |
01:15:27.280
that tells the cells what to do, and the function emerges from the interaction of the cells.
link |
01:15:38.560
So it's basically a description of what the plant is doing in terms of macro states.
link |
01:15:43.920
And the micro states, the physical implementation, are too many to describe. So the
link |
01:15:49.440
software that we use to describe what the plant is doing, the spirit of the plant is the software,
link |
01:15:54.320
the operating system of the plant, right? This is a way in which we, the observers,
link |
01:15:59.520
make sense of the plant. And the same is true for people. So people have spirits,
link |
01:16:04.000
which is their operating system in a way, right? And there's aspects of that operating system that
link |
01:16:08.640
relate to how your body functions and others, how you socially interact, how you interact with
link |
01:16:12.880
yourself and so on. And we make models of that spirit. And we think it's a loaded term because
link |
01:16:19.600
it's from a pre scientific age. But it took the scientific age a long time to rediscover a term
link |
01:16:25.840
that is pretty much the same thing. And I suspect that the differences that we still see between
link |
01:16:30.720
the old word and the new word are translation errors that have crept in over the centuries.
link |
01:16:35.280
Can you actually linger on that? Like, why do you say that spirit, just to clarify,
link |
01:16:39.760
because I'm a little bit confused. So the word spirit is a powerful thing. But why did you say
link |
01:16:45.120
in the last year or so that you discovered this? Do you mean the same old traditional idea of a
link |
01:16:50.160
spirit? Or do you mean? I tried to find out what people mean by spirit. When people say
link |
01:16:54.880
spirituality in the US, it usually refers to the phantom limb that they develop in the absence of
link |
01:16:59.840
culture. And a culture is, in some sense, you could say, the spirit of a society that is playing a long game.
link |
01:17:07.120
It's this thing that becomes self aware at a level above the individuals, where you say,
link |
01:17:12.720
if you don't do the following things, then the great-great-grandchildren of our children will
link |
01:17:17.120
have nothing to eat. So if you take this long scope, where you try to maximize the length of
link |
01:17:22.800
the game that you are playing as a species, you realize that you're part of a larger thing that
link |
01:17:27.040
you cannot fully control, you probably need to submit to the ecosphere instead of trying to
link |
01:17:32.080
completely control it. There needs to be a certain level at which we can exist as a species if you
link |
01:17:38.480
want to endure. And our culture is not sustaining this anymore. We basically made this bet with
link |
01:17:44.400
the industrial revolution that we can control everything. And the modernist societies with
link |
01:17:48.320
basically unfettered growth led to a situation in which we depend on the ability to control the
link |
01:17:54.800
entire planet. And since we are not able to do that, as it seems, this culture will die.
link |
01:18:02.320
We realize that it doesn't have a future. We call our children Generation Z.
link |
01:18:06.480
That's sort of an optimistic thing to do.
link |
01:18:10.480
Yeah. So you have this kind of intuition that our civilization, you said culture,
link |
01:18:16.320
but you really mean the spirit of the civilization, the entirety of the civilization
link |
01:18:23.680
may not exist for long. Can you untangle that? What's your intuition behind that? So
link |
01:18:30.720
you kind of offline mentioned to me that the industrial revolution was kind of the moment we
link |
01:18:36.480
agreed to accept the offer, to sign on the dotted line, that with the industrial revolution,
link |
01:18:44.240
we doomed ourselves. Can you elaborate on that?
link |
01:18:47.280
This is a suspicion. I, of course, don't know how it plays out, but it seems to me that
link |
01:18:53.280
in a society in which you leverage yourself very far over an entropic abyss,
link |
01:18:58.800
without land on the other side, it's relatively clear that your cantilever is at some point
link |
01:19:03.680
going to break down into this entropic abyss. And you have to pay the bill.
link |
01:19:08.320
Okay. Russian is my first language. And I'm also an idiot.
link |
01:19:15.440
This is just two apes, instead of playing with a banana, trying to have fun by talking. Okay.
link |
01:19:23.360
Okay. Anthropic what? And what's anthropic? Entropic. Entropic. So entropic in the sense
link |
01:19:29.520
of entropy. Oh, entropic, got it. Yes. And entropic, what was the other word? Abyss.
link |
01:19:35.040
What's that? It's a big gorge. Oh, abyss. Abyss, yes. Entropic abyss. So many of the things you
link |
01:19:41.520
say are poetic, it's hurting my brain. It's amazing, right? And the mispronouncing makes
link |
01:19:49.040
it even more poetic. Wittgenstein would be proud. So, entropic abyss. Okay, let's rewind,
link |
01:19:57.440
then, the Industrial Revolution. So how does that get us into the entropic abyss?
link |
01:20:05.200
So in some sense, we burned 100 million years worth of trees to get everybody plumbing.
link |
01:20:10.480
Yes. And the society that we had before that had a very limited number of people. So basically,
link |
01:20:15.600
since zero BC, we hovered between 300 and 400 million people. And this only changed with the
link |
01:20:24.240
Enlightenment and the subsequent Industrial Revolution. And in some sense, the Enlightenment
link |
01:20:30.000
freed our rationality and also freed our norms from the preexisting order gradually.
link |
01:20:35.360
And it was a process that basically happened in feedback loops, so it was not that just one
link |
01:20:40.080
caused the other. It was a dynamic that started. And the dynamic worked by basically increasing
link |
01:20:46.080
productivity to such a degree that we could feed all our children. And I think the definition of
link |
01:20:54.480
poverty is that you have as many children as you can feed before they die, which is in some sense
link |
01:21:00.400
the state that all animals on earth are in. The definition of poverty is not having enough.
link |
01:21:06.240
So you can have only so many children as you can feed, and if you have more, they die. And in our
link |
01:21:11.520
societies, you can basically have as many children as you want, and they don't die.
link |
01:21:16.560
So the reason why we don't have as many children as we want is because we also have to pay a price
link |
01:21:21.920
in terms of having to insert ourselves into a lower stratum of society if we have too many.
link |
01:21:26.240
So basically, everybody in the upper middle and lower upper class has only a limited number of
link |
01:21:32.320
children because having more of them would mean a big economic hit to their individual families.
link |
01:21:38.000
Because children, especially in the US, are super expensive to have. And you are only taken out
link |
01:21:43.040
of this if you are basically super rich or if you are super poor. If you're super poor, it doesn't
link |
01:21:47.440
matter how many kids you have because your status is not going to change. And these children are
link |
01:21:51.920
largely not going to die of hunger. So how does this lead to self destruction? So there's a lot
link |
01:21:57.920
of unpleasant properties about this process. So basically what we try to do is we try to
link |
01:22:02.800
let our children survive even if they have diseases. Like I would have died before my
link |
01:22:10.240
mid 20s without modern medicine. And most of my friends would have as well. So many of us wouldn't
link |
01:22:16.320
live without the advantages of modern medicine and modern industrialized society. We get our
link |
01:22:23.600
protein largely by undoing the entirety of nature. Imagine there would be some very clever microbe
link |
01:22:30.160
that would live in our organisms and would completely harvest them and change them into
link |
01:22:36.880
a thing that is necessary to sustain itself. And it would discover that for instance,
link |
01:22:42.640
brain cells are kind of edible, but they're not quite nice. So you need to have more fat in them
link |
01:22:47.440
and you turn them into more fat cells. And basically this big organism would become a vegetable
link |
01:22:52.400
that is barely alive and it's going to be very brittle and not resilient when the environment
link |
01:22:56.480
changes. Yeah, but some part of that organism, the one that's actually doing all the using of
link |
01:23:02.080
the rest, there'll still be somebody thriving. So it relates back to this original question. I suspect
link |
01:23:09.440
that we are not the smartest thing on this planet. I suspect that basically every complex system has
link |
01:23:15.360
to have some complex regulation if it depends on feedback loops. And so for instance, it's likely
link |
01:23:23.920
that we should ascribe a certain degree of intelligence to plants. The problem is that
link |
01:23:28.640
plants don't have a nervous system. So they don't have a way to telegraph messages over
link |
01:23:32.880
large distances almost instantly in the plant. And instead they will rely on chemicals between
link |
01:23:38.640
adjacent cells, which means the signals propagate at a
link |
01:23:44.320
rate of a few millimeters per second. And as a result, if the plant is intelligent, it's not
link |
01:23:50.720
going to be intelligent at similar timescales. Yeah, so the timescale is different.
link |
01:23:55.920
So you suspect we might not be the most intelligent, but we're the most intelligent
link |
01:24:02.480
at this spatial scale and in our timescale. So basically if you would zoom out very far,
link |
01:24:08.080
we might discover that there have been intelligent ecosystems on the planet
link |
01:24:12.160
that existed for thousands of years in an almost undisturbed state. And it could be that these
link |
01:24:17.760
ecosystems actively regulated their environment. So basically changed the course of the evolution
link |
01:24:22.880
within this ecosystem to make it more efficient and less brittle.
link |
01:24:25.760
So it's possible something like plants is actually a set of living organisms,
link |
01:24:30.560
an ecosystem of living organisms that are just operating a different timescale and are far
link |
01:24:35.200
superior in intelligence to human beings. And then human beings will die out and plants will
link |
01:24:39.760
still be there and they'll be fine. Yeah, there's an evolutionary adaptation
link |
01:24:45.040
playing a role at all of these levels. For instance, if mice don't get enough food
link |
01:24:49.200
and get stressed, the next generation of mice will be more sparse and more scrawny.
link |
01:24:53.600
And the reason for this is because in a natural environment, the mice have probably
link |
01:24:58.080
hit a drought or something else. And if they overgraze, then all the things that
link |
01:25:02.880
sustain them might go extinct. And there will be no mice a few generations from now. So to make
link |
01:25:08.560
sure that there will be mice in five generations from now, basically the mice scale back. And
link |
01:25:13.760
a similar thing happens with the predators of mice, they should make sure that the mice don't
link |
01:25:17.440
completely go extinct. So in some sense, if the predators are smart enough, they will be tasked
link |
01:25:22.800
with shepherding their food supply. And maybe the reason why lions have much larger brains
link |
01:25:29.120
than antelopes is not so much because it's so hard to catch an antelope as opposed to run away
link |
01:25:34.240
from the lion. But the lions need to make complex models of their environment, more complex than
link |
01:25:39.760
the antelopes. So first of all, just describing that there's a bunch of complex systems and human
link |
01:25:45.040
beings may not even be the most special or intelligent of those complex systems, even on earth,
link |
01:25:50.240
makes me feel a little better about the extinction of human species that we're talking about.
link |
01:25:54.080
Yes, maybe we're just Gaia's ploy to put the carbon back into the atmosphere.
link |
01:25:57.360
Yeah, this is just a nice, we tried it out.
link |
01:26:00.000
The big stain on evolution is not us, it was trees. Earth evolved trees before they could
link |
01:26:05.120
be digested again, right? There were no insects that could break all of them apart.
link |
01:26:09.440
Cellulose is so robust that you cannot get all of it with microorganisms. So many of these
link |
01:26:14.720
trees fell into swamps. And all this carbon became inert and could no longer be recycled into
link |
01:26:19.520
organisms. And we are the species that is destined to take care of that.
link |
01:26:23.520
So our job is kind of to get it out of the ground, put it back into the atmosphere, and the earth is
link |
01:26:28.640
already greening. So within a million years or so when the ecosystems have recovered from the
link |
01:26:33.920
rapid changes that they're not compatible with right now, it's going to be awesome again.
link |
01:26:39.120
And there won't be even a memory of us little apes.
link |
01:26:41.920
I think there will be memories of us. I suspect we are the first generally intelligent species
link |
01:26:46.080
in this sense. We are the first species with an industrial society, because we will leave more
link |
01:26:50.640
phones than bones in the strata. I like it. But then let me push back. You've kind of
link |
01:26:59.520
suggested that we have a very narrow definition of intelligence. Why aren't trees a higher level of general
link |
01:27:08.800
intelligence? If trees were intelligent, then they would be at different time scales, which
link |
01:27:13.280
means within a hundred years, the tree is probably not going to make models that are as complex as
link |
01:27:17.520
the ones that we make in 10 years. But maybe the trees are the ones that made the phones, right?
link |
01:27:25.520
We could say the entirety of life did it. The first cell never died. The first cell only
link |
01:27:31.360
split, right? And kept dividing. And every cell in our body is still an instance of the first cell
link |
01:27:36.480
that split off from that very first cell. There was only one cell on this planet as far as we know.
link |
01:27:41.040
And so the cell is not just a building block of life. It's a hyperorganism, right? And we are part
link |
01:27:46.880
of this hyperorganism. So nevertheless, this hyperorganism, no, this little particular branch of
link |
01:27:56.000
it, which is us humans, because of the industrial revolution, and maybe the exponential growth of
link |
01:28:01.200
technology might somehow destroy ourselves. So what do you think is the most likely way we might
link |
01:28:07.840
destroy ourselves? So some people worry about genetic manipulation. Some people, as we've
link |
01:28:13.200
talked about, worry about either dumb artificial intelligence or super intelligent artificial
link |
01:28:18.400
intelligence destroying us. Some people worry about nuclear weapons and weapons of war in
link |
01:28:25.200
general. What do you think? If you were a betting man, what would you bet on in terms of self
link |
01:28:30.320
destruction? And would it be higher than 50%? So it's very likely
link |
01:28:36.160
that nothing that we bet on matters after we win our bets. So I don't think that bets are
link |
01:28:41.920
literally the right way to go about this. I mean, once you're dead, you won't be there
link |
01:28:46.480
to collect. So it's also not clear if we as a species go extinct. But I think that our present
link |
01:28:53.120
civilization is not sustainable. So the thing that will change is there will be probably fewer
link |
01:28:57.520
people on the planet than there are today. And even if not, then still most of the people that are alive
link |
01:29:02.880
today will not have offspring in 100 years from now because of the geographic changes and so on
link |
01:29:07.680
and the changes in the food supply. It's quite likely that many areas of the planet will only
link |
01:29:13.440
be livable with a closed cooling chain in 100 years from now. So many of the areas around the equator
link |
01:29:18.880
and in subtropical climates that are now quite pleasant to live in will cease to be habitable
link |
01:29:26.320
without air conditioning. So honestly, wow: cooling chain, closed cooling chain communities.
link |
01:29:32.240
So you think you have a strong worry about the effects of global warming that we see?
link |
01:29:38.000
By itself, it's not the big issue. If you live in Arizona right now, you have basically three months
link |
01:29:42.800
in the summer in which you cannot be outside. And so you have a closed cooling chain, you have air
link |
01:29:47.840
conditioning in your car and in your home and you're fine. And if the air conditioning would stop
link |
01:29:52.400
for a few days, then in many areas, you would not be able to survive, right?
link |
01:29:56.720
Can we just pause for a second? You say so many brilliant, poetic things like,
link |
01:30:01.760
what is a closed cooling chain? Do people use that term, closed cooling chain?
link |
01:30:05.920
I imagine that people use it when they describe how they get meat into a supermarket, right?
link |
01:30:10.960
If you break the cooling chain and this thing starts to thaw, you're in trouble and you have
link |
01:30:14.880
to throw it away. That's such a beautiful way to put it. It's like calling a city a closed
link |
01:30:22.000
social chain or something like that. I mean, that's right. I mean, the locality of it is really
link |
01:30:26.160
important. It basically means you wake up in a climatized room, you go to work in a climatized
link |
01:30:29.920
car, you work in a closed office, you shop in a climatized supermarket. And in between, you
link |
01:30:35.120
have very short distance in which you run from your car to the supermarket, but you have to make
link |
01:30:39.040
sure that your temperature does not approach the temperature of the environment. The crucial thing
link |
01:30:44.240
is the wet bulb temperature. The what? The wet bulb temperature. It's what you get when you take
link |
01:30:49.760
a wet cloth and you put it around your thermometer and then you move it very quickly through
link |
01:30:55.360
the air. So you get the evaporation heat. And as soon as you can no longer cool your body temperature
link |
01:31:03.280
via evaporation to a temperature below something like, I think, 35 degrees, you die. And which
link |
01:31:11.440
means if the outside world is dry, you can still cool yourself down by sweating. But if it has a
link |
01:31:17.840
certain degree of humidity or if it goes over a certain temperature, then sweating will not save
link |
01:31:21.760
you. And this means even if you're a healthy, fit individual within a few hours, even if you try to
link |
01:31:27.920
be in the shade and so on, you'll die unless you have some climatizing equipment. And this is fine
link |
01:31:34.960
as long as you maintain civilization and you have energy supply and you have food trucks
link |
01:31:39.280
coming to your home that are climatized, everything is fine. But what if you lose a large scale open
link |
01:31:44.320
agriculture at the same time? So basically you run into food insecurity because climate becomes very
link |
01:31:49.840
irregular or weather becomes very irregular. And you have a lot of extreme weather events.
link |
01:31:55.120
So you need to grow most of your food maybe indoors, or you need to import your food from
link |
01:32:00.640
certain regions. And maybe you're not able to maintain the civilization throughout the planet
link |
01:32:05.760
to get the infrastructure to get the food to your home.
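The wet-bulb threshold described above can be sketched numerically. The sketch below uses Stull's empirical approximation, a fit valid roughly for 5 to 99 percent relative humidity and dry-bulb temperatures of about -20 to 50 degrees Celsius; the function name and the example conditions are illustrative assumptions, not figures from the conversation.

```python
import math

def wet_bulb_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Approximate wet-bulb temperature in Celsius (Stull's empirical fit).

    Valid roughly for 5-99% relative humidity and -20..50 C dry-bulb.
    """
    t, rh = temp_c, rel_humidity_pct
    return (t * math.atan(0.151977 * math.sqrt(rh + 8.313659))
            + math.atan(t + rh)
            - math.atan(rh - 1.676331)
            + 0.00391838 * rh ** 1.5 * math.atan(0.023101 * rh)
            - 4.686035)

# A dry 40 C day (20% humidity): evaporative cooling still works well.
print(round(wet_bulb_c(40, 20), 1))   # roughly 22.7 C

# A humid 40 C day (75% humidity): the wet-bulb temperature crosses ~35 C,
# the survivability threshold mentioned above, and sweating cannot save you.
print(round(wet_bulb_c(40, 75), 1))   # roughly 35.8 C
```

On the dry day the achievable skin temperature stays far below body temperature; on the humid day it does not, which is the point about sweating no longer being enough.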
link |
01:32:09.440
But there could be significant impacts in the sense that people begin to suffer.
link |
01:32:13.920
There could be wars over resources and so on. But ultimately, do you not have, not a
link |
01:32:20.320
faith, but... what do you make of the capacity of technology, of technological innovation
link |
01:32:27.520
to help us prevent some of the worst damages that this condition can create? So as an example,
link |
01:32:37.440
as an almost out-there example, there is the work that SpaceX and Elon Musk are doing of trying to
link |
01:32:42.160
also consider our propagation throughout the universe in deep space to colonize other planets.
link |
01:32:50.480
That's one technological step. But of course, what Elon Musk is trying on Mars is not to
link |
01:32:55.600
save us from global warming, because Mars looks much worse than Earth will look like after the
link |
01:33:00.640
worst outcomes of global warming imaginable, right? Yeah, Mars is essentially not habitable.
link |
01:33:06.880
It's exceptionally harsh environment. Yes. But what he is doing, what a lot of people
link |
01:33:11.280
throughout history since the industrial revolution are doing, are just doing a lot of different
link |
01:33:15.680
technological innovation with some kind of target. And what ends up happening is totally
link |
01:33:20.320
unexpected: new things come up. So trying to terraform or trying to colonize Mars,
link |
01:33:27.440
an extremely harsh environment, might give us totally new ideas of how to expand or increase the
link |
01:33:34.560
power of this closed cooling circuit that empowers the community. So like, it seems like there's
link |
01:33:43.600
a little bit of a race between our open ended technological innovation of this communal operating
link |
01:33:53.360
system that we have and our general tendency to want to overuse resources and thereby destroy
link |
01:34:02.080
ourselves. You don't think technology can win that race? I think the probability is relatively low,
link |
01:34:08.800
given that our technology, for instance in the US, has been stagnating since the 1970s, roughly,
link |
01:34:14.880
in terms of technology. Most of the things that we do are the result of incremental processes.
link |
01:34:19.840
What about Intel? What about Moore's law? It's basically very incremental,
link |
01:34:23.840
the things that we're doing. So the invention of the microprocessor was a major
link |
01:34:28.720
thing, right? The miniaturization of transistors was really major. But the things that we did
link |
01:34:36.960
afterwards, largely, were not that innovative. So we had gradual changes of scaling things
link |
01:34:44.800
from CPUs into GPUs and things like that. But there are
link |
01:34:52.800
basically not many things: if you take a person that died in the 70s and was at the top of their game,
link |
01:34:57.840
they would not need to read that many books to be current again.
link |
01:35:01.200
But that's all about books. Who cares about books? There might be things that are beyond books. It might
link |
01:35:06.720
be a very... Or say papers or... No, papers. Forget papers. There might be things that are...
link |
01:35:10.880
So papers and books and knowledge, that's a concept of a time when you were sitting there by
link |
01:35:16.240
candlelight, as individual consumers of knowledge. What about the impact, that we're now in the middle
link |
01:35:21.360
of and might not be understanding, of Twitter, of YouTube? The reason you and I are sitting here today
link |
01:35:28.240
is because of Twitter and YouTube. So the ripple effect: there's two minds,
link |
01:35:35.040
sort of two dumb apes, coming up with perhaps a new clean insight. And there's 200
link |
01:35:41.520
other apes listening right now, 200,000 other apes listening right now. And that effect,
link |
01:35:47.920
it's very difficult to understand what that effect will have. That might be bigger than
link |
01:35:51.440
any of the advancement of the microprocessor or any of the industrial revolution,
link |
01:35:55.360
the ability to spread knowledge. And that knowledge allows good ideas to reach millions
link |
01:36:06.880
much faster. And the effect of that, that might be the new, that might be the 21st century,
link |
01:36:11.920
is the multiplying of good ideas. Because if you say one good thing today,
link |
01:36:19.200
that will multiply across huge amounts of people. And then they will say something and then they
link |
01:36:25.040
will have another podcast and they'll say something and then they'll write a paper. That could be a
link |
01:36:29.520
huge... You don't think that... Yeah, we should have billions of von Neumanns right now
link |
01:36:35.360
and Turings, and we don't for some reason. I suspect the reason is that we destroy our
link |
01:36:39.760
attention span. Also the incentives are, of course, different. Yeah, we have some Kardashians, yeah.
link |
01:36:44.400
So the reason why we are sitting here and doing this as a YouTube video is because you and me
link |
01:36:48.480
don't have the attention span to write a book together right now. And you guys probably don't
link |
01:36:52.320
have the attention span to read it. So let me tell you... But I guarantee they're still listening.
link |
01:36:56.320
The span of your attention, it's very short. But we're an hour and 40 minutes in and
link |
01:37:02.560
I guarantee you that 80% of the people are still listening. So there is an attention span. It's
link |
01:37:07.360
just the form. Who said that the book is the optimal way to transfer information? This is still
link |
01:37:14.000
an open question. I mean, that's what we're... Something that social media could be doing,
link |
01:37:17.440
that other forms could not be doing. I think the end game of social media is a global brain.
link |
01:37:22.240
And Twitter is, in some sense, a global brain that is completely hooked on dopamine,
link |
01:37:25.920
doesn't have any kind of inhibition and as a result is caught in a permanent seizure.
link |
01:37:30.480
It's also, in some sense, a multiplayer role playing game. And people use it to play an avatar
link |
01:37:36.560
that is not like they are in the real world, and they look at the world through the
link |
01:37:40.400
lens of their phones and think it's the real world. But it's the Twitter world that is
link |
01:37:43.920
distorted by the popularity incentives of Twitter. Yeah, the incentives and just our natural biological,
link |
01:37:51.120
the dopamine rush of a like, no matter how... I try to be very kind of zen like and minimalist and
link |
01:38:01.120
not be influenced by likes and so on. But it's probably very difficult to avoid that to some
link |
01:38:05.520
degree. Speaking of Twitter, on a small tangent: how can Twitter be done better? I think
link |
01:38:16.080
it's an incredible mechanism that has a huge impact on society by doing exactly what you're
link |
01:38:20.960
doing. Oh, sorry, doing exactly what you described, which is having this...
link |
01:38:27.840
We're like, is this some kind of game, and we're kind of individual RL agents in this game,
link |
01:38:33.440
and it's uncontrollable because there's not really a centralized control. Neither Jack Dorsey
link |
01:38:37.280
nor the engineers at Twitter seem to be able to control this game. Or can they? That's sort of
link |
01:38:44.880
a question. Is there any advice you would give on how to control this game? I wouldn't give advice
link |
01:38:50.160
because I am certainly not an expert, but I can give my thoughts on this. And our brain has solved
link |
01:38:57.360
this problem to some degree, right? Our brain has lots of individual agents that manage to play
link |
01:39:02.320
together in a way. And we have also many contexts in which other organisms have found ways to solve
link |
01:39:08.160
the problems of cooperation that we don't solve on Twitter. And maybe the solution is to go for
link |
01:39:15.520
an evolutionary approach. So imagine that you have something like Reddit or something like Facebook
link |
01:39:22.480
and something like Twitter. And you think about what they have in common:
link |
01:39:26.400
they are companies that in some sense own a protocol. And this protocol is imposed on a community.
link |
01:39:32.720
And the protocol has different components for monetization, for user management, for user
link |
01:39:38.640
display, for rating, for anonymity, for import of other content and so on. And now imagine that
link |
01:39:43.760
you take these components of the protocol apart, and you have, in some sense, communities
link |
01:39:50.160
that visit this social network. And these communities are allowed to mix and match their protocols and
link |
01:39:55.280
design new ones. So for instance, the UI and the UX can be defined by the community. The rules for
link |
01:40:01.600
sharing content across communities can be defined. The monetization can be redefined. The way you
link |
01:40:06.960
reward individual users for what can be redefined, the way users can represent themselves and to
link |
01:40:12.800
each other can be redefined. Who could be the redefiner? So can individual human beings build enough
link |
01:40:18.720
intuition to redefine those things? That itself can become part of the protocol. So for instance,
link |
01:40:22.800
it could be in some communities, it will be a single person that comes up with these things.
link |
01:40:27.600
In others, it's a group of friends. Some might implement a voting scheme that has some interesting
link |
01:40:32.480
weighted voting. Who knows? Who knows what will be the best self organizing principle for this.
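The mix-and-match protocol idea can be sketched as composable components. Everything below, the component slots, the community names, and the example rules, is a hypothetical illustration of the concept, not a description of any real platform's API.

```python
from dataclasses import dataclass, replace
from typing import Callable

@dataclass(frozen=True)
class CommunityProtocol:
    """A community's protocol, assembled from pluggable components."""
    moderation: Callable[[str], bool]      # decides whether a post is allowed
    monetization: Callable[[int], float]   # maps views to a payout
    identity: str                          # e.g. "real-name" or "pseudonymous"

def no_ads(_views: int) -> float:
    return 0.0

def tip_jar(views: int) -> float:
    return 0.001 * views

# Two communities assemble different protocols from the same component slots.
research_forum = CommunityProtocol(
    moderation=lambda post: len(post) > 20,   # long-form posts only
    monetization=no_ads,
    identity="real-name",
)
meme_pit = CommunityProtocol(
    moderation=lambda post: True,             # anything goes
    monetization=tip_jar,
    identity="pseudonymous",
)

# A community can fork another and redefine a single component, which is how
# an evolutionary search over protocols could proceed.
curated_memes = replace(meme_pit, moderation=lambda post: len(post) < 280)
```

The design choice here is that the platform owns only the slots, not their contents: communities fork, swap one component at a time, and the variants that attract and help people survive.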
link |
01:40:36.640
But the process can be automated. I mean, it seems like it can be automated, so people can
link |
01:40:42.000
write software for this. And eventually, the idea is, let's not make an assumption about this thing
link |
01:40:48.400
if you don't know what the right solution is. And in those areas where you have no idea
link |
01:40:52.320
whether the right solution will be people designing this ad hoc or machines doing this,
link |
01:40:56.960
whether you want to enforce compliance by social norms like Wikipedia or with software solutions
link |
01:41:03.440
or with AI that goes through the posts of people or with a legal principle and so on.
link |
01:41:08.320
This is something maybe you need to find out. And so the idea would be if you let the communities
link |
01:41:13.680
evolve, and you just control it in such a way that you are incentivizing the most sentient
link |
01:41:19.440
communities, the ones that produce the most interesting behaviors that allow you to interact
link |
01:41:26.880
in the most helpful ways to the individuals. You have a network that gives you information
link |
01:41:31.280
that is relevant to you. It helps you to maintain relationships to others in healthy ways. It
link |
01:41:36.320
allows you to build teams. It allows you to basically bring the best of you into this thing
link |
01:41:40.800
and go into a coupling, into a relationship with others in which you produce things that
link |
01:41:45.360
you would be unable to produce alone. Yes, beautifully put. But the key process of that
link |
01:41:51.600
with incentives and evolution is that things that don't adapt themselves to effectively get the
link |
01:42:00.560
incentives have to die. And the thing about social media is communities that are unhealthy or
link |
01:42:07.120
whatever you want to define as the incentives really don't like dying. One of the things that
link |
01:42:11.920
people really protest aggressively against is being censored, especially in America.
link |
01:42:17.840
I don't know. I don't know much about the rest of the world, but the idea of freedom of speech,
link |
01:42:22.960
the idea of censorship is really painful in America. And so what do you think about that
link |
01:42:34.880
having grown up in East Germany? Do you think censorship is an important tool in our brain,
link |
01:42:42.000
in the intelligence, and in the social networks? So basically, if you're not a good member
link |
01:42:50.480
of the entirety of the system, you should be blocked away... well, locked away, blocked.
link |
01:42:56.480
An important thing is who decides that you are a good member? Who? Is it distributed?
link |
01:43:00.400
Or what is the outcome of the process that decides it, both for the individual and for society at
link |
01:43:06.400
large? For instance, if you have a high trust society, you don't need a lot of surveillance.
link |
01:43:11.760
And the surveillance is even in some sense undermining trust, because it's basically punishing
link |
01:43:18.480
people that look suspicious when surveilled, but do the right thing anyway. And the opposite,
link |
01:43:24.960
if you have a low trust society, then surveillance can be a better trade off.
link |
01:43:28.720
And the US is currently making a transition from a relatively high trust or mixed trust
link |
01:43:32.640
society to a low trust society. So surveillance will increase. Another thing is that beliefs are
link |
01:43:38.080
not just inert representations. They are implementations that run code on your brain
link |
01:43:42.560
and change your reality and change the way you interact with each other at some level.
link |
01:43:46.800
And some of the beliefs are just public opinions that we use to display our alignment. So for
link |
01:43:53.360
instance, people might say all cultures are the same and equally good, but still they prefer
link |
01:43:59.440
to live in some cultures over others, very, very strongly so. And it turns out that the
link |
01:44:04.160
cultures are defined by certain rules of interaction. And these rules of interaction
link |
01:44:08.240
lead to different results when you implement them. So if you adhere to certain rules,
link |
01:44:12.720
you get different outcomes in different societies. And this all leads to very tricky
link |
01:44:18.720
situations when people do not have a commitment to shared purpose. And our societies probably need
link |
01:44:24.080
to rediscover what it means to have a shared purpose and how to make this compatible with
link |
01:44:29.440
a non totalitarian view. So in some sense, the US is caught in a conundrum between totalitarianism
link |
01:44:38.000
and diversity. And it doesn't know yet how to resolve this, and the solutions that the US has
link |
01:44:44.320
found so far are very crude, because it's a very young society that is also under a lot of
link |
01:44:48.480
tension. It seems to me that the US will have to reinvent itself.
link |
01:44:52.240
What do you think? Just philosophizing, what kind of mechanisms of government do you think
link |
01:45:01.120
we as a species should be involved with, in the US or broadly? What do you think will work well
link |
01:45:07.200
as a system? Of course, we don't know. It all seems to work pretty crappily. Some things worse
link |
01:45:12.080
than others. Some people argue that communism is the best. Others say, yeah, look at the Soviet
link |
01:45:17.760
Union. Some people argue that anarchy is the best, completely discarding the positive
link |
01:45:24.160
effects of government. There's a lot of arguments. US seems to be doing pretty damn well in the span
link |
01:45:32.240
of history. There's respect for human rights, which seems to be a nice feature, not a bug.
link |
01:45:38.080
And economically, a lot of growth, a lot of technological development.
link |
01:45:41.520
And people seem to be relatively kind on the grand scheme of things. What lessons do you draw
link |
01:45:48.720
from that? What kind of government system do you think is good?
link |
01:45:54.320
Ideally, government should not be perceivable. It should be frictionless. The more you notice
link |
01:46:00.480
the influence of the government, the more friction you experience, the less effective
link |
01:46:05.680
and efficient the government probably is. A government, game theoretically, is an agent
link |
01:46:10.800
that imposes an offset on your payout matrix to make your Nash equilibrium compatible with the
link |
01:46:19.040
common good. So you have these situations where people act on the local incentives.
link |
01:46:24.720
And these local incentives, everybody does the thing that's locally the best for them,
link |
01:46:28.720
but the global outcome is not good. And this is even the case when people care about the global
link |
01:46:32.880
outcome, because no regulation mechanism exists that creates a causal relationship between what
link |
01:46:37.760
I want to have for the global good and what I do. So for instance, if I think that we should fly less
link |
01:46:42.320
and I stay at home, there's not a single plane that is going to not start because of me, right?
link |
01:46:47.360
It's not going to have an influence, but I don't get from A to B. So the way to implement this would
link |
01:46:52.560
basically be to have a government that is sharing this idea that we should fly less and is then
link |
01:46:58.160
imposing a regulation that, for instance, makes flying more expensive and gives incentives for
link |
01:47:03.920
inventing other forms of transportation that put less strain on the environment, for
link |
01:47:10.880
instance. So there's so much optimism and so many things you describe. And yet there's the
link |
01:47:16.240
pessimism of you think our civilization is going to come to an end. So that's not 100% probability,
link |
01:47:21.840
nothing in this world is. So what's the trajectory out of self destruction, do you think?
link |
01:47:29.600
I suspect that in some sense, we are both too smart and not smart enough, which means we are
link |
01:47:34.400
very good at solving near term problems. And at the same time, we are unwilling to submit to the
link |
01:47:41.040
imperatives that we would have to follow if we want to stick around. So that makes it difficult.
link |
01:47:48.320
If we were unable to solve everything technologically, we could probably understand how
link |
01:47:52.480
high the child mortality needs to be to absorb the mutation rate, and how high the mutation rate
link |
01:47:57.840
needs to be to adapt to a slowly changing ecosystem environment. So you could in principle
link |
01:48:03.200
compute all these things game theoretically and adapt to it. But if you cannot do this,
link |
01:48:09.600
because you are like me and you have children, you don't want them to die, you will use any kind
link |
01:48:13.520
of medical intervention to keep child mortality low. Even if it means that, over the
link |
01:48:19.760
future generations, we have enormous genetic drift. And most of us have allergies as a result of not
link |
01:48:24.560
being adapted to the changes that we made to our food supply. For now, technologically
link |
01:48:29.280
speaking, we're very young; it's just 300 years since the industrial revolution. We're very new to this
link |
01:48:34.880
idea. So you're attached to your kids being alive and not being murdered for the good of
link |
01:48:39.280
society. But that might be a very temporary moment in time; we might evolve in our
link |
01:48:45.760
thinking. So like you said, we're both smart and not smart enough. We are probably not the
link |
01:48:51.520
first human civilization that has discovered technology that allows us to efficiently overgraze
link |
01:48:57.520
our resources. And with this overgrazing, at some point, we think we can compensate because
link |
01:49:03.120
if we have eaten all the grass, we will find a way to grow mushrooms. But it could also be that
link |
01:49:08.960
the ecosystems tip. And so what really concerns me is not so much the end of the civilization,
link |
01:49:13.760
because we will invent a new one. But what concerns me is the fact that, for instance,
link |
01:49:20.400
the oceans might tip. So for instance, maybe the plankton dies because of ocean acidification
link |
01:49:25.760
and cyanobacteria take over. And as a result, we can no longer breathe the atmosphere. This would
link |
01:49:31.280
be really concerning. So basically a major reboot of most complex organisms on Earth. And I think
link |
01:49:36.560
this is a possibility. I don't know what the percentage for this possibility is, but it doesn't
link |
01:49:42.480
seem to be outlandish to me if you look at the scale of the changes that we've already triggered on
link |
01:49:46.320
this planet. And so Danny Hillis suggests that, for instance, we may be able to put chalk into
link |
01:49:51.680
the stratosphere to limit solar radiation. Maybe it works. Maybe this is sufficient to
link |
01:49:56.720
counter the effects of what we've done. Maybe it won't be. Maybe we won't be able to implement it
link |
01:50:01.680
by the time it's prevalent. I have no idea how the future is going to play out in this regard.
link |
01:50:08.480
I think it's quite likely that we cannot continue like this. All our cousin species,
link |
01:50:12.560
the other hominids are gone. So the right step would be to what? To rewind
link |
01:50:21.280
towards the industrial revolution and slow the... So try to contain the technological process that
link |
01:50:28.880
leads to the overconsumption of resources? Imagine you get to choose. You have one lifetime.
link |
01:50:34.640
Yes. You get born into a sustainable agricultural civilization, 300, maybe 400 million people on
link |
01:50:41.120
the planet tops. Or before this, some kind of nomadic species with like a million or two million.
link |
01:50:48.800
And so you don't meet new people unless you give birth to them. You cannot travel to other places
link |
01:50:54.000
in the world. There is no internet. There is no interesting intellectual tradition that reaches
link |
01:50:58.160
considerably deep. So you would not discover Gödel's incompleteness, probably, and so on.
link |
01:51:02.720
We wouldn't exist. And the alternative is you get born into an insane world.
link |
01:51:07.120
One that is doomed to die because it has just burned 100 million years of trees in a single
link |
01:51:12.240
century. Which one do you like? I think I like this one. It's a very weird thing that when you
link |
01:51:17.440
find yourself on a Titanic and you see this iceberg and it looks like we are not going to
link |
01:51:21.840
miss it. And a lot of people are in denial and most of the counterarguments sound like denial
link |
01:51:26.320
to me. There don't seem to be rational arguments. And the other thing is we are born on this Titanic.
link |
01:51:31.520
Without this Titanic, we wouldn't have been born. We wouldn't be here. We wouldn't be talking. We
link |
01:51:35.120
wouldn't be on the internet. We wouldn't do all the things that we enjoy. And we are not responsible
link |
01:51:40.560
for this happening. It's basically if we had the choice, we would probably try to prevent it.
link |
01:51:46.640
But when we were born, we were never asked when we want to be born, in which society we want to
link |
01:51:51.440
be born, what incentive structures we want to be exposed to. We have relatively little agency
link |
01:51:56.480
in the entire thing. Humanity has relatively little agency in the whole thing. It's basically a
link |
01:52:00.480
giant machine that's tumbling down a hill and everybody is frantically trying to push some
link |
01:52:04.960
buttons. Nobody knows what these buttons are meaning, what they connect to. And most of them
link |
01:52:09.920
are not stopping this tumbling down the hill. Is it possible that artificial intelligence will
link |
01:52:15.200
give us an escape hatch somehow? So there's a lot of worry about existential threats of
link |
01:52:24.640
artificial intelligence. But what AI also allows in general forms of automation allows
link |
01:52:32.560
the potential of extreme productivity growth that will also perhaps in a positive way transform
link |
01:52:39.200
society, that may allow us to inadvertently return to the same kind of ideals
link |
01:52:51.680
of being closer to nature that's represented in hunter gatherer societies, that's not destroying the
link |
01:52:59.280
planet, that's not doing overconsumption and so on. I mean, generally speaking, do you have hope
link |
01:53:04.240
that AI can help somehow? I think it is not fun to be very close to nature until you completely
link |
01:53:11.120
subdue nature. So our idea of being close to nature means being close to agriculture,
link |
01:53:17.520
basically forests that don't have anything in them that eats us.
link |
01:53:22.480
See, I mean, I want to disagree with that. I think the niceness of being close to nature
link |
01:53:30.000
is being fully present, like when survival becomes your primary focus, not just your goal,
link |
01:53:38.160
but your whole existence. I'm not just romanticizing, I can just speak for myself.
link |
01:53:48.560
I am self-aware enough to know that that is a fulfilling existence, one that feels very true.
link |
01:53:55.920
I prefer to be in nature while I'm not fighting for my survival. I think fighting for your
link |
01:54:00.080
survival while being in the cold and in the rain and being hunted by animals and having
link |
01:54:05.600
open wounds is very unpleasant. There's a contradiction in there. Yes, I and you,
link |
01:54:11.760
just as you said, would not choose it, but if I was forced into it, it would be a fulfilling
link |
01:54:18.160
existence. If you are adapted to it, basically, if your brain is wired up in such a way that you'll
link |
01:54:24.560
get rewards optimally in such an environment and there's some evidence for this that for a certain
link |
01:54:30.640
degree of complexity, people are more happy in such an environment because it's what we largely
link |
01:54:35.840
have evolved for. In between, we had a few thousand years in which I think we have evolved for a
link |
01:54:40.880
slightly more comfortable environment. There is probably something like an intermediate stage
link |
01:54:46.720
in which people would be more happy than there would be if they would have to fend for themselves
link |
01:54:51.920
in small groups in the forest and often die versus something like this where we now have
link |
01:54:57.840
basically a big machine, a big Mordor in which we run through concrete boxes and press buttons
link |
01:55:05.600
on machines and largely don't feel well cared for as the monkeys that we are.
link |
01:55:12.960
So returning briefly to, not briefly, but returning to AI. Let me ask a romanticized
link |
01:55:20.240
question. What is the most beautiful to you, silly ape? The most beautiful, surprising idea
link |
01:55:26.320
in the development of artificial intelligence, whether in your own life or in the history of
link |
01:55:31.280
artificial intelligence that you've come across? If you built an AI, it probably can make models at
link |
01:55:37.760
an arbitrary degree of detail of the world and then it would try to understand its own nature.
link |
01:55:44.000
It's tempting to think that at some point when we have general intelligence, we have competitions
link |
01:55:48.400
where we will let the AIs wake up in different kinds of physical universes and we measure how many
link |
01:55:53.760
movements of the Rubik's Cube it takes until it's figured out what's going on in its universe
link |
01:55:58.560
and what it is and its own nature and its own physics and so on. So what if we exist in the
link |
01:56:03.600
memory of an AI that is trying to understand its own nature and remembers its own genesis and
link |
01:56:08.400
remembers Lex and Joscha sitting in a hotel, sparking some of the ideas that led to the
link |
01:56:14.240
development of general intelligence? So we're a kind of simulation that's running in an AI system
link |
01:56:18.960
that's trying to understand itself. It's not that I believe that, but I think it's a beautiful
link |
01:56:28.560
idea. I mean, you kind of return to this idea with the Turing test
link |
01:56:36.880
of intelligence being the process of asking and answering what is intelligence.
link |
01:56:42.400
I mean, why do you think there is an answer? Why is there such a search for an answer?
link |
01:56:53.600
So does there have to be like an answer? You just had an AI system that's trying to
link |
01:56:59.600
understand the why of what, you know, understand itself.
link |
01:57:04.880
Is that a fundamental process of greater and greater complexity, greater and greater
link |
01:57:09.920
intelligence, this continuous trying to understand itself?
link |
01:57:14.960
No, I think you will find that most people don't care about that because they're well adjusted enough
link |
01:57:19.360
to not care. And the reason why people like you and me care about it probably has to do with the
link |
01:57:24.800
need to understand ourselves. It's because we are in fundamental disagreement with the universe
link |
01:57:29.680
that we wake up in. We look down on ourselves and say, oh my God, I'm caught in a monkey. What's that?
link |
01:57:34.720
That's the feeling, right? That's the government and I'm unhappy with the
link |
01:57:39.120
entire universe that I find myself in. Oh, so you don't think that's a fundamental
link |
01:57:44.960
aspect of human nature that some people are just suppressing that they wake up shocked
link |
01:57:50.400
they're in the body of a monkey? No, there is a clear adaptive value to not be confused by that.
link |
01:57:56.560
Well, no, that's not what I asked. So, yeah, if there's clear adaptive value, then there's clear
link |
01:58:05.840
adaptive value, while fundamentally your brain is confused by that, in creating an illusion.
link |
01:58:11.440
Another layer of the narrative that says, you know, that tries to suppress that and instead
link |
01:58:18.160
say that, you know, what's going on with the government right now is the most important
link |
01:58:21.760
thing. What's going on with my football team is the most important thing. But it seems to me
link |
01:58:26.480
the, like, for me, it was a really interesting moment reading Ernest Becker's Denial of Death,
link |
01:58:35.040
that, you know, this kind of idea that we're all, you know, the fundamental thing from which most
link |
01:58:44.880
of our human mind springs is this fear of mortality and being cognizant of your mortality and the
link |
01:58:52.160
fear of that mortality. And then you construct illusions on top of that. I guess you being,
link |
01:59:00.160
just to push on it, you really don't think it's possible that this worry of the big existential
link |
01:59:07.600
questions is actually fundamental as the existentialist thought to our existence.
link |
01:59:13.280
I think that the fear of death only plays a role as long as you don't see the big picture. The thing
link |
01:59:18.880
is that minds are software states, right? Software doesn't have identity. Software in some sense is
link |
01:59:25.040
a physical law. But it feels like there's an identity. I thought that was true for this particular
link |
01:59:32.640
piece of software. And the narrative it tells, that's a fundamental property of it. The
link |
01:59:38.240
maintenance of the identity is not terminal. It's instrumental to something else. You maintain
link |
01:59:42.960
your identity so you can serve your meaning. So you can do the things that you're supposed to do
link |
01:59:47.600
before you die. And I suspect that for most people, the fear of death is the fear of dying before
link |
01:59:52.640
they are done with the things that they feel they have to do, even though they cannot quite put their
link |
01:59:56.160
finger on what that is. Right. But in the software world, to return to the question, then what
link |
02:00:06.320
happens after we die? Why would you care? You will no longer be there. The point of dying is that
link |
02:00:14.080
you are gone. Well, maybe I'm not. This is what, you know, it seems like there's so much,
link |
02:00:23.040
in the idea that this is just, the mind is just a simulation that's constructing a narrative around
link |
02:00:28.800
some particular aspects of the quantum mechanical wave function world that we can't quite get direct
link |
02:00:37.280
access to. Then like the idea of mortality seems to be fuzzy as well. Maybe there's not a clear
link |
02:00:45.280
answer. The fuzzy idea is the one of continuous existence. We don't have continuous existence.
link |
02:00:51.040
How do you know that? Because it's not computable. Because you're saying it's...
link |
02:00:56.560
There is no continuous process. The only thing that binds you together with the
link |
02:00:59.520
Lex Fridman from yesterday is the illusion that you have memories about him. So if you
link |
02:01:03.760
want to upload, it's very easy. You make a machine that thinks it's you. Because it's the same thing
link |
02:01:08.000
that you are. You are a machine that thinks it's you. But that's immortality. Yeah, but it's just
link |
02:01:13.520
a belief. You can create this belief very easily. Once you realize that the question whether you
link |
02:01:18.160
are immortal or not depends entirely on your beliefs and your own continuity. But then you can
link |
02:01:25.280
be immortal by the continuity of the belief. You cannot be immortal, but you can stop being afraid
link |
02:01:32.000
of your mortality because you realize you were never continuously existing in the first place.
link |
02:01:37.760
Well, I don't know if I'd be more terrified or less terrified with that. It seems like the fact
link |
02:01:42.080
that I existed. So you don't know this state in which you don't have a self? You can turn off
link |
02:01:47.520
yourself, you know? I can't turn off myself. You can turn it off. I can? Yes, and you can
link |
02:01:53.840
basically meditate yourself into a state where you are still conscious. There are still things
link |
02:01:57.920
that are happening, where you know everything that you knew before, but you're no longer identified with
link |
02:02:02.400
changing anything. And this means that yourself in a way dissolves. There is no longer this person.
link |
02:02:09.120
You know that this person construct exists in other states and it runs on this brain of
link |
02:02:13.920
Lex Fridman. But it's not a real thing. It's a construct. It's an idea. And you can change
link |
02:02:20.080
that idea. And if you let go of this idea, if you don't think that you are special,
link |
02:02:25.040
you realize it's just one of many people and it's not your favorite person even, right? It's
link |
02:02:29.760
just one of many. And it's the one that you are doomed to control for the most part,
link |
02:02:34.160
and that is basically informing the actions of this organism as a control model. And this is
link |
02:02:39.680
all there is. And you are somehow afraid that this control model gets interrupted
link |
02:02:44.880
or loses the identity of continuity. Yeah, so I'm attached. I mean, yeah, it's a very popular,
link |
02:02:51.440
it's a somehow compelling notion that there's no need to be attached to
link |
02:02:58.000
this idea of an identity. But that in itself could be an illusion that you construct. So the
link |
02:03:06.880
process of meditation, while popular, is thought of as getting under the concept of identity, but
link |
02:03:12.400
it could be just putting a cloak over it, just telling it to be quiet for the moment.
link |
02:03:16.880
You know, I think that meditation is eventually just a bunch of techniques that let you control
link |
02:03:25.120
attention. And when you can control attention, you can get access to your own source code,
link |
02:03:30.480
hopefully not before you understand what you're doing. And then you can change the way it works
link |
02:03:35.040
temporarily or permanently. So yeah, meditation is to get a glimpse at the source code, get under
link |
02:03:41.760
sort of basically control or turn off the attention. The entire thing is that you learn to
link |
02:03:44.560
control attention. So everything else is downstream from controlling attention.
link |
02:03:48.560
And control the attention that's looking at the attention?
link |
02:03:52.080
Normally, we only get attention in the parts of our mind that create heat where you have a
link |
02:03:55.920
mismatch between the model and the results that are happening. And so most people are not self-aware
link |
02:04:01.520
because their control is too good. If everything works out roughly the way you want, and the only
link |
02:04:06.560
things that don't work out is whether your football team wins, then you will mostly have models
link |
02:04:11.600
about these domains. And it's only when, for instance, your fundamental relationships to the
link |
02:04:16.560
world around you don't work because the ideology of your country is insane and the other kids are
link |
02:04:21.840
not nerds and don't understand why you want to understand
link |
02:04:27.040
physics and you don't understand why somebody would not want to understand physics.
link |
02:04:31.920
So we kind of brought up neurons in the brain as reinforcement learning agents.
link |
02:04:37.520
And there's been some successes as you brought up with Go, with AlphaGo and AlphaZero,
link |
02:04:44.960
with ideas of self play, which I think are incredibly interesting ideas of systems playing
link |
02:04:49.280
each other in an automated way to improve by playing other systems in a particular construct
link |
02:04:58.480
of a game that are a little bit better than itself and then thereby improving continuously.
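The self-play loop can be sketched with something far simpler than AlphaZero: tabular Q-learning where one policy plays both sides of the game of Nim and improves against its own latest self. The game, hyperparameters, and reward scheme here are invented for illustration:

```python
# Self-play sketch: one tabular policy plays both sides of Nim and learns
# from the final outcome of each game against itself.
import random

random.seed(0)
N, MOVES = 10, (1, 2, 3)   # heap of 10; take 1-3 sticks; taking the last stick wins
Q = {}                     # Q[(heap, move)] -> estimated value for the player to move

def policy(heap, eps=0.0):
    legal = [m for m in MOVES if m <= heap]
    if random.random() < eps:
        return random.choice(legal)          # exploration
    return max(legal, key=lambda m: Q.get((heap, m), 0.0))

def self_play(episodes=20000, alpha=0.5, eps=0.2):
    for _ in range(episodes):
        heap, trace = N, []
        while heap > 0:                      # the same policy moves for both players
            m = policy(heap, eps)
            trace.append((heap, m))
            heap -= m
        outcome = 1.0                        # whoever moved last won
        for heap, m in reversed(trace):      # credit moves, alternating sign per player
            old = Q.get((heap, m), 0.0)
            Q[(heap, m)] = old + alpha * (outcome - old)
            outcome = -outcome

self_play()
# Optimal play leaves the opponent a multiple of 4; from 10 that means taking 2,
# and the learned policy usually discovers this.
print(policy(10))
```

Both "players" share one improving policy, so each always faces an opponent exactly as strong as itself, which is the property of self-play described here.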
link |
02:05:04.000
All the competitors in the game are improving gradually, so being just challenging enough
link |
02:05:09.520
and learning from the process of the competition. Do you have hope for that reinforcement learning
link |
02:05:15.440
process to achieve greater and greater level of intelligence? So we talked about different
link |
02:05:19.760
ideas in AI that we need to be solved. Is RL a part of that process of trying to create an
link |
02:05:27.040
AGI system? So definitely forms of unsupervised learning, but there are many algorithms that
link |
02:05:31.520
can achieve that. And I suspect that ultimately, of the algorithms that work, there will be a class of
link |
02:05:38.000
them or many of them, and they might have small differences in magnitude and efficiency. But
link |
02:05:44.640
eventually what matters is the type of model that you form. And the types of models that we
link |
02:05:49.120
form right now are not sparse enough. What does it mean to be sparse? It means that ideally every
link |
02:05:57.600
potential model state should correspond to a potential world state. So basically, if you vary
link |
02:06:05.680
states in your model, you always end up with valid world states. And our mind is not quite there.
link |
02:06:10.480
So an indication is basically what we see in dreams. The older we get, the more boring our
link |
02:06:15.040
dreams become because we incorporate more and more constraints that we learned about how the world
link |
02:06:20.080
works. So many of the things that we imagine to be possible as children turn out to be
link |
02:06:25.280
constrained by physical and social dynamics. And as a result, fewer and fewer things remain
link |
02:06:31.600
possible. And it's not because our imagination scales back, but the constraints under which it
link |
02:06:36.000
operates become tighter and tighter. And so the constraints under which our neural networks operate
link |
02:06:42.240
are almost absent, which means it's very difficult to get a neural network to imagine
link |
02:06:46.960
things that look real. Right. So I suspect part of what we need to do is we probably need to
link |
02:06:54.160
build dreaming systems. I suspect that part of the purpose of dreams is to, similar to a
link |
02:06:59.680
generative adversarial network, learn certain constraints, and then it produces alternative
link |
02:07:05.040
perspectives on the same set of constraints. So you can recognize it under different circumstances.
link |
02:07:10.720
Maybe we have flying dreams as children, because we recreate the objects that we know and the maps
link |
02:07:15.440
that we know from different perspectives, which also means from a bird's eye perspective.
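The point about dreams tightening as constraints are learned can be illustrated with a deliberately crude sketch. The entities, places, and rules below are all invented, and this is no claim about real cognition: a "model" enumerates candidate world states, and each learned constraint shrinks the set of states it can still imagine as possible.

```python
# Illustrative sketch: learned constraints shrink the space of imaginable states.
import itertools

entities = ("person", "stone", "bird")
places = ("ground", "sky", "water")
motions = ("walking", "flying", "sinking")
states = list(itertools.product(entities, places, motions))   # 27 candidate states

# Constraints a mind might gradually learn about physical dynamics.
constraints = [
    lambda e, p, m: not (e == "person" and m == "flying"),             # people can't fly
    lambda e, p, m: not (e == "stone" and m in ("walking", "flying")), # stones don't move
    lambda e, p, m: not (p == "sky" and m == "sinking"),               # nothing sinks in the sky
]

def imaginable(active):
    """States the model can still 'dream' under the active constraints."""
    return [s for s in states if all(c(*s) for c in active)]

for i in range(len(constraints) + 1):
    print(i, "constraints ->", len(imaginable(constraints[:i])), "imaginable states")
# Counts go 27, 24, 18, 15: imagination doesn't scale back, its constraints tighten.
```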
link |
02:07:19.520
So I mean, aren't we doing that anyway? I mean, not with our eyes closed and when we're sleeping.
link |
02:07:26.400
Are we just constantly running dreams and simulations in our mind as we try to interpret
link |
02:07:31.440
the environment? I mean, sort of considering all the different possibilities, the way we interact
link |
02:07:36.960
with the environment seems like essentially, like you said, sort of creating a bunch of
link |
02:07:45.360
simulations that are consistent with our expectations, with our previous experiences,
link |
02:07:50.720
with the things we just saw recently. And through that hallucination process, we are able to then
link |
02:08:01.200
somehow stitch together what actually we see in the world with the simulations that match it well
link |
02:08:06.960
and thereby interpret it. I suspect that your and my brains are sadly unusual in this regard,
link |
02:08:12.640
which is probably what got you into MIT. So this obsession of constantly pondering possibilities
link |
02:08:19.520
and solutions to problems. Oh, stop. But I think I'm not talking about intellectual stuff. I'm
link |
02:08:27.520
talking about just doing the kind of stuff it takes to walk and not fall. Yes, this is largely
link |
02:08:35.440
automatic. Yes, but the process is, I mean... It's not complicated. It's relatively easy to
link |
02:08:43.920
build a neural network that in some sense learns the dynamics. The fact that we haven't done it
link |
02:08:49.120
right so far, it doesn't mean it's hard because you can see that a biological organism does it.
link |
02:08:53.440
There are relatively few neurons involved. So basically, you build a bunch of neural oscillators that
link |
02:08:58.160
entrain themselves with the dynamics of your body in such a way that the regulator becomes
link |
02:09:02.800
isomorphic in its model to the dynamics that it regulates, and then it's automatic. And it's only
link |
02:09:08.480
interesting in the sense that it captures attention when the system is off.
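The entrainment claim can be illustrated with a minimal Kuramoto-style sketch: two abstract oscillators with made-up frequencies and coupling, not a model of real motor circuits. A "controller" oscillator with the wrong natural frequency phase-locks to the "body" rhythm it is coupled to.

```python
# Minimal entrainment sketch: a controller oscillator phase-locks to a body rhythm.
import math

def simulate(steps=20000, dt=0.001, k=5.0):
    w_body = 2 * math.pi * 1.0   # body rhythm at 1.0 Hz
    w_ctrl = 2 * math.pi * 1.3   # controller's natural frequency, deliberately wrong
    body, ctrl = 0.0, 0.5        # phases in radians
    for _ in range(steps):
        body += w_body * dt
        ctrl += (w_ctrl + k * math.sin(body - ctrl)) * dt  # coupling pulls ctrl toward body
    return math.sin(body - ctrl)  # settles to a constant once the oscillators lock

# After 20 simulated seconds the controller runs at the body's frequency
# with a fixed phase lag: sin(lag) = (frequency mismatch) / k, about 0.377 here.
print(abs(simulate()))
```

Once locked, the regulator's state tracks the dynamics it regulates, a cartoon of the isomorphism described here; a perturbation strong enough to break the lock is what would create the mismatch that captures attention.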
link |
02:09:12.080
See, but thinking of the kind of mechanism that's required to do walking as a controller, as a
link |
02:09:19.520
neural network, I think it's a compelling notion, but it discards quietly or at least makes implicit
link |
02:09:29.920
the fact that you need to have something like common sense reasoning to walk. That's not,
link |
02:09:34.640
it's an open question whether you do or not. But my intuition is to act in this world,
link |
02:09:40.400
there's a huge knowledge base that's underlying it somehow. There's so much information
link |
02:09:47.440
of the kind we have never been able to construct in our neural networks or in artificial intelligence
link |
02:09:54.400
systems period, which is like it's humbling, at least in my imagination, the amount of information
link |
02:10:00.960
required to act in this world humbles me. And I think saying that neural networks can accomplish
link |
02:10:08.480
it is missing the fact that we don't, yeah, we don't have yet a mechanism for constructing
link |
02:10:16.720
something like common sense reasoning. I mean, what's your sense, to linger on how much,
link |
02:10:25.440
you know, on the idea of what kind of mechanism would be effective at walking?
link |
02:10:31.200
You said just a neural network, not maybe the kind we have, but something a little bit better
link |
02:10:36.160
would be able to walk easily. Don't you think it also needs to know
link |
02:10:40.720
a huge amount of knowledge that's represented under the flag of common sense reasoning?
link |
02:10:48.240
How much common sense knowledge do we actually have? Imagine that you are really hardworking
link |
02:10:52.400
through all your life and you form two new concepts every half hour. So you end up with
link |
02:10:57.120
something like a million concepts because you don't get that old. So a million concepts,
link |
02:11:02.240
that's not a lot. So it's not just a million concepts. I think it would be a lot, I personally
link |
02:11:09.200
think it might be much more than a million. But if you think just about the numbers,
link |
02:11:13.440
you don't live that long. If you think about how many cycles do your neurons have in your
link |
02:11:17.920
life, it's quite limited. You don't get that old. Yeah, but the powerful thing is the number of
link |
02:11:24.000
concepts, and they're probably deeply hierarchical in nature, the relations as you described between
link |
02:11:31.920
them is the key thing. So it's like even if it's a million concepts, the graph of relations that's
link |
02:11:38.000
formed and some kind of probabilistic relationships, that's what common sense reasoning is,
link |
02:11:46.400
the relationship between things. In some sense, I think of the concepts as the address space for
link |
02:11:53.120
our behavior programs. And the behavior programs allow us to recognize objects and interact with
link |
02:11:57.600
them, also mental objects. And a large part of that is the physical world that we interact with,
link |
02:12:03.440
which is this res extensa thing, which is basically a navigation of information in space.
link |
02:12:08.800
And basically, it's similar to a game engine. It's a physics engine that you can use to describe
link |
02:12:16.080
and predict how things that look in a particular way, that feel when you touch them in particular
link |
02:12:21.360
way, that have proprioceptive and auditory properties and so on, how they work out. So
link |
02:12:25.760
basically the geometry of all these things. And this is probably 80% of what our brain is doing,
link |
02:12:31.840
is dealing with that with this real time simulation. And by itself, a game engine is
link |
02:12:36.560
fascinating, but it's not that hard to understand what it's doing, right? And our game engines
link |
02:12:41.440
are already in some sense, approximating the fidelity of what we can perceive. So if we
link |
02:12:50.160
put on an Oculus Quest, we get something that is still relatively crude with respect to what we
link |
02:12:55.040
can perceive, but it's also in the same ballpark already, right? It's just a couple orders of
link |
02:12:59.120
magnitude away from saturating our perception in terms of the complexity that it can produce.
link |
02:13:05.760
So in some sense, it's reasonable to say that the computer that you can buy and put into your home
link |
02:13:11.440
is able to give a perceptual reality that has a detail that is already in the same ballpark as
link |
02:13:16.960
what your brain can process. And everything else are ideas about the world. And I suspect that
link |
02:13:22.800
they are relatively sparse and also the intuitive models that we form about social interaction.
link |
02:13:28.240
Social interaction is not so hard. It's just hard for us nerds because we all have our wires
link |
02:13:33.200
crossed, so we need to deduce them. But the wires are present in most social animals. So
link |
02:13:38.320
it's an interesting thing to notice that many domestic social animals, like cats and dogs,
link |
02:13:44.560
have better social cognition than children. I hope so. I hope it's not that many concepts
link |
02:13:51.440
fundamentally required to exist in this world. For me, it's more like, I'm afraid so, because
link |
02:13:57.440
this thing that we only appear to be so complex to each other because we are so stupid is a little
link |
02:14:02.560
bit depressing. Yeah, to me that's inspiring if we're indeed as stupid as it seems.
link |
02:14:11.040
I think our brains don't scale, and the information processing systems that we build tend to scale very well.
link |
02:14:16.800
Yeah, but one of the things that worries me is that the fact that the brain doesn't scale
link |
02:14:23.920
means that that's actually a fundamental feature of the brain. All the flaws of the brain,
link |
02:14:30.080
everything we see as limitations, perhaps there is a fundamental, the constraints on the system
link |
02:14:35.120
could be a requirement of its power, which is different than our current understanding of
link |
02:14:43.840
intelligent systems where scale, especially with deep learning, especially with reinforcement
link |
02:14:48.720
learning, the hope behind OpenAI and DeepMind, all the major results really have to do with
link |
02:14:56.320
huge compute. And yeah. It could also be that our brains are so small not just because they
link |
02:15:01.760
take up so much glucose in our body, like 20% of the glucose, so they don't arbitrarily scale.
link |
02:15:07.520
But there are some animals like elephants which have larger brains than us and they don't seem
link |
02:15:11.600
to be smarter. Elephants seem to be autistic. They have very, very good motor control and
link |
02:15:16.160
they're really good with details, but they really struggle to see the big picture. So you can make
link |
02:15:20.240
them recreate drawings stroke by stroke. They can do that, but they cannot reproduce a still life.
link |
02:15:27.040
So they cannot make a drawing of a scene that they see. They will always be only able to
link |
02:15:31.440
reproduce the line drawing, at least as far as I could see in the experiments.
link |
02:15:35.760
Yeah. Why is that? Maybe smarter elephants would meditate themselves out of existence
link |
02:15:40.720
because their brains are too large. So basically the elephants that were not autistic,
link |
02:15:44.160
they didn't reproduce. Yeah. So we have to remember that the brain
link |
02:15:48.160
is fundamentally interlinked with the body in our human and biological system.
link |
02:15:52.080
Do you think that AGI systems that we try to create or greater intelligence systems would
link |
02:15:56.480
need to have a body? I think they should be able to make use of a body if you give it to them.
link |
02:16:02.960
But I don't think that they fundamentally need a body. So I suspect if you can interact with the
link |
02:16:07.600
world by moving your eyes and your head, you can make controlled experiments. And this allows you
link |
02:16:13.280
to have many magnitudes fewer observations in order to reduce the uncertainty in your models.
link |
02:16:21.760
So you can pinpoint the areas in your models where you're not quite sure and you just move
link |
02:16:25.040
your head and see what's going on over there and you get additional information. If you just have
link |
02:16:30.000
to use YouTube as an input and you cannot do anything beyond this, you probably need just
link |
02:16:34.560
much more data. But we have much more data. So if you can build a system that has enough
link |
02:16:40.080
time and attention to browse all of YouTube and extract all the information that there is to be
link |
02:16:44.720
found, I don't think there's an obvious limit to what it can do.
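The active-perception idea here, pinpointing the most uncertain part of your model and directing observation there, can be sketched as a toy uncertainty-sampling loop. Everything below is illustrative (invented regions, values, and noise levels), not anything from the conversation:

```python
import random

random.seed(0)  # deterministic for the example

def observe(world, region, noise_std=0.1):
    """Return a noisy measurement of the true value in a region."""
    return world[region] + random.gauss(0, noise_std)

def active_perception_step(world, means, variances, noise_var=0.01):
    """Query the most uncertain region and update a Gaussian belief."""
    target = max(variances, key=variances.get)  # "move your head" there
    measurement = observe(world, target)
    prior_var = variances[target]
    # Standard Gaussian-Gaussian Bayesian update.
    posterior_var = 1.0 / (1.0 / prior_var + 1.0 / noise_var)
    means[target] = posterior_var * (means[target] / prior_var
                                     + measurement / noise_var)
    variances[target] = posterior_var

world = {"left": 1.0, "center": 2.0, "right": 3.0}  # hidden true values
means = {r: 0.0 for r in world}        # initial model: ignorant
variances = {r: 1.0 for r in world}    # and maximally uncertain
for _ in range(30):
    active_perception_step(world, means, variances)
print(means)
```

Each step spends an observation exactly where the model is least certain, which is why directed attention needs many fewer samples than passively watching a fixed stream.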
link |
02:16:49.040
Yeah, but it seems that the interactivity is a fundamental thing that the physical body allows
link |
02:16:53.760
you to do. But let me ask on that topic, that's what a body is, is allowing the brain to touch
link |
02:16:59.920
things and move things and interact with whether the physical world exists or not, whatever,
link |
02:17:05.840
but interact with some interface to the physical world. What about a virtual world? Do you think
link |
02:17:13.680
we can do the same kind of reasoning, consciousness, intelligence if we put on a VR headset and move
link |
02:17:22.640
over to that world? Do you think there's any fundamental difference between the interface,
link |
02:17:27.200
the physical world, that is here in this hotel, and if we were sitting in the same hotel in the
link |
02:17:31.760
virtual world? The question is, does this nonphysical world or this other environment entice you to
link |
02:17:38.160
solve problems that require general intelligence? If it doesn't, then you probably will not develop
link |
02:17:43.840
general intelligence. And arguably, most people are not generally intelligent because they don't
link |
02:17:47.760
have to solve problems that make them generally intelligent. And even for us, it's not yet clear
link |
02:17:52.240
if we are smart enough to build AI and understand our own nature to this degree. So it could be
link |
02:17:57.360
a matter of capacity. And for most people, it's in the first place a matter of interest. They
link |
02:18:01.440
don't see the point, because the benefits of attempting this project are marginal, because
link |
02:18:05.920
you're probably not going to succeed in it. And the cost of trying to do it requires complete
link |
02:18:10.320
dedication of your entire life. But it seems like the possibility of what you can do in the
link |
02:18:15.120
virtual world is much greater than what you can do in the real world. So imagine a
link |
02:18:20.400
situation, it'd be an interesting option for me, if somebody came to me and offered: from now
link |
02:18:26.960
on, you can only exist in the virtual world. And so you put on this headset.
link |
02:18:34.000
And when you eat, we'll make sure to connect your body up in a way that when you eat in the
link |
02:18:39.920
virtual world, your body will be nourished in the same way in the virtual world. So
link |
02:18:44.080
aligning incentives between our common sort of real world and the virtual world. But
link |
02:18:49.600
then the possibilities become much bigger. Like I could be other kinds of creatures.
link |
02:18:55.200
I could break the laws of physics as we know them. I could do a lot. I mean, the possibilities
link |
02:18:59.840
are endless, right? As far as we think, it's an interesting thought whether like what existence
link |
02:19:06.160
would be like, what kind of intelligence would emerge there, what kind of consciousness, what
link |
02:19:11.840
kind of maybe greater intelligence, even in me, Lex, even at this stage in my life, if I spend
link |
02:19:18.720
the next 20 years in that world to see how that intelligence emerges. And if I was,
link |
02:19:24.320
if that happened at the very beginning, before I was even cognizant of my existence in this
link |
02:19:28.800
physical world, it's interesting to think how that child would develop. And the way
link |
02:19:34.560
virtual reality and digitization of everything is moving, it's not completely out of the realm
link |
02:19:39.520
of possibility that we're all, that some part of our lives will, if not entirety of it, will live
link |
02:19:46.880
in a virtual world to a greater degree than we currently have living on Twitter and social media
link |
02:19:52.800
and so on. Do you have, I mean, does something draw you intellectually or naturally in terms
link |
02:20:00.240
of thinking about AI to this virtual world of more possibilities?
link |
02:20:05.600
I think that currently it's a waste of time to deal with the physical world before we have
link |
02:20:09.840
mechanisms that can automatically learn how to deal with it. The body gives you a second order
link |
02:20:14.720
agency. What constitutes the body is the things that you can indirectly control. Third order
link |
02:20:21.280
are tools. And the second order is the things that are basically always present. But you operate
link |
02:20:26.800
on them with first order things, which are mental operators. And the zero order is,
link |
02:20:32.080
in some sense, the direct sense of what you're deciding. So you observe yourself initiating
link |
02:20:39.600
an action. There are features that you interpret as the initiation of an action. Then you perform
link |
02:20:44.800
the operations that you perform to make that happen. And then you see the movement of your
link |
02:20:49.360
limbs. And you learn to associate those and thereby model your own agency over this feedback.
link |
02:20:54.560
But the first feedback that you get is from this first order thing already. Basically,
link |
02:20:58.080
you decide to think a thought and the thought is being thought. You decide to change the thought
link |
02:21:02.640
and you observe how the thought is being changed. And in some sense, this is, you could say, an
link |
02:21:07.520
embodiment already. And I suspect it's sufficient as an embodiment for intelligence.
link |
02:21:12.320
And so it's not that important, at least at this time, to consider variations in the second order.
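The zeroth-to-third-order layering described above can be caricatured in a few lines of code. This is a hypothetical sketch (all class and method names invented) in which an agent models its own agency by associating the features it interprets as initiating an action with the movements it then observes:

```python
# Zeroth order: the observed initiation of an action.
# First order: mental operators acting on always-present structures.
# Second order: the body, which produces observable movement.
# (Third order, tools, is omitted for brevity.)

class LayeredAgent:
    def __init__(self):
        self.agency_model = {}  # initiation feature -> observed outcome

    def initiate(self, intention):
        # Zeroth order: a feature interpreted as initiating an action.
        return f"initiate:{intention}"

    def mental_operator(self, initiation):
        # First order: turn the initiation into a motor command.
        return initiation.replace("initiate:", "command:")

    def body(self, command):
        # Second order: execute the command as visible movement.
        return command.replace("command:", "movement:")

    def step(self, intention):
        initiation = self.initiate(intention)
        command = self.mental_operator(initiation)
        movement = self.body(command)
        # Feedback: associate the initiation with the observed movement,
        # thereby modeling one's own agency over this loop.
        self.agency_model[initiation] = movement
        return movement

agent = LayeredAgent()
agent.step("raise_arm")
print(agent.agency_model)
# {'initiate:raise_arm': 'movement:raise_arm'}
```

The point of the sketch is the feedback edge: agency is not given, it is learned from the correlation between initiations and outcomes, and that loop already closes at the first order (thinking a thought and observing it being thought), before any body is involved.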
link |
02:21:17.360
Yes. But the thing that you also mentioned just now is physics that you could change in
link |
02:21:24.080
any way you want. So you need an environment that puts up resistance against you. If there's
link |
02:21:29.120
nothing to control, you cannot make models. There needs to be a particular way in which the world resists you.
link |
02:21:34.640
And by the way, your motivation is usually outside of your mind. It resists you. Your motivation
link |
02:21:38.880
is what gets you up in the morning, even though it would be much less work to stay in bed.
link |
02:21:43.040
Right? So it's basically forcing you to resist the environment. And it forces your mind to serve
link |
02:21:51.520
it, to serve this resistance to the environment. So in some sense, it is also putting up resistance
link |
02:21:56.720
against the natural tendency of the mind to not do anything. Yeah. But so some of that resistance,
link |
02:22:01.280
just like you described with motivation, is like in the first order, it's in the mind.
link |
02:22:05.600
Some resistance is in the second order, like the actual physical objects pushing against
link |
02:22:10.240
you and so on. It seems that the second order stuff in virtual reality could be recreated.
link |
02:22:14.560
Of course. But it might be sufficient that you just do mathematics and mathematics is
link |
02:22:19.040
already putting up enough resistance against you. So basically just with an aesthetic motive,
link |
02:22:24.160
this could maybe be sufficient to form a type of intelligence. It would probably not be a very
link |
02:22:29.840
human intelligence, but it might be one that is already general. So to mess with this zero
link |
02:22:37.360
order, maybe first order, what do you think about ideas of brain computer interfaces? So
link |
02:22:42.320
again, returning to our friend Elon Musk and Neuralink, a company that's trying to,
link |
02:22:47.120
of course, there's a lot of trying to cure diseases and so on in the near term. But the
link |
02:22:52.000
long term vision is to add an extra layer to basically expand the capacity of the brain
link |
02:22:58.480
connected to the computational world. Do you think, one, that's possible?
link |
02:23:04.400
Two, how does that change the fundamentals of the zeroth order and the first order?
link |
02:23:07.840
It's technically possible, but I don't see that the FDA would ever allow me to drill holes in
link |
02:23:11.920
my skull to interface my neocortex the way Musk envisions. So at the moment, we can do horrible
link |
02:23:17.200
things to mice, but we are not able to do useful things to people, except maybe at some point
link |
02:23:23.120
down the line in medical applications. So this thing that we are envisioning, which means
link |
02:23:28.240
recreational brain computer interfaces, are probably not going to happen
link |
02:23:34.720
in the present legal system. I love it how I'm asking you out there philosophical and
link |
02:23:42.160
sort of engineering questions, and for the first time ever, you jumped to the legal, the FDA.
link |
02:23:48.080
There would be enough people that would be crazy enough to have holes drilled in their skull to
link |
02:23:51.760
try a new type of brain computer interface. But also if it works, FDA will approve it.
link |
02:23:57.680
I work a lot with autonomous vehicles. Yes, you can say that it's going to be a very difficult
link |
02:24:04.160
regulatory process of approving autonomous vehicles, but it doesn't mean autonomous vehicles are
link |
02:24:08.080
never going to happen. No, they will totally happen as soon as we create jobs for at least
link |
02:24:13.200
two lawyers and one regulator per car. Yes, lawyers. Lawyers are the fundamental
link |
02:24:22.640
substrate of reality. In the US, it's a very weird system. It's not universal in the world.
link |
02:24:29.440
The law is a very interesting software once you realize it, right? These laws are,
link |
02:24:33.760
in some sense, streams of software, and it largely works by exception handling.
link |
02:24:38.080
So you make decisions on the ground and they get synchronized with the next level structure as
link |
02:24:42.000
soon as an exception is being thrown. So it escalates the exception handling. The process is
link |
02:24:48.240
very expensive, especially since it incentivizes the lawyers for producing work for lawyers.
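The exception-handling picture of law can be rendered as a toy escalation chain. The court names and rules below are invented purely for illustration:

```python
# Toy model of law as exception handling: decisions are made on the
# ground, and higher-level structure is consulted only when the level
# below throws an exception.

class LegalException(Exception):
    pass

def local_decision(case):
    if case.get("disputed"):
        raise LegalException("parties disagree")
    return f"settled locally: {case['name']}"

def appeals_court(case):
    if case.get("constitutional_question"):
        raise LegalException("needs highest-level review")
    return f"settled on appeal: {case['name']}"

def supreme_court(case):
    return f"final ruling: {case['name']}"

def adjudicate(case):
    # Escalate only when an exception is thrown at the level below.
    try:
        return local_decision(case)
    except LegalException:
        try:
            return appeals_court(case)
        except LegalException:
            return supreme_court(case)

print(adjudicate({"name": "fence dispute"}))
print(adjudicate({"name": "contract", "disputed": True}))
print(adjudicate({"name": "speech case", "disputed": True,
                  "constitutional_question": True}))
```

Synchronization with the next level up happens only on exceptions, which keeps routine cases cheap and reserves the expensive machinery for genuine disputes.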
link |
02:24:54.960
Yes, so the exceptions are actually incentivized to fire often. But to return, outside of lawyers,
link |
02:25:04.640
is there anything fundamentally, is there anything interesting, insightful about the
link |
02:25:10.560
possibility of this extra layer of intelligence added to the brain?
link |
02:25:15.280
I do think so, but I don't think that you need technically invasive procedures to do so.
link |
02:25:20.880
We can already interface with other people by observing them very, very closely and getting
link |
02:25:25.120
into some kind of empathetic resonance. And I'm not very good at this, but I noticed that people
link |
02:25:31.680
are able to do this to some degree. And it basically means that we model an interface
link |
02:25:37.360
layer of the other person in real time. And it works despite our neurons being slow,
link |
02:25:42.400
because most of the things that we do are built on periodic processes. So you just need to train
link |
02:25:46.960
yourself with the oscillation that happens. And if the oscillation itself changes slowly enough,
link |
02:25:52.320
you can basically follow along. Right. But the bandwidth of the interaction,
link |
02:26:00.800
it seems like you can do a lot more computation when there's...
link |
02:26:03.680
Yes, of course. But the other thing is that the bandwidth that our brain, our own mind is running
link |
02:26:08.560
on is actually quite slow. So the number of thoughts that I can productively think in any
link |
02:26:13.280
given day is quite limited. But it's much... If I had the discipline to write it down
link |
02:26:18.400
and the speed to write it down, maybe it would be a book every day or so. But if you think about
link |
02:26:22.960
the computers that we can build, the magnitudes at which they operate,
link |
02:26:27.600
right, this would be nothing. It's something that they can put out in a second.
link |
02:26:30.720
Well, I don't know. So it's possible sort of the number of thoughts you have in your brain is...
link |
02:26:37.120
It could be several orders of magnitude higher than what you're possibly able to express through
link |
02:26:42.000
your fingers or through your voice. Most of them are going to be repetitive. Because they...
link |
02:26:48.160
How do you know that? Because they have to solve the same problems every day. When I walk,
link |
02:26:52.960
there are going to be processes in my brain that model my walking pattern and regulate them and
link |
02:26:57.600
so on. But it's going to be pretty much the same every day. But that could be because...
link |
02:27:00.560
Every step. But I'm talking about intellectual reasoning,
link |
02:27:03.040
like thinking. So the question, what is the best system of government? So you sit down and
link |
02:27:06.880
start thinking about that. One of the constraints is that you don't have access to
link |
02:27:12.160
a lot of facts, a lot of studies. You always have to interface with
link |
02:27:17.520
something else to learn more, to aid in your reasoning process. If you can directly access
link |
02:27:24.560
all of Wikipedia and try to understand what is the best form of government, then every thought
link |
02:27:29.280
won't be stuck in a loop. Every thought that requires some extra piece of information will
link |
02:27:34.080
be able to grab it really quickly. That's the possibility of... If the bottleneck is literally
link |
02:27:40.080
the information, if the bottleneck of breakthrough ideas is just being able to quickly access huge
link |
02:27:49.200
amounts of information, then the possibility of connecting your brain to the computer could lead
link |
02:27:54.080
to totally new... Totally new breakthroughs. You can think of mathematicians being able to
link |
02:28:00.320
just up the orders of magnitude of power in their reasoning about mathematical truths.
link |
02:28:08.720
What if humanity has already discovered the optimal form of government through an evolutionary
link |
02:28:14.160
process? There is an evolution going on. What we discover is that maybe the problem of government
link |
02:28:20.400
doesn't have stable solutions for us as a species, because we are not designed in such a way that we
link |
02:28:25.200
can make everybody conform to them. But there could be solutions that work under different
link |
02:28:30.960
circumstances, or that are the best for a certain environment, and it depends on, for instance,
link |
02:28:35.680
the primary forms of ownership and the means of production. If the main means of production is land,
link |
02:28:42.480
then the forms of government will be regulated by the landowners and you get a monarchy.
link |
02:28:48.800
If you also want to have a form of government in which you depend on some form of slavery,
link |
02:28:54.720
for instance, where the peasants have to work very long hours for very little gain,
link |
02:28:58.640
so very few people can have plumbing, then maybe you need to promise them that they get paid in
link |
02:29:04.480
the afterlife over time. You need a theocracy. For much of human history in the West,
link |
02:29:12.320
we had a combination of monarchy and theocracy that was our form of governance. At the same time,
link |
02:29:18.560
the Catholic Church implemented game theoretic principles. I recently reread Thomas
link |
02:29:24.400
Aquinas. It's very interesting to see this because he was not a dualist. He was translating Aristotle
link |
02:29:29.600
in a particular way for designing an operating system for the Catholic society. He says that
link |
02:29:36.640
basically people are animals in very much the same way as Aristotle envisions, which are basically
link |
02:29:41.840
organisms with cybernetic control. Then he says that there are initial rational principles that
link |
02:29:46.960
humans can discover and everybody can discover them so they are universal. If you are sane,
link |
02:29:51.600
you should understand you should submit to them because you can rationally deduce them.
link |
02:29:55.840
And these principles are roughly: you should be willing to self regulate correctly,
link |
02:30:03.920
which is intraorganismic. You should be willing to do correct social regulation. You should be willing
link |
02:30:11.280
to act on your models. So you have skin in the game. And you should have goal rationality. You
link |
02:30:18.720
should be choosing the right goals to work on. So basically these three rational principles,
link |
02:30:25.200
goal rationality he calls prudence or wisdom. The correct social regulation is justice,
link |
02:30:31.600
and the internal regulation is temperance. And the willingness to act on your models, I think,
link |
02:30:37.760
is courage. And then he says that there are, in addition to these four cardinal virtues,
link |
02:30:43.600
three divine virtues. And these three divine virtues cannot be rationally deduced,
link |
02:30:47.440
but they reveal themselves by the harmony, which means if you assume them and you extrapolate
link |
02:30:52.000
what's going to happen, you will see that they make sense. And it's often been misunderstood as
link |
02:30:57.680
God has to tell you that these are the things. So basically there's something nefarious going on.
link |
02:31:02.800
The Christian conspiracy forces you to believe some guy with a long beard that they discovered
link |
02:31:08.000
this. But so these principles are relatively simple. Again, it's for high level organization
link |
02:31:14.480
for the resulting civilization that you form. Commitment to unity. So basically you serve this
link |
02:31:20.240
higher, larger thing, this structural principle on the next level. And he calls that faith.
link |
02:31:26.400
Then there needs to be a commitment to shared purpose. This is basically this global reward
link |
02:31:31.120
that you try to figure out what that should be and how you can facilitate this. And this is love.
link |
02:31:34.960
The commitment to shared purpose is the core of love. You see this sacred thing that is more
link |
02:31:40.080
important than your own organismic interests in the other. And you serve this together. And this
link |
02:31:44.640
is how you see the sacred in the other. And the last one is hope, which means you need to be willing
link |
02:31:49.920
to act on that principle without getting rewards in the here and now, because it doesn't exist yet.
link |
02:31:55.600
Then you start out building the civilization. So you need to be able to do this in the absence
link |
02:32:00.640
of its actual existence yet. So it can come into being.
link |
02:32:04.880
So the way it comes into being is by you accepting those notions and then you see these
link |
02:32:11.040
three divine concepts and you see them realized. And now the problem is divine is a loaded concept
link |
02:32:15.840
in our world, because we are outside of this cult and we are still scarred from breaking free of it.
link |
02:32:20.880
But the idea is basically we need to have a civilization that acts as an intentional agent,
link |
02:32:25.280
like an insect state. And we are not actually a tribal species, we are a state building species.
link |
02:32:30.320
And what enabled state building is basically the formation of religious states and other forms
link |
02:32:37.280
of rule based administration in which the individual doesn't matter as much as the rule or the higher
link |
02:32:42.400
goal. We got there by the question, what's the optimal form of governance? So I don't think
link |
02:32:47.040
that Catholicism is the optimal form of governance because it's obviously on the way out. At least
link |
02:32:52.560
for the present type of society that we are in, religious institutions don't seem to be
link |
02:32:58.320
optimal to organize that. So what we discovered right now, what we live in in the West, is democracy.
link |
02:33:04.240
And democracy is the rule of the oligarchs, the people that currently own the means of production,
link |
02:33:09.440
that is administered not by the oligarchs themselves, because there's too much disruption,
link |
02:33:14.240
right? We have so much innovation that we have in every generation new means of production
link |
02:33:19.040
that we invent. And corporations usually die after 30 years or so, and something else
link |
02:33:24.240
takes a leading role in our societies. So it's administered by institutions. And these institutions
link |
02:33:30.240
themselves are not elected, but they provide continuity. And they are led by electable politicians.
link |
02:33:37.440
And this makes it possible that you can adapt to change without having to kill people, right? So
link |
02:33:41.600
you can, for instance, have a change in governments. If people think that the current
link |
02:33:45.200
government is too corrupt or it's not up to date, you can just elect new people. Or if a journalist
link |
02:33:50.560
finds out something inconvenient about the institution and the institution has no plan B,
link |
02:33:55.840
like in Russia, the journalist has to die. This is what happens when you run society by the deep state.
link |
02:34:01.680
So ideally, you have an administration layer that you can change if something bad happens,
link |
02:34:08.400
right? So you will have a continuity in the whole thing. And this is the system that we came up with
link |
02:34:12.800
in the West. And the way it's set up in the US is largely a result of low level models,
link |
02:34:16.720
so it's mostly just second, third order consequences that people are modeling
link |
02:34:21.440
in the design of these institutions. It's a relatively young society that doesn't really
link |
02:34:25.520
take care of the downstream effects of many of the decisions that are being made.
link |
02:34:29.760
And I suspect that AI can help us with this in a way, if you can fix the incentives.
link |
02:34:35.040
The society of the US is a society of cheaters. Basically, cheating is indistinguishable
link |
02:34:40.560
from innovation. And we want to encourage innovation.
link |
02:34:43.040
Can you elaborate on what you mean by cheating?
link |
02:34:45.040
It's basically people do things that they know are wrong.
link |
02:34:47.600
It's acceptable to do things that you know are wrong in this society to a certain degree.
link |
02:34:51.440
You can, for instance, suggest some non sustainable business models and implement them.
link |
02:34:57.440
Right. But you're always pushing the boundaries. I mean, you're...
link |
02:35:00.720
And yes, this is seen as a good thing largely.
link |
02:35:03.920
Yes.
link |
02:35:04.880
And this is different from other societies. So for instance, social mobility is an aspect of
link |
02:35:09.200
this. Social mobility is the result of individual innovation that would not be
link |
02:35:13.280
sustainable at scale for everybody else. Right.
link |
02:35:15.680
Normally, you should not go up. You should go deep, right?
link |
02:35:17.920
We need bakers and indeed we are very good bakers.
link |
02:35:20.400
But in a society that innovates, maybe you can replace all the bakers with a really good machine.
link |
02:35:25.200
Right.
link |
02:35:25.600
And that's not a bad thing. And it's a thing that made the US so successful, right?
link |
02:35:29.520
But it also means that the US is not optimizing for sustainability, but for innovation.
link |
02:35:34.880
And so, as this evolutionary process is unrolling, it's not obvious that that
link |
02:35:39.600
long term would be better. It has side effects. So basically, if you cheat, you will have a certain
link |
02:35:46.400
layer of toxic sludge that covers everything that is a result of cheating.
link |
02:35:50.400
And we have to unroll this evolutionary process to figure out if these side effects are so damaging
link |
02:35:55.600
that the system is horrible, or if the benefits actually outweigh the negative effects.
link |
02:36:03.600
How did we get to the question of which system of government is best?
link |
02:36:05.760
That was from... I'm trying to trace back the last five minutes.
link |
02:36:10.240
I suspect that we can find a way back to AI by thinking about the way in which our brain has
link |
02:36:16.480
to organize itself. In some sense, our brain is a society of neurons. And our mind is a
link |
02:36:24.000
society of behaviors. And they need to be organizing themselves into a structure that
link |
02:36:29.680
implements regulation. And government is social regulation. We often see government as the
link |
02:36:35.680
manifestation of power or local interest, but it's actually a platform for negotiating the
link |
02:36:40.240
conditions of human survival. And this platform emerges over the current needs and possibilities
link |
02:36:46.240
in the trajectory that we have. So given the present state, there are only so many options
link |
02:36:51.280
on how we can move into the next state without completely disrupting everything. And we mostly
link |
02:36:55.840
agree that it's a bad idea to disrupt everything because it will endanger our food supply for a
link |
02:37:00.160
while and the entire infrastructure and fabric of society. So we do try to find natural transitions.
link |
02:37:06.960
And there are not that many natural transitions available at any given point.
link |
02:37:10.640
What do you mean by natural transitions?
link |
02:37:12.080
So we try to not have revolutions if we can help it.
link |
02:37:16.560
So speaking of revolutions and the connection between government systems in the mind,
link |
02:37:21.200
you've also said that you said that in some sense becoming an adult means you take charge of your
link |
02:37:27.600
emotions. Maybe you never said that. Maybe I just made that up. But in the context of the mind,
link |
02:37:34.560
what's the role of emotion? And what is it? First of all, what is emotion? What's its role?
link |
02:37:41.200
It's several things. So psychologists often distinguish between emotion and
link |
02:37:46.480
feeling. And in common day parlance, we don't. I think that emotion is a configuration of the
link |
02:37:52.400
cognitive system. And that's especially true for the lowest level for the affective state.
link |
02:37:57.360
So when you have an affect, it's the configuration of certain modulation parameters like arousal,
link |
02:38:02.080
valence, your attentional focus, whether it's wide or narrow, interoception or exteroception,
link |
02:38:08.080
and so on. And all these parameters together put you in a certain way that you relate to the
link |
02:38:13.040
environment and to yourself. And this is in some sense an emotional configuration. In the more
link |
02:38:17.600
narrow sense, an emotion is an affective state that has an object. And the relevance of that
link |
02:38:23.520
object is given by motivation. And motivation is a bunch of needs that are associated with rewards,
link |
02:38:29.040
things that give you pleasure and pain. And you don't actually act on your needs, you act on
link |
02:38:33.040
models of your needs. Because when the pleasure and pain manifests, it's too late, you've done
link |
02:38:37.040
everything. But so you act on expectations that will give you pleasure and pain. And these are
link |
02:38:42.000
your purposes. The needs don't form a hierarchy, they just coexist and compete. And your organism
link |
02:38:47.040
has to, your brain has to find a dynamic homeostasis between them. But the purposes need to be
link |
02:38:52.480
consistent. So you basically can create a story for your life and make plans. And so we organize
link |
02:38:58.560
them all into hierarchies. And there is not a unique solution for this. Some people eat to
link |
02:39:02.560
make art, and other people make art to eat. And they might end up doing the same things,
link |
02:39:07.040
but they cooperate in very different ways, because their ultimate goals are different. And we
link |
02:39:12.560
cooperate based on shared purpose. Everything else that is not cooperation on shared purpose
link |
02:39:16.560
is transactional. I don't think I understood that last piece of achieving the homeostasis.
link |
02:39:26.640
Are you distinguishing between the experience of emotion and the expression of emotion?
link |
02:39:30.240
Of course. So the experience of emotion is a feeling. And in this sense, what you feel is an
link |
02:39:37.680
appraisal that your perceptual system has made of the situation at hand. And it makes this based
link |
02:39:42.880
on your motivation and on your estimates, not yours, but those of the subconscious geometric parts of
link |
02:39:50.240
your mind that assess the situation in the world with something like a neural network.
link |
02:39:54.880
And this neural network is making itself known to the symbolic parts of your mind,
link |
02:40:00.160
to your conscious attention, by mapping them as features into a space. So what you will feel
link |
02:40:06.400
about your emotion is a projection usually into your body map. So you might feel anxiety in your
link |
02:40:11.280
solar plexus, and you might feel it as a contraction, which is all geometry. Your body
link |
02:40:16.960
map is the space that is always instantiated and always available. So it's a very obvious cheat
link |
02:40:22.400
if your non-symbolic parts of your brain try to talk to your symbolic parts of your brain to map
link |
02:40:29.360
the feelings into the body map. And then you perceive them as pleasant and unpleasant, depending
link |
02:40:34.000
on whether the appraisal has a negative or positive valence. And then you have different
link |
02:40:38.320
features of them that give you more knowledge about the nature of what you're feeling. So for
link |
02:40:42.960
instance, when you feel connected to other people, you typically feel this in your chest region around
link |
02:40:47.280
your heart. And you feel this is an expansive feeling in which you're reaching out, right?
link |
02:40:52.800
And it's very intuitive to encode it like this. That's why it's encoded like this for most people.
link |
02:40:57.680
But it's encoded. It's encoded. It's a code. It's a code in which the non-symbolic parts of
link |
02:41:00.800
your mind talk to the symbolic ones. And then the expression of emotion is then the final step
link |
02:41:05.840
that could be sort of gestural or visual and so on. That's part of the communication.
link |
02:41:10.320
This probably evolved as part of an adversarial communication. So as soon as you started to
link |
02:41:15.360
observe the facial expression and posture of others to understand what emotional state they are in,
link |
02:41:20.400
others started to use this as signaling and also to subvert your model of their emotional state.
link |
02:41:25.200
So we now look at the inflections, at the difference between the standard face that
link |
02:41:29.360
they're going to make in this situation. When you are at a funeral, everybody expects you to
link |
02:41:33.360
make a solemn face. But the solemn face doesn't express whether you're sad or not. It just expresses
link |
02:41:38.000
that you understand what face you have to make at a funeral. Nobody should know that you are
link |
02:41:42.320
triumphant. So when you try to read the emotion of another person, you try to look at the delta
link |
02:41:48.080
between a truly sad expression and the thing that is animating this face behind the curtain.
link |
02:41:56.400
So the interesting thing is, so having done this podcast and the video component,
link |
02:42:03.440
one of the things I've learned is that now I'm Russian and I just don't know how to express
link |
02:42:09.200
emotion on my face, because I see that as weakness. But whatever. People look to me after you
link |
02:42:16.320
say something, they look to my face to help them see how they should feel about what you said,
link |
02:42:22.960
which is fascinating because then they'll often comment on why did you look bored or why did
link |
02:42:27.760
you particularly enjoy that part or why did you whatever. It's a kind of interesting,
link |
02:42:32.480
it makes me cognizant that, like, you're basically saying a bunch of brilliant things,
link |
02:42:37.680
but I'm part of the play in which you're the key actor, and by making my facial expressions I'm
link |
02:42:45.920
telling the narrative of what the big point is, which is fascinating.
link |
02:42:51.200
It makes me cognizant that I'm supposed to be making facial expressions. Even this conversation
link |
02:42:55.440
is hard because my preference would be to wear a mask with sunglasses to where I could just listen.
link |
02:43:01.920
Yes, I understand this because it's intrusive to interact with others this way and basically Eastern
link |
02:43:08.000
European societies have a taboo against that, especially Russia, the further you go to the
link |
02:43:12.880
east and in the US it's the opposite. You're expected to be hyperanimated in your face and
link |
02:43:19.280
you're also expected to show positive affect. And if you show positive affect without a good
link |
02:43:25.920
reason in Russia, people will think you are a stupid, unsophisticated person. Exactly. And
link |
02:43:34.720
here positive affect without reason is either appreciated or goes unnoticed.
link |
02:43:40.800
No, it's the default. It's being expected. Everything is amazing. Have you seen this
link |
02:43:46.640
Lego movie? No, there was a diagram where somebody gave the appraisals that exist in
link |
02:43:52.080
the US and Russia. So you have your bell curve and the lower 10% in the US, it's a good start.
link |
02:44:01.680
Everything above the lowest 10% is amazing. It's amazing. And for Russians, everything below the
link |
02:44:09.600
top 10% is terrible. And then everything except the top percent is, I don't like it. And the top
link |
02:44:18.000
percent is, eh, so so. Yeah, it's funny, but it's kind of true. Yeah.
link |
02:44:27.040
But there's a deeper aspect to this. It's also how we construct meaning in the US.
link |
02:44:32.400
Usually you focus on the positive aspects and you just suppress the negative aspects.
link |
02:44:38.080
And in our Eastern European traditions, we emphasize the fact that if you hold something
link |
02:44:45.040
above the waterline, you also need to put something below the waterline because existence
link |
02:44:49.040
by itself is at best neutral. Right. That's the basic intuition. It's at best neutral,
link |
02:44:54.960
or suffering could just be the default. There are moments of beauty, but these moments of beauty
link |
02:44:59.520
are inextricably linked to the reality of suffering. And to not acknowledge the reality
link |
02:45:05.280
of suffering means that you are really stupid and unaware of the fact that basically every conscious
link |
02:45:09.600
being spends most of the time suffering. Yeah, you just summarized the ethos of the Eastern Europe.
link |
02:45:17.840
Yeah, most of life is suffering with occasional moments of beauty. And if your facial expressions
link |
02:45:23.360
don't acknowledge the abundance of suffering in the world and in existence itself, then you must
link |
02:45:29.200
be an idiot. It's an interesting thing when you raise children in the US and you in some sense
link |
02:45:36.000
preserve the identity of the intellectual and cultural traditions that are embedded in your
link |
02:45:40.960
own families. And your daughter asks you about Ariel, the mermaid. And she asks you, why is Ariel
link |
02:45:47.520
not allowed to play with the humans? And you tell her the truth. She's a siren. Sirens eat people.
link |
02:45:54.320
You don't play with your food. It does not end well. And then you tell her the original story,
link |
02:45:58.640
which is not the one by Andersen, which is the romantic one. And there's a much darker one,
link |
02:46:02.560
the Undine story. So Undine is a mermaid or a waterwoman. She lives at the bottom of a river
link |
02:46:11.920
and she meets this prince and they fall in love. And the prince really, really wants to be with her.
link |
02:46:16.640
And she says, okay, but the deal is you cannot have any other woman. If you marry somebody else,
link |
02:46:21.840
even though you cannot be with me, because obviously you cannot breathe underwater and
link |
02:46:24.880
have other things to do, like managing your kingdom up here, you will die. And eventually,
link |
02:46:31.760
after a few years, he falls in love with some princess and marries her. And she shows up
link |
02:46:36.640
and quietly goes into his chamber and nobody is able to stop her or willing to do so because
link |
02:46:41.920
she is fierce. And she comes quietly and sadly out of his chamber and they ask her,
link |
02:46:47.440
what has happened? What did you do? And she said, I kissed him to death.
link |
02:46:51.920
Damn. And you know the Andersen story, right? In the Andersen story, the mermaid is playing
link |
02:46:58.640
with this prince that she saves and she falls in love with him and she cannot live out there. So
link |
02:47:03.680
she is giving up her voice and her tail for a human-like appearance. So she can walk among
link |
02:47:10.400
the humans. But this guy does not recognize that she is the one that he should marry. Instead,
link |
02:47:15.760
he marries somebody who has a kingdom and economic and political relationships to his
link |
02:47:20.720
own kingdom and so on, as he should. She dies. Yeah. Instead, Disney's The Little Mermaid story
link |
02:47:33.200
has a little bit of a happy ending. That's the western, that's the American way.
link |
02:47:37.040
My own problem with this is, of course, that I read Oscar Wilde before I read the other things. So I'm
link |
02:47:42.000
indoctrinated, inoculated with this romanticism. And I think that the mermaid is right. You
link |
02:47:46.960
sacrifice your life for romantic love. That's what you do because if you are confronted with either
link |
02:47:51.840
serving the machine and doing the obviously right thing under the economic and social and
link |
02:47:57.520
other human incentives, or following your heart, you should follow your heart.
link |
02:48:04.000
So do you think suffering is fundamental to happiness along these lines?
link |
02:48:09.520
No. Suffering is the result of caring about things that you cannot change. And if you are able to
link |
02:48:14.640
change what you care about to those things that you can change, you will not suffer.
link |
02:48:18.960
Would you then be able to experience happiness? Yes. But happiness itself is not important.
link |
02:48:25.200
Happiness is like a cookie. When you are a child, you think cookies are very important and you want
link |
02:48:29.520
to have all the cookies in the world. You look forward to being an adult because then you have
link |
02:48:33.040
as many cookies as you want, right? Yes. But as an adult, you realize a cookie is a tool.
link |
02:48:37.760
It's a tool to make you eat vegetables. And once you eat your vegetables anyway, you stop
link |
02:48:41.920
eating cookies for the most part because otherwise you will get diabetes and will not be around for
link |
02:48:46.000
your kids. Yes. But then the cookie, the scarcity of a cookie. If scarcity is enforced nevertheless,
link |
02:48:52.320
so like the pleasure comes from the scarcity. Yes. But the happiness is a cookie that your
link |
02:48:57.120
brain bakes for itself. It's not made by the environment. The environment cannot make you
link |
02:49:01.600
happy. It's your appraisal of the environment that makes you happy. And if you can change
link |
02:49:05.840
your appraisal of the environment, which you can learn to do, then you can create arbitrary states
link |
02:49:09.680
of happiness. And some meditators fall into this trap. So they discover the room, the
link |
02:49:14.000
basement room in their brain where the cookies are made, and they indulge and stuff themselves.
link |
02:49:18.320
And after a few months, it gets really old and the big crisis of meaning comes. Because they
link |
02:49:22.800
thought before that their unhappiness was the result of not being happy enough. So they fixed
link |
02:49:28.000
this, right? They can release the neurotransmitters at will if they train. And then the crisis of
link |
02:49:33.600
meaning pops up at a deeper layer. And the question is, why do I live? How can I make a
link |
02:49:38.320
sustainable civilization that is meaningful to me? How can I insert myself into this? And this
link |
02:49:42.880
was the problem that they couldn't solve in the first place. But at the end of all this, let me
link |
02:49:49.760
then ask that same question. What is the answer to that? What could the possible answer be of the
link |
02:49:55.760
meaning of life? What could an answer be? What is it to you? I think that if you look at the
link |
02:50:02.080
meaning of life, you look at what the cell is. The life is the cell. Yes, or this principle,
link |
02:50:09.360
the cell. It's this self-organizing thing that can participate in evolution. In order to make it
link |
02:50:14.960
work, it's a molecular machine. It needs a self-replicator, a negentropy extractor, and a
link |
02:50:19.280
Turing machine. If any of these parts is missing, you don't have a cell and it is not living, right?
link |
02:50:24.000
And life is basically the emergent complexity over that principle. Once you have this intelligent
link |
02:50:29.200
super molecule, the cell, there is very little that you cannot make it do. It's probably the optimal
link |
02:50:34.000
computronium and especially in terms of resilience. It's very hard to sterilize a planet once it's
link |
02:50:39.600
infected with life. So this function of these three components, or the super cell, the cell,
link |
02:52:47.600
is present in us, and it's just... We are just an expression of the cell.
link |
02:50:53.280
It's a certain layer of complexity in the organization of cells. So in a way, it's tempting
link |
02:50:58.640
to think of the cell as a von Neumann probe. If you want to build intelligence on other planets,
link |
02:51:04.080
the best way to do this is to infect them with cells and wait for long enough. And with a reasonable
link |
02:51:09.520
chance, the stuff is going to evolve into an information processing principle that is general
link |
02:51:13.840
enough to become sentient. Well, that idea is very akin to sort of the same dream and beautiful
link |
02:51:20.320
ideas that are expressed through cellular automata in their most simple mathematical form. If you just
link |
02:51:24.800
inject the system with some basic mechanisms of replication and so on, basic rules, amazing
link |
02:51:31.440
things would emerge. And the cell is able to do something that James Trady calls existential
link |
02:51:37.280
design. He points out that in technical design, we go from the outside in. We work in a highly
link |
02:51:42.560
controlled environment in which everything is deterministic, like our computers, our labs,
link |
02:51:46.640
or our engineering workshops. And then we use this determinism to implement a particular kind
link |
02:51:52.080
of function that we dream up and that seamlessly interfaces with all the other deterministic
link |
02:51:56.960
functions that we already have in our world. So it's basically from the outside in. And biological
link |
02:52:02.640
systems design from the inside out. A seed will become a seedling by taking some of the
link |
02:52:08.640
relatively unorganized matter around it and turning it into its own structure and thereby
link |
02:52:14.320
subdue the environment. And cells can cooperate if they can rely on other cells having a similar
link |
02:52:19.200
organization that is already compatible. But unless that's there, the cell needs to divide
link |
02:52:25.360
to create that structure by itself. So it's a self-organizing principle that works on a
link |
02:52:30.400
somewhat chaotic environment. And the purpose of life, in this sense, is to produce complexity.
link |
02:52:36.640
And the complexity allows you to harvest negentropy gradients that you couldn't harvest
link |
02:52:40.480
without the complexity. And in this sense, intelligence and life are very strongly connected
link |
02:52:45.520
because the purpose of intelligence is to allow control under the conditions of complexity. So
link |
02:52:50.480
basically you shift the boundary between the ordered systems into the realm of chaos. You
link |
02:52:56.720
build bridgeheads into chaos with complexity. And this is what we are doing. This is not
link |
02:53:02.080
necessarily a deeper meaning. I think the meanings that we have priors for, that we are
link |
02:53:05.760
evolved for... Outside of the priors, there is no meaning; meaning only exists if a mind projects
link |
02:53:09.920
it. That is probably civilization. I think that what feels most meaningful to me is to try to
link |
02:53:17.280
build and maintain a sustainable civilization. And taking a slight step outside of that, we talked
link |
02:53:23.600
about a man with a beard and God, but something, some mechanism perhaps must have planted the seed,
link |
02:53:35.840
the initial seed of the cell. Do you think there is a God? What is a God? And what would that look
link |
02:53:43.520
like? So if there was no spontaneous abiogenesis, in the sense that the first cell formed by some
link |
02:53:50.560
happy random accidents where the molecules just happened to be in the right constellation to
link |
02:53:55.040
each other, but there could also be a mechanism that allows for the random... I mean, there's
link |
02:54:01.040
like turtles all the way down. There seems to be, there has to be, a head turtle at the bottom.
link |
02:54:05.680
Let's consider something really wild. Imagine, is it possible that a gas giant could become intelligent?
link |
02:54:12.240
What would that involve? So imagine you have vortices that spontaneously emerge on the gas
link |
02:54:17.360
giants like big storm systems that endure for thousands of years. And some of these storm
link |
02:54:22.560
systems produce electromagnetic fields because some of the clouds are ferromagnetic or something.
link |
02:54:27.040
And as a result, they can change how certain clouds react rather than other clouds and thereby
link |
02:54:32.400
produce some self-stabilizing patterns that eventually lead to regulation, feedback loops,
link |
02:54:36.640
nested feedback loops and control. So imagine you have such a thing that basically has emergent,
link |
02:54:42.240
self-sustaining, self-organizing complexity. And at some point this wakes up and realizes
link |
02:54:46.640
and is basically Lem's Solaris: I am a thinking planet, but I will not replicate because I
link |
02:54:51.360
can recreate the conditions of my own existence somewhere else. I'm just basically an intelligence
link |
02:54:57.040
that has spontaneously formed because it could. And now it builds a von Neumann probe. And the
link |
02:55:02.960
best von Neumann probe for such a thing might be the cell. So maybe it, because it's very,
link |
02:55:06.960
very clever and very enduring, creates cells and sends them out. And one of them has infected our
link |
02:55:11.920
planet. And I'm not suggesting that this is the case, but it would be compatible with the
link |
02:55:15.760
panspermia hypothesis. And with my intuition that abiogenesis is very unlikely.
link |
02:55:21.360
It's possible, but you probably need to roll the cosmic dice very often, maybe more often
link |
02:55:26.240
than there are planetary surfaces. I don't know. So God is just a system that's large enough
link |
02:55:35.920
that allows randomness. Now, I don't think that God has anything to do with creation.
link |
02:55:39.840
I think it's a mistranslation of the Talmud into the Catholic mythology. I think that Genesis is
link |
02:55:46.080
actually the childhood memories of a God. So the Genesis is the childhood memories of a God.
link |
02:55:52.960
It's basically a mind that is remembering how it came into being. And we typically interpret
link |
02:56:00.240
Genesis as the creation of a physical universe by a supernatural being. And I think when you
link |
02:56:06.800
read it, there is light and darkness that is being created. And then you discover sky and
link |
02:56:13.280
ground, create them. You construct the plants and the animals, and you give everything their
link |
02:56:20.080
names and so on. That's basically cognitive development. It's a sequence of steps that every
link |
02:56:25.200
mind has to go through when it makes sense of the world. And when you have children, you can see
link |
02:56:29.600
how initially they distinguish light and darkness. And then they make out directions in it, and they
link |
02:56:34.560
discover sky and ground, and they discover the plants and the animals, and they give everything
link |
02:56:38.240
their name. And it's a creative process that happens in every mind. Because it's not given,
link |
02:56:42.640
right? Your mind has to invent these structures to make sense of the patterns on your retina.
link |
02:56:46.880
Also, if there was some big nerd who set up a server and runs this world on it, this would not
link |
02:56:53.600
create a special relationship between us and the nerd. This nerd would not have the magical power
link |
02:56:58.080
to give meaning to our existence, right? So this equation of a creator God with the God of meaning
link |
02:57:05.200
is a sleight of hand. You shouldn't do it. The other one that is done in Catholicism is the
link |
02:57:10.320
equation of the first mover, the prime mover of Aristotle, which is basically the automaton
link |
02:57:15.360
that runs the universe. Aristotle says, if things are moving and things seem to be moving here,
link |
02:57:21.200
something must move them, right? If something moves them, something must move the thing that
link |
02:57:25.440
is moving it. So there must be a prime mover. This idea to say that this prime mover is a
link |
02:57:30.320
supernatural being is complete nonsense, right? It's an automaton in the simplest case. So we
link |
02:57:36.800
have to explain the enormity that this automaton exists at all. But again, we don't have any
link |
02:57:42.640
possibility to infer anything about its properties except that it's able to produce change in
link |
02:57:48.480
information, right? So there needs to be some kind of computational principle. This is all there is.
link |
02:57:53.680
But to say this automaton is identical again with the creator, or first cause, or with the
link |
02:57:58.160
thing that gives meaning to our life is confusion. Now, I think that what we perceive is the higher
link |
02:58:05.520
being that we are part of. And the higher being that we are part of is the civilization. It's the
link |
02:58:10.800
thing in which we have a similar relationship as the cell has to our body. And we have this prior
link |
02:58:16.800
because we have evolved to organize in these structures. So basically, the Christian God
link |
02:58:22.800
in its natural form, without the mythology, if you undress it, is basically the platonic form of
link |
02:58:27.520
the civilization. The ideal? Yes, it's this ideal that you try to approximate when you interact
link |
02:58:35.280
with others, not based on your incentives, but on what you think is right. Wow, we covered a lot
link |
02:58:42.640
of ground. And we ended with one of my favorite lines, and there are many, which is: happiness
link |
02:58:49.520
is a cookie that the brain bakes itself. It's been a huge honor and a pleasure to talk to you.
link |
02:58:58.240
I'm sure our paths will cross many times again. Joscha, thank you so much for talking today.
link |
02:59:03.920
Really appreciate it. Thank you, Lex. It was so much fun. I enjoyed it. Awesome.
link |
02:59:09.280
Thanks for listening to this conversation with Joscha Bach. And thank you to our sponsors,
link |
02:59:14.000
ExpressVPN and Cash App. Please consider supporting this podcast by getting ExpressVPN
link |
02:59:19.760
at expressvpn.com slash lexpod and downloading Cash App and using code lexpodcast. If you
link |
02:59:28.640
enjoy this thing, subscribe on YouTube, review it with five stars on Apple Podcasts, support it on
link |
02:59:34.480
Patreon, or simply connect with me on Twitter at Lex Fridman. And yes, try to figure out how to
link |
02:59:41.280
spell it without the E. And now let me leave you with some words of wisdom from Joscha Bach.
link |
02:59:48.320
If you take this as a computer game metaphor, this is the best level for humanity to play.
link |
02:59:53.680
And this best level happens to be the last level as it happens against the backdrop of a dying world.
link |
03:00:01.600
But it's still the best level. Thank you for listening and hope to see you next time.